Title:
METHOD AND SYSTEM FOR DETECTING AND CLASSIFYING MENTAL STATES
Document Type and Number:
WIPO Patent Application WO/2007/030869
Kind Code:
A1
Abstract:
A method of detecting and classifying mental states, comprising the steps of receiving bio-signals from one or more bio-signal detectors; generating multiple different representations of each bio-signal; determining the value of one or more features of each bio-signal representation; and comparing the feature values to one or more than one mental state signature, each mental state signature defining reference feature values indicative of a predetermined mental state.

Inventors:
LE TAN THI THAI (AU)
DO NAM HOAI (AU)
TORRE MARCO KENNETH DELLA (AU)
KING WILLIAM ANDREW (AU)
PHAM HAI HA (AU)
DELIC EMIR (AU)
THIE JOHNSON (AU)
ALLSOP DAVID JOHN (AU)
Application Number:
PCT/AU2006/001332
Publication Date:
March 22, 2007
Filing Date:
September 12, 2006
Assignee:
EMOTIV SYSTEMS PTY LTD (AU)
LE TAN THI THAI (AU)
DO NAM HOAI (AU)
TORRE MARCO KENNETH DELLA (AU)
KING WILLIAM ANDREW (AU)
PHAM HAI HA (AU)
DELIC EMIR (AU)
THIE JOHNSON (AU)
ALLSOP DAVID JOHN (AU)
International Classes:
A63F13/10; G06F3/00
Domestic Patent References:
WO2001007128A1 (2001-02-01)
WO1997033515A1 (1997-09-18)
WO2004037086A1 (2004-05-06)
Foreign References:
US6292688B1 (2001-09-18)
US20020188217A1 (2002-12-12)
EP1139240A2 (2001-10-04)
US6422999B1 (2002-07-23)
US20030032890A1 (2003-02-13)
US5740812A (1998-04-21)
Attorney, Agent or Firm:
PHILLIPS ORMONDE & FITZPATRICK (367 Collins Street Melbourne, Victoria 3000, AU)
Claims:

WHAT IS CLAIMED IS:

1. A method of detecting and classifying mental states, comprising the steps of: receiving bio-signals from one or more bio-signal detectors; generating multiple different representations of each bio-signal; determining the value of one or more features of each bio-signal representation; and comparing the feature values to one or more than one mental state signature, each mental state signature defining reference feature values indicative of a predetermined mental state.

2. The method according to claim 1, wherein the step of generating multiple different representations of each bio-signal comprises the step of dividing the bio-signals into different epochs.

3. The method according to claim 2, wherein the step of generating multiple different representations of each bio-signal further comprises the step of generating representations of the bio-signal epochs into one or more different domains.

4. The method according to claim 3, wherein each bio-signal epoch is divided into one or more than one of different frequency, temporal and spatial domain representations.

5. The method according to claim 4, wherein the different frequency domain representations are obtained by dividing each bio-signal epoch into distinguishable frequency bands.

6. The method according to claim 4, wherein the different temporal domain representations are obtained by dividing each bio-signal epoch into a plurality of time segments.

7. The method according to claim 6, wherein the time segments in each epoch are temporally overlapping.

8. The method according to claim 6, wherein the time segments in each epoch do not temporally overlap.

9. The method according to claim 4, wherein the different spatial domain representations are obtained by dividing each bio-signal epoch into a plurality of spatially distinguishable channels.

10. The method according to claim 9, wherein each channel is derived from a different bio-signal detector.

11. The method according to claim 1, wherein the step of determining the value of one or more features of each bio-signal representation comprises determining values of features of individual bio-signal representations.

12. The method according to claim 11, wherein one or more than one feature comprises signal power of one or more than one bio-signal representations.

13. The method according to claim 11, wherein one or more than one feature comprises signal power of one or more than one spatially distinguishable channels.

14. The method according to claim 11, wherein one or more than one feature comprises a change in signal power of one or more than one bio-signal representations.

15. The method according to claim 11, wherein one or more than one feature comprises a change in signal power of one or more than one spatially distinguishable channels.

16. The method according to claim 1, wherein the step of determining the value of one or more features of each bio-signal representation comprises determining values of features between different bio-signal representations.

17. The method according to claim 16, wherein at least coherence or correlation are detected between different bio-signal representations.

18. The method according to claim 17, wherein one or more than one feature comprises the correlation or coherence between signal power in different spatially distinguishable channels.

19. The method according to claim 17, wherein one or more than one feature comprises correlation or coherence between changes in signal power in different frequency bands.

20. The method according to claim 1, wherein the step of determining the value of one or more features of each bio-signal representation comprises applying one or more transforms to the different bio-signal representations.

21. The method according to claim 20, wherein the one or more transforms comprises any one or more of a Fourier Transform, wavelet transform or other linear or non-linear mathematical transform.

22. The method according to claim 1, wherein the step of comparing the feature values to one or more than one mental state signature comprises: using a neural network to classify whether the feature values are indicative of the presence of a predefined mental state.

23. The method according to claim 1, wherein the step of comparing the feature values to one or more than one mental state signature comprises: performing a distance measure to measure the similarity between the feature values and the reference feature values to classify whether the feature values are indicative of the presence of a predefined mental state.

24. The method according to claim 1, wherein the mental state is an emotional state.

25. The method according to claim 1, wherein the mental state results from mental focus on a task, image or other willed experience.

26. A method of creating a signature for use in a method of detecting and classifying mental states according to claim 1, comprising the steps of: eliciting a desired mental state from a user; determining the features of the bio-signal representations that most significantly indicate the presence of the desired mental state by the user; and generating the signature from a combination of those features.

27. A method according to claim 26, wherein the step of determining the features of the bio-signal representations that most significantly indicate the presence of the desired mental state by the user comprises the step of: performing any one or more of an ANOVA test, a T test, a Discriminant Function analysis, a MANOVA test, a Bonferroni analysis, False Discovery Rate analysis and Dunn-Šidák analysis on the bio-signal representation features.

28. A method according to claim 26, wherein the desired mental state is not predefined.

29. A method according to claim 26, and further comprising the step of: using feature values determined when the desired mental state is elicited from one or more users to update the signature for that mental state.

30. An apparatus for detecting and classifying mental states, comprising: a processor and associated memory device for carrying out a method according to any one of claims 1 to 29.

31. A computer program product, tangibly stored on machine readable medium, the product comprising instructions operable to cause a processor to carry out a method according to any one of claims 1 to 29.

33. A computer program product comprising instructions operable to cause a processor to carry out a method according to any one of claims 1 to 29.

Description:

METHOD AND SYSTEM FOR DETECTING AND CLASSIFYING MENTAL STATES

FIELD

The present invention relates generally to the detection and classification of the mental state of a human. The invention is suitable for use in electronic entertainment or other platforms in which electroencephalograph (EEG) data is collected and analyzed in order to determine a subject's response to stimuli, such as an emotional response, or to measure the mental state of a user when they are consciously focusing on a task, image or willed experience, in order to provide control signals to that platform. It will therefore be convenient to describe the invention in relation to that exemplary but non-limiting application.

BACKGROUND

Interactions between humans and machines are usually restricted to the use of cumbersome input devices such as keyboards, joysticks and other manually operable controls. A number of input devices have been developed to assist disabled users in providing commands without requiring the use of manually operable controls. Some of these input devices detect eyeball movement or are voice activated to minimize the physical movement required by a user to operate these input devices. A number of studies have been conducted to determine the feasibility of eliminating physical movement from control inputs by detecting the mental state of a user. Most of these studies have been conducted in the medical sphere to determine the responsiveness of patients to external stimuli in situations where those patients are unable to otherwise communicate with medical staff.

To date, though, attempts to detect the mental state of a user have been rudimentary and are unsuited to use in complex environments, such as contemporary software-based gaming or like platforms.

SUMMARY

It would be desirable to provide a method and system for detecting and classifying a range of mental states in a manner that was suitable for use in a variety of contemporary applications. It would also be desirable for that system and method to be adaptable to suit a number of applications, without requiring the use of significant data processing resources.

It would also be desirable for the method and system for detecting and classifying mental states to be suitable for use in real-time applications, with a minimum of time being required to train and develop a usable interactive system. It would also be desirable to provide a method and system for detecting and classifying mental states that ameliorate or overcome one or more disadvantages of known detection and classification methods and systems. There also exists a need to provide technology that simplifies man-machine interactions. It would be preferable for this technology to be robust, powerful and adaptable to a number of platforms and environments. It would also be desirable for the technology to optimize the use of natural human interaction techniques so that the man-machine interaction is as natural as possible for a human user.

With that in mind, one aspect of the present invention provides a method of detecting and classifying mental states. The method comprises the steps of receiving bio-signals from one or more bio-signal detectors; generating multiple different representations of each bio-signal; determining the value of one or more features of each bio-signal representation; and comparing the feature values to one or more than one mental state signature, each mental state signature defining reference feature values indicative of a predetermined mental state.

The step of generating multiple different representations of each bio-signal may comprise the step of dividing the bio-signals into different epochs. Preferably, the step of generating multiple different representations of each bio-signal further comprises the step of generating representations of the bio-signal epochs into one or more different domains. Each bio-signal epoch may be divided into one or more than one of different frequency, temporal and spatial domain representations.

The different frequency domain representations may be obtained by dividing each bio-signal epoch into distinguishable frequency bands. The different temporal domain representations may be obtained by dividing each bio-signal epoch into a plurality of time segments. In one or more embodiments, the time segments in each epoch temporally overlap, but in other embodiments the time segments in each epoch do not temporally overlap.

The different spatial domain representations may be obtained by dividing each bio-signal epoch into a plurality of spatially distinguishable channels. Each channel may be derived from a different bio-signal detector.

The step of determining the value of one or more features of each bio-signal representation may comprise determining values of features of individual bio-signal representations. The feature may comprise, for example, signal power of one or more than one bio-signal representations, signal power of one or more than one spatially distinguishable channels, a change in signal power of one or more than one bio-signal representations or a change in signal power of one or more than one spatially distinguishable channels.

The step of determining the value of one or more features of each bio-signal representation may comprise determining values of features between different bio-signal representations. At least coherence or correlation may be detected between different bio-signal representations. One or more than one feature may comprise the correlation or coherence between signal power in different spatially distinguishable channels. One or more than one feature may comprise correlation or coherence between changes in signal power in different frequency bands.

The step of determining the value of one or more features of each bio-signal representation may comprise applying one or more transforms to the different bio-signal representations, such as a Fourier Transform, wavelet transform or other linear or non-linear mathematical transform. The step of comparing the feature values to one or more than one mental state signature may comprise using a neural network to classify whether the measured feature values are indicative of the presence of a predefined mental state.

The step of comparing the feature values to one or more than one mental state signature may comprise performing a distance measure to measure the similarity between the measured feature values and the reference feature values, to classify whether the measured feature values are indicative of the presence of a predefined mental state.

The mental state may be an emotional state, but may also result from mental focus on a task, image or other willed experience.

Another aspect of the invention provides a method of creating a signature for use in a method of detecting and classifying mental states as described above. The method may comprise the steps of eliciting a desired mental state from a user; determining the features of the bio-signal representations that most significantly indicate the presence of the desired mental state by the user; and generating the signature from a combination of those features. The desired mental state need not be predefined.

The method may further include the step of using feature values that are determined when the desired mental state is elicited from the user to update the signature for that mental state.

Yet another aspect of the invention provides an apparatus for detecting and classifying mental states, comprising a processor and associated memory device for carrying out a method as described above. A further aspect of the invention provides a computer program product, tangibly stored on machine readable medium, the product comprising instructions operable to cause a processor to carry out a method as described above.

A still further aspect of the invention provides a computer program product comprising instructions operable to cause a processor to carry out a method as described above.

FIGURES

These and other features, aspects and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying figures which depict various views and embodiments of the device, and some of the steps in certain embodiments of the method of the present invention, where:

Figure 1 is a schematic diagram of an apparatus for detecting and classifying mental states in accordance with the present invention;

Figure 2 is a schematic diagram illustrating the position of bio-signal detectors in the form of scalp electrodes forming part of a headset used in the apparatus shown in Figure 1;

Figures 3 and 4 are flow charts illustrating the broad functional steps performed during detection and classification of mental states by the apparatus shown in Figure 1; and

Figure 5 is a graphical representation of bio-signals processed by the apparatus of Figure 1 and the transformation of those bio-signals.

DESCRIPTION

Turning now to Figure 1, there is shown an apparatus 100 for detecting and classifying mental states. The mental states can be deliberative or non-deliberative. In general, deliberative mental states occur when a subject consciously focuses on a task, image or willed experience. In contrast, non-deliberative mental states are mental states, such as emotions, preferences, or sensations, which lack the subjective quality of a volitional act.

The apparatus 100 includes a headset 102 of bio-signal detectors capable of detecting various bio-signals from a subject, particularly electrical signals produced by the body, such as electroencephalograph (EEG) signals, electrooculograph (EOG) signals, electromyograph (EMG) signals, or like signals. The apparatus 100 is capable of detecting at least some mental states (both deliberative and non-deliberative) using solely electrical signals, particularly EEG signals, from the subject, and without direct measurement of other physiological processes, such as heart rate, blood pressure, respiration or galvanic skin response, as would be obtained by a heart rate monitor, blood pressure monitor, and the like. In addition, the mental states that can be detected and classified are more specific than the gross correlation of brain activity of a subject, e.g., as being awake or in a type of sleep (such as REM or a stage of non-REM sleep), conventionally measured using EEG signals. For example, specific emotions, such as excitement, or specific willed tasks, such as a command to push or pull an object, can be detected.

In the exemplary embodiment illustrated in the drawings, the headset 102 includes a series of scalp electrodes for capturing EEG signals from a subject or user. It should be noted, however, that the EEG signals measured and used by the apparatus 100 can include signals outside the frequency ranges of theta, alpha and beta waves (4-30 Hz) that are commonly analysed in research systems. The scalp electrodes may directly contact the scalp or alternately may be of the non-contact type that does not require direct placement on the scalp. Unlike systems that provide high-resolution 3-D brain scans, e.g., MRI or CAT scans, the headset is generally portable and non-constraining.

The electrical fluctuations detected over the scalp by the series of scalp sensors are attributed largely to the activity of brain tissue located at or near the skull. The source is the electrical activity of the cerebral cortex, a significant portion of which lies on the outer surface of the brain below the scalp. The scalp electrodes pick up electrical signals naturally produced by the brain and make it possible to observe electrical impulses across the surface of the brain. Although in this exemplary embodiment the headset 102 includes several scalp electrodes, in other embodiments only one or more scalp electrodes, e.g., sixteen electrodes, may be used in the headset.

Each of the signals detected by the headset 102 of electrodes is fed through a sensor interface 104, which can include an amplifier to boost signal strength and a filter to remove noise, and is then digitized by the analogue to digital converter 106. Digitized samples of the signal captured by each of the scalp sensors are stored during operation of the apparatus 100 in a data buffer 108 for subsequent processing.

The apparatus 100 further includes a processing system 109 including a digital signal processor 112, a co-processing device 110 and an associated memory device for storing a series of instructions (otherwise known as a computer program or computer control logic) to cause the processing system 109 to perform desired functional steps. Notably, the memory includes a series of instructions defining at least one algorithm 114 to be performed by the digital signal processor 112 for detecting and classifying a predetermined type of mental state. Upon detection of each predefined type of mental state, a corresponding control signal is transmitted in this exemplary embodiment to an input/output interface 116 for transmission via a wireless transmission device 118 to a platform 120 for use as a control input by electronic entertainment applications, programs, simulators or the like.

As well as enabling the classification and detection of mental states, the apparatus 100 also enables the generation of signatures for mental states. This can be important since some signatures can define a mental state that can be used across a population. These signatures are then used by the processing system 109 for classification and detection of the mental state for users other than the subject from whom the signatures were generated.

In one embodiment, the algorithms are implemented in software and the series of instructions is stored in the memory of the processing system, e.g., in the memory of the processing system 109. The series of instructions causes the processing system 109 to perform the functions of the invention as described herein. Prior to being loaded into the memory, the instructions can be tangibly embodied in a machine readable storage device, such as a computer disk or memory card, or in a propagated signal. In another embodiment, the algorithms are implemented primarily in hardware using, for example, hardware components such as application specific integrated circuits (ASICs). Implementation of the hardware state machine so as to perform the functions described herein will be apparent to persons skilled in the relevant art. In yet another embodiment, the algorithms are implemented using a combination of software and hardware. Other implementations of the apparatus 100 are possible. Instead of a digital signal processor, an FPGA (field programmable gate array) could be used. Rather than a separate digital signal processor and co-processor, the processing functions could be performed by a single processor. The buffer 108 could be eliminated or replaced by a multiplexer (MUX), and the data stored directly in the memory of the processing system. A MUX could be placed before the A/D converter stage so that only a single A/D converter is needed. The connection between the apparatus 100 and the platform 120 can be wired rather than wireless.

Although the apparatus 100 is illustrated in Figure 1 with all processing functions occurring in a single device that is external to the platform, other implementations are possible. For example, the apparatus can include a headset assembly that includes the headset, a MUX, A/D converter(s) before or after the MUX, a wireless transmission device, a battery for power supply, and a microcontroller to control battery use, send data from the MUX or A/D converter to the wireless chip, and the like. The apparatus can also include a separate processor unit that includes a wireless receiver to receive data from the headset assembly, and the processing system, e.g., the digital signal processor and the co-processor. The processor unit can be connected to the platform by a wired or wireless connection. As another example, the apparatus can include a headset assembly as described above, the platform can include a wireless receiver to receive data from the headset assembly, and a digital signal processor dedicated to detection of mental states can be integrated directly into the platform. As yet another example, the apparatus can include a headset assembly as described above, the platform can include a wireless receiver to receive data from the headset assembly, and the mental state detection algorithms are performed in the platform by the same processor, e.g., a general purpose digital processor, that executes the application, programs, simulators or the like.

Figure 2 illustrates one example of a positioning system 200 of the scalp electrodes forming part of the headset 102. The system 200 of electrode placement shown in Figure 2 is referred to as the "10-20" system and is based on the relationship between the location of an electrode and the underlying area of cerebral cortex. Each point on the electrode placement system 200 indicates a possible scalp electrode position. Each site is indicated by a letter to identify the lobe and a number or other letter to identify the hemisphere location. The letters F, T, C, P, and O stand for Frontal, Temporal, Central, Parietal and Occipital. Even numbers refer to the right hemisphere and odd numbers refer to the left hemisphere. The letter Z refers to an electrode placed on the midline. The midline is a line along the scalp on the sagittal plane originating at the nasion and ending at the inion at the back of the head.

The "10" and "20' refer to percentages of the mid-line division. The midline is divided into 7 positions, namely, Nasion, Fpz, Fz, Cz, Pz, Oz and Inion, and the angular intervals between adjacent positions are 10%, 20%, 20%, 20%, 20% and 10% of the mid-line length respectively.

In operation, the headset 102, including scalp electrodes positioned according to the system 200, are placed on the head of a subject in order to detect EEG signals. Figure 3 shows a series of steps carried out by the apparatus 100 during the capture of those EEG signals and subsequent data preparation operations carried out by the processing system 109.

At step 300, the EEG signals are captured and then digitised using the analogue to digital converters 106. The data samples are stored in the data buffer 108. The EEG signals detected by the headset 102 may have a range of characteristics, but for the purposes of illustration typical characteristics are as

follows: Amplitude 10 - 4000 /λ/, Frequency Range 0.16 - 256 Hz and Sampling Rate 128 - 2048 Hz.

At step 302, the data samples are conditioned for subsequent analysis. Sources of possible noise that are desired to be eliminated from the data samples include external interference introduced in signal collection, storage and retrieval. For EEG signals, examples of external interference include power line signals at 50/60 Hz and high frequency noise originating from switching circuits residing in the EEG acquisition hardware. A typical operation carried out during this conditioning step is the removal of baselines via high pass filters. Additional checks are performed to ensure that data samples are not collected when a poor quality signal is detected from the headset 102. Signal quality information can be fed back to a user to help them to take corrective action.
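
A minimal sketch of this kind of conditioning, assuming scipy is available; the filter orders, the 50 Hz notch and the sampling rate used here are illustrative choices rather than values specified above (only the 0.16 Hz baseline corner is taken from the text).

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def condition_eeg(samples, fs=256.0):
    """Remove baseline drift and mains interference from one EEG channel.

    samples: 1-D array of raw samples; fs: sampling rate in Hz.
    Filter parameters below are illustrative assumptions.
    """
    # High-pass filter to remove the DC baseline (0.16 Hz corner, as in the text).
    b_hp, a_hp = butter(2, 0.16 / (fs / 2.0), btype="highpass")
    cleaned = filtfilt(b_hp, a_hp, samples)

    # Notch filter to suppress 50 Hz power-line interference (60 Hz where applicable).
    b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)
    return filtfilt(b_notch, a_notch, cleaned)

# Example: condition one second of synthetic data with a DC offset.
rng = np.random.default_rng(0)
raw = rng.normal(size=256) + 5.0
print(condition_eeg(raw).mean())  # baseline largely removed
```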

An artefact removal step 304 is then carried out to remove signal interference. EEG signals consist, in this example, of measurements of the electrical potential at numerous locations on a user's scalp. These signals can be represented as a set of observations $x_n$ of some "signal sources" $s_m$, where $n \in [1:N]$, $m \in [1:M]$, $n$ is the channel index, $N$ is the number of channels, $m$ is the source index and $M$ is the number of sources. If there exists a set of transfer functions $F$ and $G$ that describe the relationship between $s_m$ and $x_n$, one can then identify with a certain level of confidence which sources or components have a distinct impact on the observation $x_n$ and their characteristics. Different techniques such as Independent Component Analysis (ICA) are applied by the apparatus 100 to find the components with the greatest impact on the amplitude of $x_n$. These components often result from interference such as power line noise, signal drop-outs, and muscle, eye blink, and eye movement artefacts.
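
One possible realisation of the ICA-based artefact identification described above is sketched below using scikit-learn's FastICA; the kurtosis heuristic for flagging artefact components and its threshold are illustrative assumptions, not the criterion used by the apparatus 100.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def remove_artefact_components(eeg, kurtosis_threshold=5.0):
    """Decompose multi-channel EEG into independent components and zero out
    components with very high kurtosis (a common heuristic for eye-blink and
    similar transient artefacts). eeg has shape (samples, channels).
    The threshold is an illustrative assumption."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)              # estimated sources s_m
    flagged = kurtosis(sources, axis=0) > kurtosis_threshold
    sources[:, flagged] = 0.0                     # suppress artefact sources
    return ica.inverse_transform(sources)         # back to channel space x_n

# Example with synthetic data: 4 channels, 512 samples, one spiky "blink" channel.
rng = np.random.default_rng(1)
eeg = rng.normal(size=(512, 4))
eeg[:, 2] += np.where(np.arange(512) % 128 == 0, 50.0, 0.0)
print(remove_artefact_components(eeg).shape)      # (512, 4)
```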

The EEG signals are converted, in steps 306, 308 and 310, into different representations that facilitate the detection and classification of the mental state of a user of the headset 102.

The data samples are first divided into equal length time segments within epochs, at step 306. While in the exemplary embodiment illustrated in Figure 5 there are seven time segments of equal duration within the epoch, in another embodiment the number and length of the time segments may be altered. Furthermore, in another embodiment, time segments may not be of equal duration and may or may not overlap within an epoch. The length of each epoch can vary dynamically depending on events in the detection system such as artefact removal or signature updating. However, in general, an epoch is selected to be sufficiently long that a change in mental state, if one occurs, can be reliably detected. Figure 5 is a graphical illustration of EEG signals detected from the 32 electrodes in the headset 102. Three epochs 500, 502 and 504 are shown, each with 2 seconds before and 2 seconds after the onset of a change in the mental state of a user. In general, the baseline before the event is limited to 2 seconds, whereas the portion after the event (the EEG signal containing the emotional response) varies, depending on the current emotion that is being detected. The processing system 109 divides the epochs 500, 502 and 504 into time segments. In the example shown in Figure 5, the epoch 500 is divided into 1 second long segments 506 to 518, each of which overlaps the next by half a second. A 4 second long epoch would then yield 7 segments.
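
The segmentation scheme just described (a 4 second epoch, 1 second segments, half-second overlap, seven segments) can be expressed as a short sketch; the 128 Hz sampling rate is an assumption taken from the low end of the range quoted earlier.

```python
import numpy as np

def segment_epoch(epoch, fs=128, seg_len_s=1.0, overlap_s=0.5):
    """Split one epoch (samples for a single channel) into overlapping time
    segments. A 4 s epoch with 1 s segments and 0.5 s overlap yields the
    seven segments described in the text."""
    seg_len = int(seg_len_s * fs)
    step = int((seg_len_s - overlap_s) * fs)
    return [epoch[start:start + seg_len]
            for start in range(0, len(epoch) - seg_len + 1, step)]

epoch = np.arange(4 * 128)       # a 4-second epoch at 128 Hz
print(len(segment_epoch(epoch))) # 7
```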

The processing system 109 then acts in steps 308 and 310 to transform the EEG signal into the different representations so that the value of one or more features of each EEG signal representation can be calculated and collated at step 312. For example, for each time segment and each channel, the EEG signal can be converted from the time domain (signal intensity as a function of time) into the frequency domain (signal intensity as a function of frequency). In an exemplary embodiment, the EEG signals are band-passed (during transform to frequency domain) with low and high cut-off frequencies of 0.16 and 256 Hz, respectively.

As another example, the EEG signal can be converted into a differential domain (marginal changes in signal intensity as a function of time). The frequency domain can also be converted into a differential domain (marginal changes in signal intensity as a function of frequency), although this may require comparison of frequency spectrums from different time segments.

In step 312 the value of one or more features of each EEG signal representation can be calculated (or collected from previous steps if the transform generated scalar values), and the various values assembled to provide a multi-dimensional representation of the mental state of the subject. In addition to values calculated from transformed representations of the EEG signal, some values could be calculated from the original EEG signals.

As an example of the calculation of the value of a feature, in the frequency domain the aggregate signal power in each of a plurality of frequency bands can be calculated. In an exemplary embodiment described herein, seven frequency bands are used with the following frequency ranges: δ (2-4 Hz), θ (4-8 Hz), α1 (8-10 Hz), α2 (10-13 Hz), β1 (13-20 Hz), β2 (20-30 Hz) and γ (30-45 Hz). The signal power in each of these frequency bands is calculated. In addition, the signal power can be calculated for various combinations of channels or bands. For example, the total signal power for each spatial channel (each electrode) across all frequency bands could be determined, or the total signal power for a given frequency band across all channels could be determined.

In other embodiments of the invention, both the number of and ranges of the frequency bands may be different to the exemplary embodiment depending notably on the particular application or detection method employed. In addition, the frequency bands could overlap. Furthermore, features other than aggregate signal power, such as the real component, phase, peak frequency, or average frequency, could be calculated from the frequency domain representation for each frequency band.
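
A minimal sketch of the aggregate band power calculation for one segment of one channel, using the band edges quoted above; the simple FFT-based power estimate and the 128 Hz sampling rate are implementation assumptions.

```python
import numpy as np

# Frequency bands quoted in the description (Hz).
BANDS = {"delta": (2, 4), "theta": (4, 8), "alpha1": (8, 10),
         "alpha2": (10, 13), "beta1": (13, 20), "beta2": (20, 30),
         "gamma": (30, 45)}

def band_powers(segment, fs=128):
    """Aggregate signal power per frequency band for one time segment of
    one channel, using a simple FFT-based power spectrum."""
    spectrum = np.fft.rfft(segment)
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    power = np.abs(spectrum) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Example: a 10 Hz sine concentrates its power in the alpha2 band.
t = np.arange(0, 1.0, 1.0 / 128)
print(band_powers(np.sin(2 * np.pi * 10 * t)))
```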

In this exemplary embodiment, the signal representations are in the time, frequency and spatial domains. The multiple different representations can be denoted as $x^n_{ijk}$, where $n$, $i$, $j$, $k$ are the epoch, channel, frequency band and segment index, respectively. Typical values for these parameters are $i \in [1:32]$ for the 32 spatially distinguishable channels (referenced Fp1 to CPz) and $j \in [1:7]$ for the 7 distinguishable frequency bands (referenced δ to γ). The operations carried out in steps 310-312 often produce a large number of state variables. For example, calculating correlation values for 2 four-second long epochs consisting of 32 channels, using 7 frequency bands, gives more than 1 million state variables:

$${}^{32}C_2 \times 7^2 \times 7^2 = 1\,190\,896$$

Since individual EEG signals and combinations of EEG signals from different sensors can be used, as well as a wide range of features from a variety of different transform domains, the number of dimensions to be analysed by the processing system 109 is extremely large. This huge number of dimensions enables the processing system 109 to detect a wide range of mental states, since the entire cortex, or a significant portion of it, and a full range of features are considered in detecting and classifying a mental state.
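
The state variable count quoted above can be verified directly; a short check, assuming only Python's math module.

```python
from math import comb

# Pairs of channels, times (band, band) pairs, times (segment, segment) pairs
# for a 7-band, 7-segment epoch, as in the expression above.
print(comb(32, 2) * 7**2 * 7**2)  # 1190896
```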

Other common features to be calculated by the processing system 109 at step 312 include the signal power in each channel, the marginal changes of the power in each frequency band in each channel, the correlations/coherence between different channels, and the correlations between the marginal changes of the powers in each frequency band. The choice between these properties depends on the types of mental state that it is desired to distinguish. In general, marginal properties are more important in the case of a short-term emotional burst, whereas for a longer-term mental state the other properties are more significant.

A variety of techniques can be used to transform the EEG signal into the different representations and to measure the value of the various features of the EEG signal representations. For example, traditional frequency decomposition techniques, such as the Fast Fourier Transform (FFT) and band-pass filtering, can be carried out by the processing system 109 at step 308, whilst measures of signal coherence and correlation can be carried out at step 310 (in this latter case, the coherence or correlation values can be collated in step 312 to become part of the multi-dimensional representation of the mental state). Assuming that the correlations/coherence is calculated between different channels, this could also be considered a domain, e.g., a spatial coherence/correlation domain (coherence/correlation as a function of electrode pairs). In other embodiments, a wavelet transform, dynamical systems analysis or other linear or non-linear mathematical transform may be used in step 310. The FFT is an efficient algorithm for the discrete Fourier transform which reduces the number of computations needed for N data points from $2N^2$ to $2N \log_2 N$. Passing a data channel in the time domain through an FFT generates a description for that data segment in the complex frequency domain.

Coherence is a measure of the amount of association or coupling between two different time series. A coherence computation can be carried out between two channels a and b in frequency band $\omega_n$, where the Fourier components of channels a and b at frequency $f_\mu$ are $x_{a\mu}$ and $x_{b\mu}$:

$$\mathrm{Coh}_{ab}(\omega_n) = \frac{\left|\sum_{f_\mu \in \omega_n} x_{a\mu}\, x_{b\mu}^{*}\right|^{2}}{\sum_{f_\mu \in \omega_n} \left|x_{a\mu}\right|^{2} \sum_{f_\mu \in \omega_n} \left|x_{b\mu}\right|^{2}}$$

Correlation is an alternative to coherence for measuring the amount of association or coupling between two different time series. Under the same assumptions as for coherence above, a correlation $r_{ab}$ can be computed between the signals of two channels $x_a(t_i)$ and $x_b(t_i)$, defined as

$$r_{ab} = \frac{\sum_{i} (x_{ai} - \bar{x}_a)(x_{bi} - \bar{x}_b)}{\sqrt{\sum_{i} (x_{ai} - \bar{x}_a)^{2} \sum_{i} (x_{bi} - \bar{x}_b)^{2}}}$$

where $x_{ai}$ and $x_{bi}$ have already had common band-pass filtering applied to them.
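
A sketch of both measures on two synthetic channels, assuming numpy and scipy; scipy's Welch-based coherence estimator is one possible realisation of the band coherence defined above rather than the exact estimator used by the apparatus.

```python
import numpy as np
from scipy.signal import coherence

fs = 128
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(2)
x_a = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
x_b = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.normal(size=t.size)

# Correlation (Pearson r) between the two channel signals.
r_ab = np.corrcoef(x_a, x_b)[0, 1]

# Magnitude-squared coherence, averaged over the alpha2 band (10-13 Hz).
freqs, coh = coherence(x_a, x_b, fs=fs, nperseg=128)
alpha2 = coh[(freqs >= 10) & (freqs < 13)].mean()
print(f"r_ab = {r_ab:.2f}, alpha2 coherence = {alpha2:.2f}")
```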

Figure 4 shows the various data processing operations, preferably carried out in real time, that are then performed by the processing system 109. At step 400, the calculated values of one or more features of each signal representation are compared to one or more mental state signatures stored in the memory of the processing system 109 to classify the mental state of the user. Each mental state signature defines reference feature values that are indicative of a predetermined mental state.

A number of techniques can be used by the processing device 109 to match the pattern of the calculated feature values to the mental state signatures. A multi-layer perceptron neural network can be used to classify whether a signal representation is indicative of a mental state corresponding to a stored signature. The processing system 109 can use a standard perceptron with $n$ inputs, one or more hidden layers of $m$ hidden nodes and an output layer with $l$ output nodes. The number of output nodes is determined by how many independent mental states the processing system is trying to recognize. Alternately, the number of networks used may be varied according to the number of mental states being detected. The output vector of the neural network can be expressed as

$$Y = F_2\big(W_2 \cdot F_1(W_1 \cdot X)\big)$$

where $W_1$ is an $m$ by $(n+1)$ weight matrix, $W_2$ is an $l$ by $(m+1)$ weight matrix (the additional column in the weight matrices allows for a bias term to be added) and $X = (x_1, x_2, \ldots, x_n)$ is the input vector. $F_1$ and $F_2$ are the activation functions that act on the components of the column vectors separately to produce another column vector, and $Y$ is the output vector. The activation function determines how the node is activated by the inputs. The processing system 109 uses a sigmoid function. Other possibilities are a hyperbolic tangent function or even a linear function. The weight matrices can be determined either recursively or all at once.

Distance measures for determining the similarity of an unknown sample set to a known one can be used as an alternative technique to the neural network. Distances such as the modified Mahalanobis distance, the standardised Euclidean distance and a projection distance can be used to determine the similarity between the calculated feature values and the reference feature values defined by the various mental state signatures, to thereby indicate how well a user's mental state reflects each of those signatures.
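
The perceptron output expression above can be written out directly in numpy; the layer sizes, random weights and sigmoid activation below are placeholders chosen for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_output(x, w1, w2):
    """Compute Y = F2(W2 . F1(W1 . X)) for a single-hidden-layer perceptron.
    A bias term is appended to the input of each layer, matching the extra
    column in the weight matrices described above."""
    x = np.append(x, 1.0)                 # input plus bias -> length n+1
    hidden = sigmoid(w1 @ x)              # F1(W1 . X), length m
    hidden = np.append(hidden, 1.0)       # hidden plus bias -> length m+1
    return sigmoid(w2 @ hidden)           # F2(W2 . F1(...)), length l

# Illustrative sizes: n = 5 features, m = 4 hidden nodes, l = 2 mental states.
rng = np.random.default_rng(3)
w1 = rng.normal(size=(4, 6))              # m x (n+1)
w2 = rng.normal(size=(2, 5))              # l x (m+1)
print(mlp_output(rng.normal(size=5), w1, w2))
```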

The mental state signatures and weights can be predefined. For example, for some mental states, signatures are sufficiently uniform across a human population that once a particular signature is developed (e.g., by deliberately evoking the mental state in test subjects and measuring the resulting signature), this signature can be loaded into the memory and used without calibration by a particular user. On the other hand, for some mental states, signatures are sufficiently non-uniform across the human population that predefined signatures cannot be used or can be used only with limited satisfaction by the subject. In such a case, signatures (and weights) can be generated by the apparatus 100, as discussed below, for the particular user (e.g., by requesting that the user make a willed effort for some result, and measuring the resulting signature). Of course, for some mental states the accuracy of a signature and/or weights that were predetermined from test subjects can be improved by calibration for a particular user. For example, to calibrate the subjective intensity of a non-deliberative mental state for a particular user, the user could be exposed to a stimulus that is expected to produce a particular mental state, and the resulting bio-signals compared to a predefined signature. The user can be queried regarding the strength of the mental state, and the resulting feedback from the user applied to adjust the weights. Alternatively, calibration could be performed by a statistical analysis of the range of stored multi-dimensional representations. To calibrate a deliberative mental state, the user can be requested to make a willed effort for some result, and the multi-dimensional representation of the resulting mental state can be used to adjust the signature or weights.

The apparatus 100 can also be adapted to generate and update signatures indicative of a user's various mental states. At step 402, data samples of the multiple different representations of the EEG signals generated in steps 300 to 310 are saved by the processing system 109 in memory, preferably for all users of the apparatus 100. An evolving database of data samples is thus created which allows the processing device 109 to progressively improve the accuracy of mental state detection for one or more users of the apparatus 100.

At step 404, one or more statistical techniques are applied to determine how significant each of the features is in characterising different mental states. Different coordinates are given a rating based on how well they differentiate. The techniques implemented by the processing system 109 use a hypothesis testing procedure to highlight regions of the brain, or brainwave frequencies from the EEG signals, which activate during different mental states. At a simplistic level, this approach typically involves determining whether some averaged (mean) power value for a representation of the EEG signal differs from another, given a set of data samples from a defined time period. Such a "mean difference" test is performed by the processing system 109 for every signal representation. Preferably, the processing system 109 implements an Analysis of Variance (ANOVA) F ratio test to search for differences in activation, combined with a paired Student's T test. The T test is functionally equivalent to the one-way ANOVA test for two groups, but also allows a measure of the direction of the mean difference to be analysed (i.e. whether the mean value for mental state 1 is larger than the mean value for mental state 2, or vice versa). The general formula for the Student's T test is:

$$t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}$$

where $\bar{x}$, $s^2$ and $n$ denote the mean, the variance and the number of samples for mental state 1 and mental state 2, respectively.

The "n" which makes the denominator in the lower half of the T equation is the number of time series recorded for a particular mental state which make up the means being contrasted in the numerator, (i.e. the number of overlapping or non-overlapping epochs recorded during an update. The subsequent t value is used in a variety of ways by the processing system 109, including the rating of the feature space dimensions to determine the significance level of the many thousands of features that are typically analysed. Features may be weighted on a linear or non-linear scale, or in a binary fashion by removing those features which do not meet a certain level of significance.

The range of t values that will be generated from the many thousands of hypothesis tests during a signature update can be used to give an overall indication to the user of how far separated the detected mental states are during that update. The t value is an indication of that particular mean separation for the two actions, and the range of t values across all coordinates provides a metric for how well, on average, all of the coordinates separate.

The above-mentioned techniques are termed univariate approaches, as the processing system 109 performs the analysis on one individual coordinate at a time and makes feature selection decisions based on those individual t test or ANOVA test results. Corrections may be made at step 406 to adjust for the increased chance of probability error due to the use of the mass univariate approach. Statistical techniques suitable for this purpose include the following multiplicity correction methods: Bonferroni, False Discovery Rate and Dunn-Šidák. An alternative approach is for the processing system 109 to analyse all coordinates together in a mass multivariate hypothesis test, which would account for any potential covariation between coordinates. The processing system 109 can therefore employ such techniques as Discriminant Function Analysis and Multivariate Analysis of Variance (MANOVA), which not only provide a means to select the feature space in a multivariate manner, but also allow the use of eigenvalues created during the analysis to classify unknown signal representations in a real-time environment.
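
A minimal sketch of two of the multiplicity corrections named above (Bonferroni and a Benjamini-Hochberg style False Discovery Rate procedure), using only numpy; the significance level and p-values are illustrative.

```python
import numpy as np

def bonferroni(p_vals, alpha=0.05):
    """Reject a hypothesis only if its p-value clears alpha divided by the
    number of tests performed."""
    return p_vals < alpha / len(p_vals)

def fdr_bh(p_vals, alpha=0.05):
    """Benjamini-Hochberg procedure: find the largest k with
    p_(k) <= k * alpha / m and reject the k smallest p-values."""
    m = len(p_vals)
    order = np.argsort(p_vals)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = p_vals[order] <= thresholds
    keep = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()       # largest index meeting the bound
        keep[order[:k + 1]] = True
    return keep

p = np.array([0.001, 0.009, 0.012, 0.041, 0.20, 0.74])
print(bonferroni(p))  # only the very smallest p-value survives
print(fdr_bh(p))      # FDR is less conservative, keeping the first three
```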

At step 408, the processing system 109 prepares for classifying incoming real-time data by weighting the coordinates so that those with the greatest significance in detecting a particular mental state are given precedence. This can be carried out by applying adaptive weight preparation, neural network training or statistical weightings.

The signatures stored in the memory of the processing system 109 are updated or calibrated at step 410. The updating process involves taking data samples, which are added to the evolving database. This data is elicited for the detection of a particular mental state. For example, to update a willed effort mental state, a user is prompted to focus on that willed effort, and signal data samples are added to the database and used by the processing system 109 to modify the signature for that detection. When a signature exists, detections can provide feedback for updating the signatures that define that detection. For example, if a user wants to improve their signature for willing an object to be pushed away, the existing detection can be used to provide feedback as the signature is updated. In that scenario, the user sees the detection improving, which provides reinforcement to the updating process.

At step 412, a supervised learning algorithm dynamically takes the update data from step 410 and combines it with the evolving database of recorded data samples to improve the signatures for the mental state that has been updated. Signatures may initially be empty or be prepared using historical data from other users which may have been combined to form a reference or universal starting signature.

At step 414, the signature for the mental state that has been updated is made available for mental state classification (at step 400) as well as signature feedback rating at step 416. As a user develops a signature for a given mental state, a rating is available in real-time which reflects how the mental state detection is progressing. The apparatus 100 can therefore provide feedback to a user to enable them to observe the evolution of a signature over time.

The discussion above has focused on determination of the presence or absence of a particular mental state. However, it is also possible to determine the intensity of that particular mental state. The intensity can be determined by measuring the "distance" of the transformed signal from the user to a signature. The greater the distance, the lower the intensity. To calibrate this distance against the subjective intensity experienced by the user on an intensity scale, the user can be queried regarding the strength of the mental state. The resulting feedback from the user is applied to adjust the weights to calibrate the distance to the intensity scale.
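
The distance-to-intensity idea can be sketched as follows; the weighted Euclidean distance, the exponential mapping to a 0-1 scale and the `scale` calibration parameter are illustrative assumptions rather than the specific measure used by the apparatus 100.

```python
import numpy as np

def intensity(feature_values, signature, weights, scale=1.0):
    """Map the weighted distance between the current feature values and a
    mental state signature to an intensity in [0, 1]; smaller distances mean
    higher intensity. `scale` is the calibration parameter adjusted from user
    feedback about how strong the mental state felt."""
    distance = np.sqrt(np.sum(weights * (feature_values - signature) ** 2))
    return float(np.exp(-distance / scale))

signature = np.array([0.8, 0.1, 0.5])
weights = np.array([1.0, 0.2, 0.6])        # per-feature significance weights
print(intensity(np.array([0.7, 0.3, 0.5]), signature, weights))      # close -> high
print(intensity(np.array([0.1, 0.9, 0.0]), signature, weights, 2.0)) # far -> lower
```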

It will be appreciated from the foregoing that the apparatus 100 advantageously enables the online creation of signatures in near real-time. The detection of a user's mental state and creation of a signature can be achieved in a few minutes, and then refined over time as the user's signature for that mental state is updated. This can be very important in interactive applications, where a short term result is important as well as incremental improvement over time.

It will also be appreciated from the foregoing that the apparatus 100 advantageously enables the detection of a mental state having a pregenerated signature (whether predefined or created for the particular user) in real time. Thus, the detection of the presence or absence of a user's particular mental state, or the intensity of that particular mental state, can be achieved in real time. Moreover, signatures can be created for mental states that need not be predefined. The apparatus 100 can classify any mental state for which a signature has been recorded, not just mental states that are predefined and elicited via pre-defined stimuli.

Each and every human brain is subtly different. While macroscopic structures such as the main gyri (ridges) and sulci (depressions) are common, it is only at the largest scale of morphology that these generalizations can be made. The intricately detailed folding of the cortex is as individual as fingerprints. This variation in folding causes different parts of the brain to be near the skull in different individuals.

For this reason the electrical impulses, when measured in combination on the scalp, differ between individuals. This means that the EEG recorded on the scalp must be interpreted differently from person to person. Historically, systems that aim to provide an individual with a means of control via EEG measurement have required extensive training, often of the system used and always by the user. The mental state detection system described herein can utilize a huge number of feature dimensions which cover many spatial areas, frequency ranges and other dimensions. In creating and updating a signature, the system ranks features by their ability to distinguish a particular mental state, thus highlighting those features that are better able to capture the brain's activity in a given mental state. The features selected for the user reflect characteristics of the electrical signals measured on the scalp that are able to distinguish a particular mental state, and therefore reflect how the signals in their particular cortex are manifested on the scalp. In short, the user's individual electrical signals that indicate a particular mental state have been identified and stored in a signature. This permits real-time mental state detection, or signature generation, within minutes, through algorithms which compensate for the individuality of EEG.

It is to be understood that various modifications and/or additions may be made to the method and system for detecting and classifying a mental state without departing from the spirit or ambit of the present invention as defined in the claims appended hereto.