

Title:
PORTABLE AUDITORY APPLIANCE WITH MOOD SENSOR AND METHOD FOR PROVIDING AN INDIVIDUAL WITH SIGNALS TO BE AUDITORILY PERCEIVED BY SAID INDIVIDUAL
Document Type and Number:
WIPO Patent Application WO/2012/072141
Kind Code:
A1
Abstract:
The method for providing an individual with signals to be auditorily perceived by said individual comprises the steps a) measuring at least one magnitude related to a state of mind of said individual; b) obtaining output audio signals, wherein said output audio signals are dependent on a result of said measuring; c) converting said output audio signals into signals to be auditorily perceived by said individual; and preferably also the step of p) providing data representative of a target state of mind of said individual, wherein said output audio signals are dependent on said data representative of said target state of mind of said individual. The portable appliance comprises a source of audio signals structured and configured for outputting output audio signals; an output converter structured and configured for converting said output audio signals into signals to be auditorily perceived by an individual; a sensing unit structured and configured for sensing at least one magnitude related to a state of mind of said individual; and a control unit operationally interconnected between said sensor and said source of audio signals, wherein said control unit is structured and configured for accomplishing that said output audio signals are dependent on a result of said sensing.

Inventors:
BORETZKI MICHAEL (CH)
GEHRING STEPHAN (CH)
Application Number:
PCT/EP2010/068760
Publication Date:
June 07, 2012
Filing Date:
December 02, 2010
Assignee:
PHONAK AG (CH)
BORETZKI MICHAEL (CH)
GEHRING STEPHAN (CH)
International Classes:
A61B5/00; H04R25/00
Domestic Patent References:
WO 02/09473 A2 (2002-01-31)
Foreign References:
US 2007/0071262 A1 (2007-03-29)
US 2007/0239294 A1 (2007-10-11)
US 2003/0060728 A1 (2003-03-27)
US 2006/0183980 A1 (2006-08-17)
US 2004/0234089 A1 (2004-11-25)
EP 1 674 062 A1 (2006-06-28)
US 6 330 339 B1 (2001-12-11)
US 2010/0196861 A1 (2010-08-05)
US 2010/0234671 A1 (2010-09-16)
Attorney, Agent or Firm:
KREUTZ, Thomas, J. (Schwäntenmos 14, Zumikon, CH)
Claims:
Patent Claims:

1. Method for providing an individual (10) with signals (A) to be auditorily perceived by said individual, said method comprising the steps of:

a) measuring at least one magnitude related to a state of mind of said individual (10);

b) obtaining output audio signals (S2), wherein said output audio signals are dependent on a result (M) of said measuring;

c) converting said output audio signals (S2) into signals (A) to be auditorily perceived by said individual (10).

2. The method according to claim 1, comprising the step of

p) providing data (T) representative of a target state of mind of said individual (10);

wherein said output audio signals (S2) are dependent on said data (T) representative of said target state of mind of said individual (10).

3. The method according to claim 1 or claim 2, comprising the step of

d) deriving, from said result (M) of said measuring, data indicative of a state of mind of said individual, wherein said data are referred to as current-mood data;

wherein said output audio signals (S2) are dependent on said current-mood data, in particular, wherein step d) is carried out at least one of continuously, quasi-continuously, intermittently, periodically, and repeatedly, and comprising the steps of

m) receiving from said individual (10) or from another person input indicative of a state of mind of said individual (10); and

n) comparing the state of mind indicated by said input with the state of mind indicated by said current-mood data.

4. The method according to one of the preceding claims, comprising the step of

d) deriving, from said result (M) of said measuring, data indicative of a state of mind of said individual, wherein said data are referred to as current-mood data;

wherein said output audio signals (S2) are dependent on said current-mood data, in particular, wherein step d) is carried out at least one of continuously, quasi-continuously, intermittently, periodically, and repeatedly, and comprising the step of

o) providing said individual (10) with an indication indicative of the state of mind of said individual as indicated by said current-mood data; in particular, wherein said indication is provided optically and/or acoustically.

5. The method according to one of the preceding claims, wherein said at least one measured magnitude comprises at least one of the group consisting of

— a property obtained from said individual's voice, in particular at least one of the voice pitch, the voice loudness level, the speaking speed, the occurrence of certain words or phrases;

— a temperature of said individual's body or of a part thereof;

— a conductivity or resistance of said individual's skin or another magnitude indicative of sweat emission from said individual's skin;

— a property of a movement of said individual's body or of a part thereof, in particular a frequency or a length of a path of a movement or an acceleration of a movement of said individual's body or of said part thereof;

— said user's blood pressure, in particular a systolic and/or a diastolic pressure value;

— a property of said user's heart beat, in particular a heart beat frequency and/or a heart beat regularity and/or a heart beat intensity.

6. The method according to one of the preceding claims, wherein step b) comprises the step of

e) deriving said output audio signals (S2) by processing audio signals referred to as primary audio signals (S1);

wherein said processing of said primary audio signals (S1) is carried out in a fashion dependent on a result (M) of said measuring, and in particular wherein said processing mentioned in step e) is carried out for changing at least one of

— a frequency composition of said output audio signals (S2), in particular towards a relatively higher or towards a relatively lower content of high frequencies with respect to lower frequencies;

— a loudness dynamics of said output audio signals (S2) towards increased dynamics or towards decreased dynamics;

— a loudness of said output audio signals (S2) towards an increased loudness or towards a decreased loudness;

— a spatial acoustical focus.

7. The method according to claim 6, wherein said primary audio signals are representative of acoustic sound present in an acoustic environment in which said individual is located.

8. The method according to one of the preceding claims, wherein step b) comprises the step of

f) selecting, in dependence of a result (M) of said measuring, audio signals referred to as primary audio signals (S1), wherein said primary audio signals (S1) are identical with said output audio signals (S2), or said output audio signals (S2) are derived from said primary audio signals (S1);

in particular, wherein step f) comprises the steps of

g) controlling a sound generating unit (2a) in a fashion dependent on a result (M) of said measuring;

h) generating said primary audio signals (S1) by means of the so-controlled sound generating unit (2a).

9. The method according to claim 8, wherein step f) comprises the step of

i) selecting said primary audio signals (S1) or a portion thereof from a multitude of stored audio signals;

in particular, wherein said multitude of stored audio signals comprises a multitude of audio signals

representative of speech, even more particularly of speech of a care-taker or therapist of said individual.

10. The method according to claim 9, comprising the step of loading said selected primary audio signals (S1) or said portion thereof from a remote location, in particular from the internet (www).

11. Use of a method according to one of the preceding claims for at least one of:

— influencing signals auditorily perceived by said individual (10);

— adjusting sound processing;

— adjusting sound processing in a hearing device (H);

— adjusting sound processing in a hearing device (H) to the needs and preferences of said individual (10);

— optimizing sound processing in a hearing device (H);

— verifying the effectiveness of sound processing in a hearing device (H);

— monitoring a state of mind of said individual (10);

— influencing a state of mind of said individual (10).

12. A portable appliance (1), comprising:

— a source of audio signals (2) structured and configured for outputting output audio signals (S2);

— an output converter (3) structured and configured for converting said output audio signals (S2) into signals (A) to be auditorily perceived by an individual (10);

— a sensing unit (4) structured and configured for sensing at least one magnitude related to a state of mind of said individual (10);

— a control unit (5) operationally interconnected between said sensing unit (4) and said source (2) of audio signals, wherein said control unit (5) is structured and configured for accomplishing that said output audio signals (S2) are dependent on a result of said sensing.

13. The appliance (1) according to claim 12, comprising a memory unit (6) comprising data (T) representative of a target state of mind of said individual, and wherein said control unit (5) is structured and configured for

accomplishing that said output audio signals (S2) are dependent on said data (T) representative of said target state of mind of said individual (10).

14. The appliance according to claim 12 or claim 13, wherein said sensing unit (4) comprises a sensor for quantifying at least one property of said individual's voice, in particular at least one of a pitch sensor for measuring the voice pitch, a loudness sensor for measuring the voice loudness level, a sensor for determining the speaking speed, a sensor for determining the occurrence of certain words or phrases.

15. The appliance according to one of claims 12 to 14, wherein the appliance is identical with a hearing system or the appliance comprises a hearing system.

16. The appliance according to one of claims 12 to 15, comprising a communication interface (9) suitable for connecting the appliance (1) to the internet (www), wherein said control unit (5) is structured and configured for controlling said interface (9) so as to accomplish loading data (I) from the internet into the appliance (1), in particular digitally stored data (I) representative of sound.

17. The appliance according to one of claims 12 to 16, wherein said source of audio signals (2) comprises a mechanical-to-electrical converter (8) having an output and a signal processing unit (7) having an input, said input of said signal processing unit (7) being operationally connected to said output of said mechanical-to-electrical converter (8), wherein said control unit (5) is structured and configured for controlling said signal processing unit (7) in dependence of said result of said sensing, more particularly wherein said control unit (5) is structured and configured for controlling sound processing applied by said signal processing unit (7) to audio signals fed to said input of said signal processing unit (7) in dependence of said result of said sensing.

18. The appliance according to one of claims 12 to 17, wherein said source (2) of audio signals comprises a sound generating unit (2a) structured and configured for generating audio signals referred to as generated audio signals, wherein said output audio signals (S2) are identical with said generated audio signals or with audio signals derived from these, and wherein said sound generating unit (2a) is controlled by said control unit (5) such that said generated audio signals are dependent on a result (M) of said sensing, and wherein said sound generating unit (2a) is structured and configured for reproducing stored digital data (D) representative of sound, in particular wherein said control unit (5) is structured and configured for selecting, from more than one stored digital data (D) representative of sound of different contents and in dependence of said result (M) of said sensing, stored digital data (D) representative of said generated audio signals.

19. Hearing system comprising an appliance (1) according to one of claims 12 to 18.

Description:
PORTABLE AUDITORY APPLIANCE WITH MOOD SENSOR AND METHOD FOR PROVIDING AN INDIVIDUAL WITH SIGNALS TO BE AUDITORILY PERCEIVED BY SAID INDIVIDUAL

Technical Field

The invention relates to appliances which provide an individual with auditorily perceivable signals, in particular to portable appliances of that kind. It relates to methods, appliances and uses according to the opening clauses of the claims. Particularly, the invention concerns hearing systems, but in some aspects also mobile multimedia systems and mobile multimedia appliances, in particular audio systems and audio appliances.

The invention can be applied in various fields, such as in hearing rehabilitation, in psychotherapy, in music consumption and in other fields.

A hearing device is understood to be a device which is worn in or adjacent to an individual's ear with the object of improving the individual's audiological perception. Such improvement may also consist in barring acoustic signals from being perceived, in the sense of hearing protection for the individual. If the hearing device is tailored so as to improve the perception of a hearing-impaired individual towards the hearing perception of a normal-hearing individual, we speak of a hearing-aid device. With respect to the application area, a hearing device may be applied, e.g., behind the ear, in the ear, completely in the ear canal, or may be implanted.

A hearing system comprises at least one hearing device. In case that a hearing system comprises at least one additional device, all devices of the hearing system are operationally connectable within the hearing system. Typically, said additional devices, such as another hearing device, a remote control or a remote microphone, are meant to be worn or carried by said individual.

Under audio signals we understand electrical signals, analogue and/or digital, which represent sound.

Background of the Invention

In the field of hearing-aid devices, it is known to adjust hearing device sound processing parameters in dependence on the acoustic environment the hearing-aid device user (and the hearing-aid device) is currently in. This is accomplished by analyzing and classifying sound picked up by the microphone of the hearing-aid device.

WO 02/09473 A2 discloses a hearing aid comprising a device being a noise generator for forwarding a natural or a synthetic "temple" noise (as defined in WO 02/09473 A2); a sound microphone; these two devices each forwarding the noise to a mixer, wherein said mixer is connected to an amplifier connected to an earphone. All these devices are connected to each other by suitable connecting means. Said hearing aid may be a complete unit, being assembled from separate parts of the hearing aid, or may be a combination of parts of existing devices and a combination with the other parts.

US 2004/234089 A1 discloses a system for enhancing the hearing of certain sounds, the system including an electro-acoustic transducer for producing sounds in the vicinity of an ear, according to signals provided thereto, and a compensatory signal generator coupled with the electro-acoustic transducer, the compensatory signal generator producing a compensatory signal, according to at least a portion of a compensatory waveform, the compensatory waveform being determined according to ear otoacoustic emissions, the compensatory signal being employed to enhance the hearing, the compensatory signal generator providing the compensatory signal to the electro-acoustic transducer.

EP 1 674 062 A1 discloses a personal monitoring system for a user, comprising: a sensor for sensing an internal body parameter of the user and/or a sensor for sensing an ambient parameter of the ambient around said user, an earpiece for being worn at least in part in the ear canal of the user including an acoustic output transducer for providing sound to the user's ear canal, an evaluation unit communicating with the sensor, means for individually implementing an individually defined regulation for the sensed parameter into said evaluation unit, the evaluation unit being adapted for monitoring sensed values of the parameter over time and comparing them to the individually defined regulations for the sensed parameter and being adapted for continuously judging whether the sensed values of the parameter comply with the individual regulation or not, and a compliance control unit communicating with the evaluation unit and with the output transducer for providing acoustic signals to the user's ear via the output transducer for providing the user with acoustic information regarding the present compliance of the sensed parameter with the implemented individual regulations depending on the judgement made by the evaluation unit.

US 6 330 339 B1 discloses: Outputs of a pulse sensor, a brain wave sensor, a conductivity sensor and an acceleration sensor are input to respectively corresponding condition detecting means, and the condition of the wearer (biological information, motion) is detected by the condition detecting means. The condition determining means determines the operation mode of the hearing aid from the condition of the wearer according to a predetermined algorithm. An operation mode control portion drives an earphone based on the operation mode. By this, the characteristics of the hearing aid can be varied, adapting to the wearer's condition.

US 2010/196861 A1 discloses a method of operating a hearing instrument for processing an input sound and providing an output stimulus according to a user's particular needs, and a related system, a computer-readable medium and a data processing system. An object of US 2010/196861 A1 is to provide an improved customization of a hearing instrument. The method includes the steps a) providing an estimate of the present cognitive load of the user; b) providing processing of an input signal originating from the input sound according to a user's particular needs; and c) adapting the processing in dependence of the estimate of the present cognitive load of the user. The estimate of the present cognitive load of a user is produced by in-situ direct measures of cognitive load (e.g. based on EEG measurements, body temperature, etc.) or by an on-line cognitive model in the hearing aid system whose parameters have preferably been adjusted to fit the individual user.

US 2010/0234671 A1 discloses systems and methods for using particular types of music compositions having certain characteristics to treat depression and related disorders, autism, and other disorders. The music for use in music therapy efforts includes characteristics for modification of the psycho-physiological apparatus and response, including the use of vocal invented language elements to simulate pre-verbal communication and elements to coincide with and work with natural chrono-biological and circadian rhythms. For example, activating elements are provided in some compositions that are to be played immediately prior to a peak of chrono-biological activity. As another example, other compositions include deactivating elements to improve relaxation immediately prior to a natural low of chrono-biological activity. Such activating and deactivating elements include musical elements such as changes of volume, frequency selection, and tempo. The compositions may be used to treat depression and other disorders based on timing and application of music therapy using the compositions.

Summary of the Invention

One object of the invention is to create a new way for providing an individual with signals to be auditorily perceived by said individual. A corresponding appliance and, in addition, the respective method shall be provided.

Another object of the invention is to provide a way of influencing a state of mind of an individual, in particular to do so in a specific way and/or in an improved way.

Another object of the invention is to provide a way of influencing a state of mind of an individual by means of sound auditorily perceived by said individual.

Another object of the invention is to provide an improved way of adjusting parameters of a hearing device to the needs and preferences of an individual using the hearing device.

Further objects emerge from the description and embodiments below. At least one of these objects is at least partially achieved by appliances and methods according to the patent claims.

The method for providing an individual with signals to be auditorily perceived by said individual comprises the steps of:

a) measuring at least one magnitude related to a state of mind of said individual;

b) obtaining output audio signals, wherein said output audio signals are dependent on a result of said measuring;

c) converting said output audio signals into signals to be auditorily perceived by said individual.

This way, the individual can be provided with auditorily perceivable signals in one or another way, depending on the individual's state of mind.
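The measure-obtain-convert loop of steps a) to c) can be sketched as follows. This is purely an illustrative sketch: the choice of heart rate as the measured magnitude, the normalization constants, the gain mapping, and all function names are assumptions for illustration, not anything prescribed by the claims.

```python
# Illustrative sketch of steps a)-c): measure a mood-related magnitude,
# make the output audio depend on the measurement result, and convert it
# into signals for an output transducer. All names and constants here are
# hypothetical.

def measure_magnitude(heart_rate_bpm: float) -> float:
    """Step a): normalize one measured magnitude (here: heart rate) to [0, 1]."""
    return min(max((heart_rate_bpm - 50.0) / 100.0, 0.0), 1.0)

def obtain_output_audio(samples: list[float], arousal: float) -> list[float]:
    """Step b): output audio signals dependent on the measurement result,
    e.g. attenuating loudness as measured arousal rises."""
    gain = 1.0 - 0.5 * arousal  # calmer output for a more agitated user
    return [s * gain for s in samples]

def convert_to_perceivable(samples: list[float]) -> list[int]:
    """Step c): convert the output audio signals into signals to be
    auditorily perceived (here: 16-bit samples for a receiver driver)."""
    return [int(round(s * 32767)) for s in samples]

primary = [0.0, 0.5, -0.5, 1.0]
arousal = measure_magnitude(heart_rate_bpm=100.0)
output = convert_to_perceivable(obtain_output_audio(primary, arousal))
```

In a real appliance the three steps would run continuously on the sensing unit, control unit and output converter, respectively; here they are collapsed into one pass for clarity.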

In another but similar aspect of the invention, the method is a method for providing an individual with auditory signals. In another but similar aspect of the invention, the method is a method for influencing signals auditorily perceived by an individual.

In another aspect of the invention, the method is a method for adjusting sound processing.

In another aspect of the invention, the method is a method for adjusting sound processing in a hearing device. In another aspect of the invention, the method is a method for optimizing sound processing in a hearing device.

In another aspect of the invention, the method is a method for verifying the effectiveness of sound processing in a hearing device.

In another aspect of the invention, the method is a method for monitoring a state of mind of an individual.

In another aspect of the invention, the method is a method for influencing a state of mind of an individual.

The term "state of mind of an individual" - no matter whether we refer to an estimated or to a target or to another "state of mind of an individual" - comprises at least one of the group consisting of

— said individual's mood;

— said individual's emotions;

— said individual's feelings;

— an affect said individual has.

In particular, said magnitude related to a state of mind of said individual is a magnitude related to a current state of mind of said individual. Usually, said magnitude is a magnitude indicative of a (current) state of mind of said individual or is an indicator for a (current) state of mind of said individual.

It is in particular possible to provide that said output audio signals are designed for evening out mood swings of said individual. In one embodiment, said at least one measured magnitude is measured at said individual's body and/or at emissions of said individual's body.

In one embodiment which may be combined with the before-addressed embodiment, step a) is carried out at least one of continuously, quasi-continuously, intermittently, periodically, and repeatedly. E.g., said at least one magnitude is measured on an ongoing basis.

In one embodiment which may be combined with one or more of the before-addressed embodiments, step a) is carried out by means of a portable sensor. In particular, step a) is carried out in an automated fashion by means of a portable sensor.

In one embodiment which may be combined with one or more of the before-addressed embodiments, the method comprises the step of

d) deriving, from said result of said measuring, data indicative of a state of mind of said individual, wherein said data are referred to as current-mood data;

wherein said output audio signals are dependent on said current-mood data. In particular, step d) is carried out at least one of continuously, quasi-continuously, intermittently, periodically, and repeatedly. It is well possible to provide that step d) is carried out in an automated fashion. It is furthermore well possible to provide that said data indicative of a state of mind of said individual are data indicative of a current state of mind of said individual, in particular wherein said "current" characterizes at least approximately the time of said measuring.

This is a suitable way of estimating said individual's current mood or, in other words, of carrying out mood recognition.

In particular, if more than one of such magnitudes are measured, it is possible to use a classifier for determining said data indicative of said state of mind of said individual by means of classification, in particular in a similar way to the classification used for classifying hearing situations / acoustic environments in the field of hearing-aid devices.
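A classifier of this kind could be sketched, for instance, as a nearest-centroid classifier over a feature vector built from several measured magnitudes. The feature dimensions (normalized heart rate, skin conductance, voice pitch) and the mood centroids below are invented for illustration; the document does not specify any particular classification scheme.

```python
# Hypothetical nearest-centroid classifier producing current-mood data
# (step d)) from several measured magnitudes, analogous to acoustic-scene
# classifiers in hearing-aid devices. Centroids and features are invented.

MOOD_CENTROIDS = {
    "calm":     (0.2, 0.2, 0.3),   # (heart rate, skin conductance, pitch)
    "stressed": (0.8, 0.9, 0.7),
    "sad":      (0.3, 0.4, 0.1),
}

def classify_mood(features: tuple[float, float, float]) -> str:
    """Return the mood label whose centroid is nearest to the feature
    vector (squared Euclidean distance)."""
    def dist(centroid):
        return sum((f - c) ** 2 for f, c in zip(features, centroid))
    return min(MOOD_CENTROIDS, key=lambda m: dist(MOOD_CENTROIDS[m]))
```

In practice such centroids would be trained per individual, and the input received in step m) could be used to correct them over time.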

In one embodiment referring to the before-addressed embodiment, the method comprises the steps of

m) receiving from said individual or from another person input indicative of a state of mind of said individual; and

n) comparing the state of mind indicated by said input with the state of mind indicated by said current-mood data.

This allows to improve the mood recognition. Usually, the respective state of mind of said individual will refer to a current state of mind of said individual.

In one embodiment referring to one or both of the two before-addressed embodiments, the method comprises the step of

o) providing said individual with an indication indicative of the state of mind of said individual as indicated by said current-mood data.

In particular, said indication is provided optically and/or acoustically.

This allows to inform the individual about the estimated mood. It further allows to verify the correctness of the estimation of said individual's (current) mood; e.g., the individual could confirm the correctness, or otherwise enter corrections to be made for a more correct or more precise determination of said (current) mood; this way, the quality of the mood estimation can be improved. This embodiment can well be carried out in conjunction with step m), in particular after step m).

In one embodiment which may be combined with one or more of the before-addressed embodiments, the method comprises the step of

p) providing data representative of a target state of mind of said individual;

wherein said output audio signals are dependent on said data representative of said target state of mind of said individual.

In particular, said output audio signals are dependent on said target state of mind and on said data indicative of a state of mind of said individual (cf. step d)). More specifically, it is possible to provide that said output audio signals are designed for changing said individual's state of mind from a state derived from a result of said measurement of said at least one magnitude to said target state of mind, i.e. from the state indicated by said current-mood data to the state indicated by said data representative of a target state of mind. It is possible to provide that said output audio signals are designed for one or both of stimulating said individual and calming said individual.
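The dependence on both the current-mood data and the target state can be sketched as a simple proportional controller. Modeling the state of mind as a scalar arousal level, and the proportional gain, are assumptions made purely for illustration.

```python
# Sketch of steering toward a target state of mind (step p)): the gap
# between current and target arousal determines whether more stimulating
# or more calming output audio should be selected. The scalar state model
# and the gain value are hypothetical.

def stimulation_adjustment(current_arousal: float,
                           target_arousal: float,
                           gain: float = 0.5) -> float:
    """Positive result -> choose more stimulating audio (faster tempo,
    more high-frequency content); negative -> choose more calming audio."""
    return gain * (target_arousal - current_arousal)
```

Run repeatedly together with step a), this closes a feedback loop: as the measured state approaches the target state, the adjustment decays toward zero.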

In one embodiment which may be combined with one or more of the before-addressed embodiments, said at least one measured magnitude comprises a property obtained from said individual's voice. In particular, it comprises at least one of the voice pitch, the voice loudness level, the speaking speed, the occurrence of certain words or phrases.

In one embodiment which may be combined with one or more of the before-addressed embodiments, said at least one measured magnitude comprises at least one of the group consisting of

— a property obtained from said individual's voice, in particular at least one of the voice pitch, the voice loudness level, the speaking speed, the occurrence of certain words or phrases;

— a temperature of said individual's body or of a part thereof;

— a conductivity or resistance of said individual's skin or another magnitude indicative of sweat emission from said individual's skin;

— a property of a movement of said individual's body or of a part thereof, in particular a frequency or a length of a path of a movement or an acceleration of a movement of said individual's body or of said part thereof, and in particular the presence and intensity and/or frequency of a tremor, of a trembling or of a shivering;

— said user's blood pressure, in particular a systolic and/or a diastolic pressure value;

— a property of said user's heart beat, in particular a heart beat frequency and/or a heart beat regularity and/or a heart beat intensity, in particular wherein said heart beat intensity is derived from a measurement of said user's blood pressure.

The first-named point (relating to said individual's voice) can quite readily be implemented if an input acoustic-to-electrical converter (such as a microphone) and a sound analysis unit are already present, as is the case in many modern hearing devices and hearing systems.
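Two of the voice-related magnitudes can be sketched from a block of microphone samples: loudness as an RMS level, and a crude pitch estimate from the zero-crossing rate. Real sound analysis units use far more robust estimators; this sketch only illustrates that the magnitudes are directly computable from the audio signal.

```python
# Illustrative extraction of voice-related magnitudes from audio samples:
# loudness as root-mean-square level, pitch via zero-crossing counting.
# These are toy estimators, not the method of the document.

import math

def rms_level(samples: list[float]) -> float:
    """Voice loudness proxy: root-mean-square of a sample block."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_pitch(samples: list[float], sample_rate: float) -> float:
    """Crude fundamental-frequency estimate: a pure tone contributes two
    zero crossings per period."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0.0) != (b < 0.0)
    )
    duration = len(samples) / sample_rate
    return crossings / (2.0 * duration)
```

For a clean 100 Hz tone the pitch estimate lands near 100 Hz; speech would require voiced/unvoiced detection first, which is omitted here.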

In one embodiment which may be combined with one or more of the before-addressed embodiments, step b) comprises the step of

e) deriving said output audio signals by processing audio signals referred to as primary audio signals;

wherein said processing of said primary audio signals is carried out in a fashion dependent on a result of said measuring. Therein, it is possible to provide that said primary audio signals are outputted from a mechanical-to-electrical converter, in particular from a microphone, and more specifically from a mechanical-to-electrical converter of a hearing system or of a hearing device or from a microphone of a hearing system or of a hearing device.

E.g., said primary audio signals are representative of sound present in the (current) acoustic environment of said individual. E.g., in a hearing system, the sound processing is dependent on said result of said measuring. Typically, it will be provided that said output audio signals and said primary audio signals have the same content, e.g., they represent the same spoken words or the same musical piece.

In one embodiment referring to the before-addressed embodiment, said processing mentioned in step e) is carried out for changing at least one of

— a frequency composition of said output audio signals, in particular towards a relatively higher or towards a relatively lower content of high frequencies with respect to lower frequencies;

— a loudness dynamics of said output audio signals towards increased dynamics or towards decreased dynamics;

— a loudness of said output audio signals towards an increased loudness or towards a decreased loudness;

— a spatial acoustical focus.

In a first alternative, this can be accomplished for stimulating said individual; in a second alternative, this can be accomplished for calming down said individual.

Said changing said spatial acoustical focus can be accomplished by, e.g., adjusting beamformer settings, more particularly beamformer settings of a directional microphone.

Said high frequencies are usually frequencies above 1 kHz.
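A mood-dependent change of the frequency composition, as in step e), could be sketched with a one-pole low-pass filter whose smoothing strength follows the measurement result: the higher the measured arousal, the more high-frequency content is removed, giving a "calming" output. The mapping from arousal to the filter coefficient is an assumption for illustration.

```python
# Sketch of step e): process primary audio signals in a fashion dependent
# on the measurement result. A one-pole low-pass attenuates high
# frequencies more strongly at higher measured arousal. The arousal-to-
# coefficient mapping is hypothetical.

def mood_dependent_lowpass(primary: list[float], arousal: float) -> list[float]:
    """One-pole low-pass y[n] = a*x[n] + (1-a)*y[n-1]; a smaller `a`
    (higher arousal) removes more high-frequency content."""
    a = 1.0 - 0.9 * arousal  # arousal 0 -> passthrough, 1 -> heavy smoothing
    out, y = [], 0.0
    for x in primary:
        y = a * x + (1.0 - a) * y
        out.append(y)
    return out
```

The same hook could instead drive a compressor (loudness dynamics), an overall gain (loudness), or beamformer weights (spatial focus), matching the four options listed above.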

In one embodiment to be combined with the before-addressed embodiment, said primary audio signals are representative of acoustic sound present in an acoustic environment in which said individual is located.

In one embodiment which may be combined with one or more of the before-addressed embodiments, step b) comprises the step of

f) selecting, in dependence of a result of said measuring, audio signals referred to as primary audio signals, wherein said primary audio signals are identical with said output audio signals, or said output audio signals are derived from said primary audio signals.

Said deriving said output audio signals from said primary audio signals is usually accomplished by means of sound processing. By means of step f), it is possible to select a content of said output audio signals in dependence of a result of said measuring.

In one embodiment referring to the before-addressed embodiment, step f) comprises the steps of

g) controlling a sound generating unit in a fashion dependent on a result of said measuring;

h) generating said primary audio signals by means of the so-controlled sound generating unit.

Note that, in case step g) is provided, the selecting mentioned in step f) is usually carried out by selecting the corresponding way of controlling said sound generating unit, e.g., by adjusting sound generating unit parameters accordingly.

In one embodiment referring to one or both of the two before-addressed embodiments, step f) comprises the step of

i) selecting said primary audio signals or a portion thereof from a multitude of stored audio signals;

in particular, wherein said multitude of stored audio signals comprises a multitude of audio signals representative of speech, even more particularly of speech of a care-taker or therapist of said individual. Said primary audio signals can be representative of speech spoken by said care-taker or therapist of said individual. Usually, said stored audio signals are digitally stored audio signals, i.e. stored digital data representative of sound.
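The selection of step i) — choosing stored audio signals in dependence of the measured state of mind — could look like the following minimal sketch. The mood labels, file names and fallback rule are assumptions for illustration, not part of the patent text.

```python
# Hypothetical mapping from an estimated mood to stored audio signals.
STORED_SIGNALS = {
    "stressed": ["calm_words_therapist.wav", "slow_music.wav"],
    "listless": ["encouraging_words_therapist.wav", "upbeat_music.wav"],
    "neutral":  ["ambient.wav"],
}

def select_primary_signal(estimated_mood: str) -> str:
    """Return a stored signal matched to the estimated mood; fall back
    to a neutral signal for moods without a dedicated entry."""
    candidates = STORED_SIGNALS.get(estimated_mood, STORED_SIGNALS["neutral"])
    return candidates[0]
```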

In one embodiment referring to the before-addressed embodiment, the method comprises the step of loading said selected primary audio signals or said portion thereof from a remote location, in particular from the internet.

In one embodiment which may be combined with one or more of the before-addressed embodiments, step c) is carried out involving a hearing device, or more particularly by means of a hearing device, in particular wherein the hearing device is a hearing-aid device. Alternatively, it is possible to carry out step c) by means of or involving a headphone or an earphone.

In one embodiment which may be combined with one or more of the before-addressed embodiments, the method comprises carrying out, after steps a), b) and c), the steps of

a') measuring again said at least one magnitude related to a state of mind of said individual;

b') obtaining amended output audio signals, wherein said amended output audio signals are dependent on a result of said measuring mentioned in step a) and on a result of said further measuring mentioned in step a').

In one embodiment referring to the before-addressed embodiment, the method comprises the steps of

j) deriving, from a result of the repeated measuring mentioned in step a'), data indicative of how said signals to be auditorily perceived by said individual affected said individual's state of mind, wherein said data are referred to as control data; and

k) converting said amended output audio signals into signals to be auditorily perceived by said individual, wherein said amended output audio signals are dependent on said control data.

This way, an iterative process is enabled for obtaining - and optimizing - the output audio signals. A closed-loop control for optimizing the output audio signals can be implemented this way.
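A minimal sketch of such a closed-loop control follows. It assumes, purely for illustration, that the state of mind and the stimulation level can each be expressed as a single number; the update rule and gain are likewise assumptions, not taken from the patent text.

```python
def closed_loop_step(measured: float, target: float,
                     stimulation: float, gain: float = 0.5) -> float:
    """One iteration: derive control data (the error between measured
    and target state of mind) and amend the stimulation level so as to
    reduce that error. Returns the amended stimulation level."""
    control_data = target - measured          # effect vs. target
    return stimulation + gain * control_data  # amended output

def run_closed_loop(measure, target: float, iterations: int = 20) -> float:
    """Iterate: apply stimulation, re-measure, amend, repeat."""
    stimulation = 0.0
    for _ in range(iterations):
        measured = measure(stimulation)
        stimulation = closed_loop_step(measured, target, stimulation)
    return stimulation
```

With a well-behaved response of the individual to the stimulation, the loop settles at the stimulation level whose measured effect matches the target.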

In one embodiment which may be combined with one or more of the before-addressed embodiments, one or more and in particular each of the cited method steps are carried out in an automated fashion.

In one embodiment which may be combined with one or more of the before-addressed embodiments, one or more and in particular each of the cited method steps are carried out by means of a portable appliance worn or carried by said individual. E.g., step a) would be carried out by means of a portable sensor, in particular by means of a sensor carried or worn by said individual; step b) would be carried out by means of a portable source of audio signals, in particular by means of a source of audio signals carried or worn by said individual; step c) would be carried out by means of a portable output converter, in particular by means of an output converter carried or worn by said individual; step d) would be carried out by means of a portable estimating unit, in particular by means of an estimating unit carried or worn by said individual.

The use is a use of a method according to one of the above-described methods for at least one of

— influencing signals auditorily perceived by said individual;

— adjusting sound processing;

— adjusting sound processing in a hearing device;

— adjusting sound processing in a hearing device to the needs and preferences of said individual;

— optimizing sound processing in a hearing device;

— verifying the effectiveness of sound processing in a hearing device;

— monitoring a state of mind of said individual;

— influencing a state of mind of said individual.

The portable appliance comprises

— a source of audio signals structured and configured for outputting output audio signals;

— an output converter structured and configured for converting said output audio signals into signals to be auditorily perceived by an individual;

— a sensing unit structured and configured for sensing at least one magnitude related to a state of mind of said individual;

— a control unit operationally interconnected between said sensing unit and said source of audio signals, wherein said control unit is structured and configured for accomplishing that said output audio signals are dependent on a result of said sensing.

This makes it possible to achieve that said output audio signals are dependent on a state of mind of said individual as derived from a result of said sensing, in particular of a current state of mind of said individual. Usually, said magnitude is related to a current state of mind of said individual. By means of said sensing, it is attempted to determine a (current) state of mind of said individual.

The appliance can be used for carrying out one or more of the above-described methods. The portable appliance can be, e.g., an audio-related appliance and/or a portable audio-related system. The portable appliance can be, e.g., a portable appliance for providing an individual with signals to be auditorily perceived by said individual. Usually, the portable appliance will be carried or worn by said individual, and the portable appliance will usually be structured and configured for being carried or worn by an individual.

Said signals to be auditorily perceived by an individual are usually sound, i.e. sound waves.

Usually, said output converter is or comprises a loudspeaker, in particular a receiver of a hearing device.

In one embodiment referring to the before-addressed embodiment, said at least one measured magnitude is measured at said individual's body and/or at emissions of said individual's body.

In one embodiment which may be combined with one or both of the before-addressed appliance embodiments, said sensing unit comprises at least one sensor for quantifying at least one property of said individual's voice. In particular, it comprises at least one of

— a pitch sensor for measuring the voice pitch,

— a loudness sensor for measuring the voice loudness level,

— a sensor for determining the speaking speed,

— a sensor for determining the occurrence of certain words or phrases.

In one embodiment which may be combined with one or both of the before-addressed appliance embodiments, said sensing unit comprises at least one sensor of the group consisting of

— a sensor for quantifying at least one property of said individual's voice, in particular at least one of a pitch sensor for measuring the voice pitch, a loudness sensor for measuring the voice loudness level, a sensor for determining the speaking speed, a sensor for determining the occurrence of certain words or phrases;

— a thermometer for measuring a temperature of said individual's body or of a part thereof;

— a conductivity or resistance sensor for measuring a conductivity or resistance of said individual's skin, or another sensor for measuring another magnitude indicative of sweat emission from said individual's skin;

— a velocity and/or acceleration and/or movement sensor for measuring a property of a movement of said individual's body or of a part thereof, in particular for measuring a frequency of a movement or a length of a path of a movement or an acceleration of a movement of said individual's body or of said part thereof, and in particular the presence and intensity and/or frequency of a tremor, of a trembling or of a shivering;

— a blood pressure sensor for measuring said user's blood pressure, in particular a systolic and/or a diastolic pressure value;

— a sensor for quantifying a property of said user's heart beat, in particular a heart beat frequency and/or a heart beat regularity and/or a heart beat intensity, in particular wherein said heart beat intensity is derived from a measurement of said user's blood pressure.

Usually, said sensor is a measuring sensor, in particular a sensor for measuring a physical magnitude.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, said source of audio signals is or comprises a portable audio system or appliance.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, said source of audio signals is or is comprised in a device of a hearing system, in particular in a hearing device.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, the appliance comprises a device to be worn in and/or near said individual's ear, and in particular said device comprises said output converter.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, said source of audio signals is comprised in a device to be worn in and/or near said individual's ear.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, the appliance comprises a memory unit comprising data representative of a target state of mind of said individual, and said control unit is structured and configured for accomplishing that said output audio signals are dependent on said data representative of said target state of mind of said individual. That target state of mind is a state of mind of the individual which the appliance and the respective method, respectively, attempt to achieve.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, the appliance comprises a communication interface suitable for connecting the appliance to the internet, wherein said control unit is structured and configured for controlling said interface so as to accomplish loading data from the internet into the appliance, in particular digitally stored data representative of sound. In particular, it is possible to provide that said source of audio signals comprises stored digital data representative of sound loaded from the internet. Data transfer and exchange may be accomplished, e.g., using the File Transfer Protocol FTP.
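Loading stored digital data representative of sound from a remote location could be sketched as follows. The helper name and the use of a generic URL fetch (rather than a dedicated FTP client) are assumptions made for brevity; `urllib.request.urlopen` accepts `http://`, `ftp://` and `file://` URLs.

```python
import urllib.request

def load_sound_data(url: str) -> bytes:
    """Fetch stored digital data representative of sound (e.g., MP3
    data) from a remote location such as an external server."""
    with urllib.request.urlopen(url) as response:
        return response.read()
```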

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, said source of audio signals comprises a signal processing unit for processing audio signals which is structured and configured for outputting said output audio signals, wherein said control unit is structured and configured for controlling said signal processing unit in dependence of said result of said sensing. The signal processing unit can be structured and configured for outputting said output audio signals in response to feeding audio signals to an input of said signal processing unit. The signal processing unit can be structured and configured for obtaining said output audio signals by processing audio signals fed to an input of said processing unit. Accordingly, the signal processing applied to audio signals fed to said signal processing unit (we can refer to these as primary audio signals) depends on said result of said sensing. Since a controlling of a signal processing unit usually is accomplished by adjusting signal processing parameters, one can in this case also say that said control unit is structured and configured for accomplishing that said output audio signals are dependent on a result of said sensing by adjusting signal processing parameters of said signal processing unit in dependence of said result of said sensing. This embodiment can well be implemented in a hearing system. And in this embodiment, it is well possible to provide that audio signals representative of (acoustic) sound present in the (current) acoustic environment of said individual are the audio signals (primary audio signals) which are then processed in the described way.

In one embodiment referring to the before-addressed embodiment, said source of audio signals comprises an input converter structured and configured for converting acoustic sound into audio signals, and an output of said input converter is operationally connected to said input of said signal processing unit for feeding at least audio signals outputted from said input converter or audio signals derived from these to said signal processing unit. In particular, said input converter is a mechanical-to-electrical converter. Usually, said acoustic sound is sound waves.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, said source of audio signals comprises a mechanical-to-electrical converter having an output and a signal processing unit having an input, said input of said signal processing unit being operationally connected to said output of said mechanical-to-electrical converter, wherein said control unit is structured and configured for controlling said signal processing unit in dependence of said result of said sensing, more particularly wherein said control unit is structured and configured for controlling sound processing applied by said signal processing unit to audio signals fed to said input of said signal processing unit in dependence of said result of said sensing.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, said source of audio signals comprises a sound generating unit structured and configured for generating audio signals referred to as generated audio signals, and said output audio signals are identical with said generated audio signals or with audio signals derived from these, and said sound generating unit is controlled by said control unit such that said generated audio signals are dependent on a result of said sensing.

In one embodiment referring to the before-addressed embodiment, said sound generating unit is structured and configured for synthesizing audio signals. In this case, it can be provided that the kind of synthesized audio signals is dependent on said result of said sensing, and in particular the contents of the synthesized audio signals depend on said result of said sensing.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, said sound generating unit is structured and configured for reproducing stored digital data representative of sound, and in particular, said control unit is structured and configured for selecting, from more than one stored digital data representative of sound of different contents and in dependence of said result of said sensing, stored digital data representative of said generated audio signals. This way, it can be accomplished that the kind of stored digital data representative of sound depends on said result of said sensing, in particular wherein the contents of the reproduced audio signals depend on said result of said sensing.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, the appliance comprises a user interface structured and configured for receiving input (in particular voluntary input) from said individual for influencing said output audio signals, and, in particular, the appliance comprises, in addition, said sensing unit, wherein said sensing unit is distinct from said user interface. Said user interface may allow, e.g., adjusting an output volume of signals emitted from the appliance, more particularly an output volume of said signals to be auditorily perceived by said individual. In this embodiment, said sensing unit usually does not sense manipulations of said user interface and/or does not sense manipulations of user controls such as switches and/or knobs and/or selectors.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, said magnitude usually is a magnitude not originating from a manual operation of a user interface by said individual, in particular wherein said user interface comprises manual controls such as switches and/or knobs and/or selectors.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, the appliance comprises a hearing device.

In one embodiment which may be combined with one or more of the before-addressed appliance embodiments, the appliance comprises or is a hearing system.

The hearing system according to the invention comprises an appliance according to the invention.

The invention comprises appliances with features of corresponding methods according to the invention, and vice versa, i.e. methods with features of corresponding appliances.

The advantages of the appliances basically correspond to the advantages of corresponding methods and vice versa.

Further embodiments and advantages emerge from the dependent claims and the figures.

Brief Description of the Drawings

Below, the invention is described in more detail by means of examples and the included drawings. The figures show:

Fig. 1 a schematic illustration of a method and of an appliance and an individual;

Fig. 2 a schematic illustration of a method and of an appliance;

Fig. 3 a schematic illustration of a method and of an appliance communicating with the internet.

The reference symbols used in the figures and their meaning are summarized in the list of reference symbols. The described embodiments are meant as examples and shall not confine the invention.

Detailed Description of the Invention

In section "Summary of the Invention" above, many aspects and details related to the invention have already been explained. Already from that alone, but at least when considering the following few examples, a person skilled in the art will be enabled to carry out the invention in its various aspects.

Fig. 1 shows a schematic illustration of a method and of an appliance 1 and an individual 10. The individual wears hearing devices H which are comprised in the appliance 1. The appliance 1 is a portable appliance and furthermore comprises a sensing unit 4, a control unit 5, an optional memory unit 6 and a source 2 of audio signals.

Sensing unit 4 allows measuring a magnitude related to a state of mind of individual 10. For this, usually a sensor will be employed. E.g., a resistance of the individual's skin could be measured and used for estimating the amount of sweat on the individual's skin and therefrom (or directly from the resistance) for estimating to how much stress individual 10 is exposed; and/or, by means of a microphone, the individual's voice could be recorded, and using a computer or a voice analysis unit, one or more properties of said individual's voice could be extracted from the recorded voice, such as the voice pitch or the speaking speed, and therefrom, the individual's state of mind, e.g., the individual's degree of excitation, could be estimated. It is generally advisable to measure or sense several magnitudes related to a state of mind of individual 10, namely in order to achieve a strongly improved (i.e. more reliable) estimation of the individual's state of mind, e.g., of the individual's mood.
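The suggestion to combine several measured magnitudes into a more reliable estimate could be sketched as follows. The feature set, the normalization ranges and the equal weighting are assumptions chosen for illustration; they are not prescribed by the patent text.

```python
def estimate_stress(skin_conductance_uS: float,
                    voice_pitch_hz: float,
                    speaking_speed_wpm: float) -> float:
    """Map raw sensor readings to [0, 1] and average them into a single
    stress score (0 = calm, 1 = highly stressed)."""
    def clamp01(x: float) -> float:
        return min(1.0, max(0.0, x))
    features = [
        clamp01((skin_conductance_uS - 2.0) / 18.0),    # assumed ~2-20 uS
        clamp01((voice_pitch_hz - 100.0) / 200.0),      # assumed ~100-300 Hz
        clamp01((speaking_speed_wpm - 100.0) / 120.0),  # assumed ~100-220 wpm
    ]
    return sum(features) / len(features)
```

Averaging several normalized features makes the estimate less sensitive to noise in any single sensor, which is the benefit the text points to.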

Accordingly, a result of the measuring, or more particularly data M representative of a result of the measuring, is obtained. These data M enable control unit 5 to control the source 2 of audio signals in dependence thereof and thus in dependence of the individual's mood. Source 2 of audio signals generates output audio signals S2 which therefore depend on data M. The output audio signals S2 are converted into signals that are auditorily perceivable by the individual 10, i.e. usually, output audio signals S2 will be converted into sound (sound waves), such as in the indicated case with hearing devices H being employed for that conversion; but, e.g., in case of cochlear implants, the auditorily perceivable signals would be electrical signals, and mechanical signals, such as signals for exciting the individual's tympanic membrane, are conceivable, too.

Note that it is well possible that source 2 of audio signals is comprised in one or both of the hearing devices H.

Memory unit 6 comprises data T representative of a target state of mind of individual 10. And the influencing of output audio signals S2 may also depend on these data T. This can be used, e.g., for evening out mood swings.

A simple example: Using the above-sketched sweat / skin conductivity sensor, and assuming that the individual 10 should be rather calm, not too excited and not stressed, data T could be indicative thereof or of a threshold conductivity value, and if the conductivity value (embodied as data M) obtained by means of the sensor applied to individual 10 exceeds that threshold value, either calming music could be played to individual 10 (via source 2 of audio signals and hearing devices H), or sound to which individual 10 is exposed by means of hearing devices H is decreased in loudness and/or reduced with respect to the relative amount of higher frequencies to lower frequencies, e.g., by means of a lowpass filter. It can be provided that these effects (calmness of music; loudness reduction and/or reduction of high frequencies) are the more pronounced, the more strongly said threshold value is exceeded.
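The graded response in this example — the stronger the threshold is exceeded, the more pronounced the effect — can be sketched as a simple mapping from threshold exceedance to a loudness reduction. The scaling constants below are illustrative assumptions.

```python
def loudness_reduction_db(conductivity: float, threshold: float,
                          max_reduction_db: float = 12.0,
                          span: float = 5.0) -> float:
    """Return 0 dB while conductivity (data M) stays below the
    threshold (from data T), rising linearly to `max_reduction_db`
    once the threshold is exceeded by `span` conductivity units."""
    excess = conductivity - threshold
    if excess <= 0.0:
        return 0.0
    return min(max_reduction_db, max_reduction_db * excess / span)
```

The same shape of mapping could drive the choice of ever-calmer music instead of, or in addition to, the loudness reduction.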

Figs. 2 and 3 show the already indicated two aspects of how data M and/or data T (and possibly other data) may be used for influencing output audio signals S2, which aspects can be applied separately or in combination; i.e. it is possible to realize solely one of these aspects, but on the other hand, it is also possible to combine these aspects in one and the same embodiment. Fig. 2 illustrates that the sound impression of perceivable signals originating from data representative of a given or pre-determined content is changed in dependence of said data M (and optionally also of other data). And Fig. 3 illustrates that output data S2 representative of different content are selected in dependence of said data M (and optionally also of other data) and then converted into perceivable signals.

Combined, the two aspects result in that the sound impression of perceivable signals originating from data representative of a given certain content is changed in dependence of said data M or other data, wherein that certain content is selected in dependence of said data M (and optionally also of other data).

Fig. 2 shows a schematic illustration of a method and of an appliance 1. The embodiment of Fig. 2 is similar to the one of Fig. 1, but with respect to source 2 of audio signals, more details are shown. Source 2 of audio signals comprises a mechanical-to-electrical converter 8 (input converter) such as a microphone, and a signal processing unit 7 such as a DSP chip. Output audio signals S2 generated by source 2 of audio signals are fed to an output converter 3 such as an electrical-to-mechanical converter, e.g., a loudspeaker. Units 2 and 3 can well be embodied in a hearing device such as in one or both hearing devices H like those shown in Fig. 1.

Mechanical-to-electrical converter 8 converts sound present in the individual's environment into primary audio signals S1, which are fed to signal processing unit 7 in which they are processed in dependence of data M (and optionally also in dependence of data T and/or other data), so as to obtain output audio signals S2 which are then converted into sound by means of output converter 3, so as to obtain signals A to be auditorily perceived by individual 10.

This embodiment can well be implemented in a hearing device, making use of units 8, 7 and 3 which usually are anyway present in a modern hearing device. The sensing unit 4 could be implemented in a processor of the hearing device structured and configured for voice analysis (possibly in DSP 7) and fed with audio signals from microphone 8 (cf. the dotted arrow in Fig. 2), such that this is an example showing that even the hearing device alone (without further devices) could implement the complete appliance 1.

Fig. 3 shows a schematic illustration of a method and of an appliance 1 communicating with the internet www. Also this embodiment is similar to the one of Fig. 1, but it shows more details with respect to another possible embodiment of source 2 of audio signals, and, in addition, appliance 1 is connectable to the internet www.

Like in the before-described embodiments, control unit 5 controls source 2 of audio signals in dependence of data M which are related to a state of mind of the individual and, optionally, of data T which indicate a target state of mind. Source 2 of audio signals comprises a sound generating unit 2a which is capable of interpreting MP3 data and a memory unit 12 in which digital data D representative of sound are stored, such as MP3 data.

Control unit 5 now determines which of the data D are selected, i.e. it determines the contents of the signals A auditorily presented to the individual and possibly also the loudness or the kind of filtering applied for deriving the corresponding output audio signals S2. And all this is carried out in dependence of the data M and possibly also of the data T.

Moreover, appliance 1 comprises a communication interface 9 controlled by control unit 5 which allows to have a (direct or indirect) communication connection to the internet www. Thus, it is possible to load data I from the internet (e.g., using the File Transfer Protocol FTP), e.g., from an external server 11 in which suitable data I such as MP3 data are stored. The selection which data I to load will depend on data M and/or on data T; e.g., if it turns out from measurements carried out at the individual, and under consideration of the data T, that the individual should become more lively or vivid, data I representative of exciting and motivating music could be loaded via the internet, stored via communication interface 9 (and possibly also via control unit 5) in memory unit 12 and then, as data D, be passed to sound generating unit 2a, so as to finally present that specifically selected music to the individual.

Instead of music, one could also use speech. E.g., speech spoken by a care-taker or therapist of the individual could be stored in memory unit 12 (possibly also loaded via the internet), so that, depending on the individual's estimated current mood (cf. data M) and possibly also depending on a target mood (cf. data T), encouraging or calming words or sentences as spoken by said care-taker or therapist could be reproduced.

Furthermore, it is (in all the embodiments) possible, by continued or repeated measuring / sensing, to estimate the effect achieved by the signals A (which depend at least on before-obtained data M). So, if the desired effect has not been achieved, signals A which depend on the newly obtained data M can be presented to the user, so as to come closer to the desired effect on the individual's state of mind. Thus, a closed-loop control can be implemented which allows optimizing the procedure.

Furthermore, it is possible to collect data from a multitude of appliances of different individuals, e.g., in a computer system such as a server, and analyze these data, in particular statistically. This may lead to improvements with respect to the interpretation of measuring signals M, thus resulting in a better mood recognition, and/or with respect to the selection of ways of signal processing and/or the selection of (contents of) primary audio signals S1, thus resulting in a more efficient mood steering.

Aspects of the embodiments have been described in terms of functional units. As is readily understood, these functional units may be realized in virtually any number of hardware and/or software components adapted to perform the specified functions. For example, instead of having one control unit 5, the same effect can be realized by means of several operationally interconnected control units, wherein it is possible to realize these in one or in several integrated circuit chips. And the sensing unit 4 may comprise several sensors.

List of Reference Symbols

1 appliance

2 source of audio signals

2a sound generating unit

3 output converter, electrical-to-mechanical converter, loudspeaker

4 sensing unit

5 control unit

6 memory unit

7 signal processing unit

8 mechanical-to-electrical converter, microphone

9 communication interface

10 individual

11 external computer, external server

12 memory unit

A signals to be auditorily perceived by an individual; sound waves

D stored digital data representative of sound

H hearing device

I data, data from the internet

M result of the measuring, result of the sensing; data representative of a result of the measuring, data representative of a result of the sensing

S1 primary audio signals

S2 output audio signals

T data representative of a target state of mind of an individual

W sound, sound waves

www internet