Title:
METHOD AND SYSTEM FOR MONITORING EMOTIONS
Document Type and Number:
WIPO Patent Application WO/2019/132772
Kind Code:
A1
Abstract:
A method and system for monitoring emotions of a user is provided. The method includes the steps of: monitoring at least one physiological parameter of the user when the user is in a normal condition to determine a threshold value of the at least one physiological parameter of the user; monitoring the voice profile and speech pattern of the user when the user is in the normal condition to determine a normal voice profile and identify a normal speech pattern of the user respectively; generating a normal face profile of the user, wherein the normal face profile comprises a plurality of emotions-based characteristics of the face; determining a current value of the at least one physiological parameter, a current voice profile and a current speech pattern of the user, and a current face profile of the user using at least one image capturing and voice processing device; computing an emotional index of the user using the current value and the threshold value of the at least one physiological parameter, the current voice profile and the current speech pattern of the user with the normal voice profile and normal speech pattern of the user respectively, and the current face profile with the normal face profile; and identifying at least one emotion of the user based on the emotional index.

Inventors:
SHANTHARAM SUDHEENDRA (IN)
Application Number:
PCT/SG2018/000010
Publication Date:
July 04, 2019
Filing Date:
December 26, 2018
Assignee:
KAHA PTE LTD (SG)
International Classes:
A61B5/00; G10L25/63; G06K9/46; G10L17/26
Foreign References:
US20140112556A1 (2014-04-24)
CN106539573A (2017-03-29)
CN106910514A (2017-06-30)
CN107220591A (2017-09-29)
CN105739688A (2016-07-06)
Claims:
We claim:

1. A method for monitoring emotions of a user, the method comprising:

monitoring at least one physiological parameter of the user when the user is in a normal condition, to determine a threshold value of the at least one physiological parameter of the user;

monitoring voice profile and speech pattern of the user when the user is in the normal condition, to determine a normal voice profile and identify a normal speech pattern of the user respectively; generating a normal face profile of the user, wherein the normal face profile comprises a plurality of emotions-based characteristics of the face;

determining a current value of the at least one physiological parameter, a current voice profile and a current speech pattern of the user, and a current face profile of the user using at least one image capturing and voice processing device; computing an emotional index of the user using the current value and the threshold value of the at least one physiological parameter, the current voice profile and the current speech pattern of the user with the normal voice profile and normal speech pattern of the user respectively, and the current face profile with the normal face profile; and identifying at least one emotion of the user based on the emotional index.

2. The method as claimed in claim 1, further comprising:

receiving an input pertaining to co-ordinates of a geographical location;

identifying a plurality of users residing within a predetermined area of the geographical location, where the plurality of users are subscribers of an application server to assist in determination of the emotional index of the geographical location; computing the emotional index of each user of the plurality of users; and determining the emotional index of the geographical location based on the emotional index of each user of the plurality of users.

3. The method as claimed in claim 1, further comprising:

measuring a current level of skin conductance of the user; associating a predefined weightage with the measured current level of the skin conductance to generate a first value; and adding the first value to the computed emotional index.

4. The method as claimed in claim 1, wherein identifying at least one emotion of the user based on the emotional index comprises: computing a mean value of the emotional index for a predetermined number of days; defining at least three ranges based on the mean value of the emotional index; and identifying the at least one emotion of the user in accordance with the at least three ranges.

5. The method as claimed in claim 1, further comprising:

determining information regarding a number and a type of words or phrases spoken by the user, decibel value, speed of speech while exhibiting the at least one emotion in the normal speech pattern; and computing the current speech pattern in accordance with the number and the type of words or phrases spoken by the user.

6. The method as claimed in claim 1, wherein computing an emotional index of the user comprises: comparing the current value and the threshold value of the at least one physiological parameter to determine a first value, the current voice profile and the current speech pattern of the user with the normal voice profile and normal speech pattern of the user respectively to determine a second value, and the current face profile with the normal face profile to determine a third value; associating respective weighting factors with the first value, the second value and the third value; and computing the emotional index using the first value, the second value, the third value and the respective weighting factors.

7. The method as claimed in claim 1, further comprising: notifying at least one other user regarding the at least one emotion of the user.

8. The method as claimed in claim 1, further comprising: initiating at least one preventive action based on the emotional index of the user.

9. A system for monitoring emotions of a user, the system comprising: at least one sensor configured to monitor at least one physiological parameter of the user when the user is in a normal condition to determine a threshold value of the at least one physiological parameter of the user;

a voice profiler configured to monitor voice profile and speech pattern of the user when the user is in the normal condition to determine a normal voice profile and identify a normal speech pattern of the user respectively; a face pattern recognizer configured to generate a normal face profile of the user, wherein the normal face profile comprises a plurality of emotions-based characteristics of the face; a controller configured to determine a current value of the at least one physiological parameter, a current voice profile and a current speech pattern of the user, and a current face profile of the user using at least one image capturing and voice processing device; an emotional index calculator configured to compute an emotional index of the user using the current value and the threshold value of the at least one physiological parameter, the current voice profile and the current speech pattern of the user with the normal voice profile and normal speech pattern of the user respectively, and the current face profile with the normal face profile; and an emotion identifier configured to identify at least one emotion of the user based on the emotional index.

10. The system as claimed in claim 9, wherein the emotional index calculator is further configured to: compare the current value and the threshold value of the at least one physiological parameter to determine a first value, the current voice profile and the current speech pattern of the user with the normal voice profile and normal speech pattern of the user respectively to determine a second value, and the current face profile with the normal face profile to determine a third value;

associate respective weighting factors with the first value, the second value and the third value; and compute the emotional index using the first value, the second value, the third value and the respective weighting factors.

Description:
METHOD AND SYSTEM FOR MONITORING EMOTIONS

FIELD OF THE INVENTION

The present disclosure relates to the field of parameter monitoring, and more particularly to a method and system for monitoring emotions of a user based on monitored parameters.

BACKGROUND OF THE INVENTION

Hectic and chaotic life schedules have burdened human beings with many undesirable medical and psychological conditions. As a result, human beings continuously face stress, anger, frustration, anxiety and a disturbed emotional environment.

Various studies suggest that emotions such as stress, anger and other unpleasant emotions cause substantial damage to a person's health. The effects of these unpleasant emotions grow stronger with time unless proper action is taken. It is imperative to monitor the emotions of human beings and take preventive actions before the emotions take over the health and activities of the person.

Therefore, there exists a need for a method and a system for determining and monitoring emotions of the user.

SUMMARY OF THE INVENTION

The present invention provides mechanisms capable of monitoring, recording, identifying and determining such emotions in real time. An emotional index of the user may be determined for different circumstances, locations, public or private places etc. The emotional index can be used in real-time determination of health of the user. A smart wearable device may be used to closely monitor the emotions of the user through one or more sensors and correlate the same with the health condition of the user. Based on the monitoring, an alert may be automatically triggered if the health condition is found to be unstable and beyond a threshold value.

In an embodiment, a method for monitoring emotions of a user is provided. The method includes the steps of: monitoring at least one physiological parameter of the user when the user is in a normal condition to determine a threshold value of the at least one physiological parameter of the user; monitoring the voice profile and speech pattern of the user when the user is in the normal condition to determine a normal voice profile and identify a normal speech pattern of the user respectively; generating a normal face profile of the user, wherein the normal face profile comprises a plurality of emotions-based characteristics of the face; determining a current value of the at least one physiological parameter, a current voice profile and a current speech pattern of the user, and a current face profile of the user using at least one image capturing and voice processing device; computing an emotional index of the user using the current value and the threshold value of the at least one physiological parameter, the current voice profile and the current speech pattern of the user with the normal voice profile and normal speech pattern of the user respectively, and the current face profile with the normal face profile; and identifying at least one emotion of the user based on the emotional index.

In another embodiment, a system for monitoring emotions of a user is provided. The system includes at least one sensor configured to monitor at least one physiological parameter of the user when the user is in a normal condition to determine a threshold value of the at least one physiological parameter of the user; a voice profiler configured to monitor voice profile and speech pattern of the user when the user is in the normal condition to determine a normal voice profile and identify a normal speech pattern of the user respectively; a face pattern recognizer configured to generate a normal face profile of the user, wherein the normal face profile comprises a plurality of emotions-based characteristics of the face; a controller configured to determine a current value of the at least one physiological parameter, a current voice profile and a current speech pattern of the user, and a current face profile of the user using at least one image capturing and voice processing device; an emotional index calculator configured to compute an emotional index of the user using the current value and the threshold value of the at least one physiological parameter, the current voice profile and the current speech pattern of the user with the normal voice profile and normal speech pattern of the user respectively, and the current face profile with the normal face profile; and an emotion identifier configured to identify at least one emotion of the user based on the emotional index.

It is an object of the invention to determine the mood of the user in real time.

It is another object of the invention to ascertain the health condition of the user based on the user's real-time emotions and mood.

It is another object of the present invention to trigger an alert to the user or the user's guardians on detecting or foreseeing instability in emotional and health conditions.

It is a further object of the invention to ascertain the emotional index of a community residing in a location.

To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.

BRIEF DESCRIPTION OF FIGURES

These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein: Figure 1 illustrates a block diagram of a system level environment in accordance with an embodiment;

Figure 2 illustrates a block diagram of a smart wearable device in accordance with an embodiment of the present invention;

Figure 3 is a flowchart illustrating a method for monitoring emotions of a user in accordance with an embodiment of the present invention;

Figure 4 illustrates a block diagram of a system for monitoring emotions of a user in accordance with an embodiment of the present invention; and

Figure 5 illustrates a typical hardware configuration of a computer system, which is representative of a hardware environment for practicing the present invention.

Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.

DETAILED DESCRIPTION

For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein, being contemplated as would normally occur to one skilled in the art to which the invention relates.

It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the invention and are not intended to be restrictive thereof.

Reference throughout this specification to "an aspect", "another aspect" or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.

The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components proceeded by "comprises... a" does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.

Figure 1 shows a block diagram of a system level environment in accordance with an embodiment of the present invention. The system 100 includes one or more smart wearable devices 102, one or more mobile devices 104, an application server 106 and a database 108 connected to each other through a communication network 110. The smart wearable device 102 may be any smart device which is capable of sending commands and instructions to the application server 106 and the mobile device 104. The smart wearable device 102 may include, but is not limited to, a smart watch, smart fitness bands, smart shoes, smart glasses, smart earphones/headphones, smart clothing and smart jewelry, to name a few. The smart wearable device 102 may be connected to the application server 106 and the mobile device 104 through a radio communication network, the Internet, or short-range wireless communication methods such as Wi-Fi, Bluetooth, ZigBee, or infrared transmission. The communication network 110 comprises a master and peripheral type of connection between the smart wearable device 102 and the mobile device 104. The communication network 110 may, for example, be a radio network providing a short range, preferably between about 10m-100m, and more preferably up to 30m, within which the smart wearable device 102 and the mobile device 104 connect with each other. Further, the smart wearable device 102 may be connected either directly to the application server 106 or through the mobile device 104 (a mobile application is configured in the mobile device 104 which can receive inputs from the smart wearable device 102 and the application server 106). The smart wearable device 102 is configured to track different emotions such as anger, fear, sadness, stress, anxiety, happiness, amusement and aggression, among other emotions of the user. The different emotions may be ascertained by monitoring various physiological parameters, voice profile, facial expressions, etc. captured in real time. The physiological parameters include, but are not limited to, optical, heat, pressure, sweat (liquid), blood pressure, heart rate, sleep levels, sedentary levels, dizziness levels, fatigue levels, BMI, movement of body parts, etc. associated with the user. In an embodiment, the mobile device 104 operates independently and is configured to track the emotions of a user without the smart wearable device 102. In another embodiment, the smart wearable device 102, in operational interconnection with the mobile device 104, tracks the emotions of a user. The smart wearable device 102 communicates and transmits the details pertaining to the user activities, body condition and health-based parameters to the mobile device 104 and the application server 106. The data tracked and captured by the smart wearable device 102 and the mobile device 104 is processed by the application server 106 and stored in the database 108. The application server 106 is further configured to process data pertaining to the emotional index of different users belonging to a common location from a plurality of smart wearable devices 102. The emotional indices of the different users assist in determining the emotional index of the location.

Figure 2 illustrates a block diagram of a smart wearable device 102 in accordance with an embodiment of the present invention. The smart wearable device 102 includes one or more first sensors 202 to track and capture physiological parameters including optical, heat, pressure, sweat (liquid), heart rate, sleep levels, sedentary levels, skin conductance, etc. associated with the user. The first sensor 202 may include accelerometers, step-counters, temperature sensors, blood pressure sensors, heart rate sensors, skin conductance sensors, sleep monitoring sensors and pedometers, among others, which capture various types of data associated with the user. The data captured by the first sensor 202 is communicated through a transceiver 204 to the mobile device 104 and the application server 106. Further, the smart wearable device 102 comprises a second sensor 206 for sensing user selection input, received via an I/O unit 208, which is captured by a controller 210 and is transmitted by the transceiver 204. The second sensor 206 may, for example, be a touch sensing device, a push button on a top surface or side of the smart wearable device 102, a biometric device, a voice sensing device, a speech recognition device, a facial recognition device, a gesture sensing device, a movement sensing device, a light sensing device or a sound sensing device, among others. The user selection effects a change in voltage or current, which is captured by the controller 210 working in operational interconnection with the second sensor 206. The smart wearable device 102 also includes a memory unit 212 for storing data captured by the various sensors. The commands for the operation of each smart wearable device 102 are pre-programmed in its respective memory unit 212. The smart wearable device 102 further includes a power supply unit 214 including a battery for supplying power to various modules/units of the smart wearable device 102.

Referring to Figure 3, a flowchart illustrating a method for monitoring emotions of a user in accordance with an embodiment of the present invention is shown. The method 300 includes step 302 of monitoring at least one physiological parameter of the user when the user is in a normal condition to determine a threshold value of the at least one physiological parameter of the user. The at least one physiological parameter includes parameters relating to optical, heat, pressure, sweat (liquid), blood pressure, heart rate, sleep levels, skin conductance levels, dizziness levels, BMI, etc. associated with the body conditions of the user. The physiological parameters also include parameters relating to movement of body parts such as hands, eyes, legs, etc. The one or more first sensors 202 referred to in Figure 2 may be used to monitor data values pertaining to the aforesaid parameters. The normal condition of the user may be defined as a condition where the user is in a stable condition and the values pertaining to any of the aforesaid physiological parameters are within a range that reflects a healthy state of the user. The normal condition of the user is determined based on age, gender and day-to-day activities. The values that define the healthy state of the user are classified as threshold values. In other words, the threshold values are the values within which the user remains in a normal condition, or which do not affect the health status of the user in a negative manner. In an implementation, the physiological sensors may monitor steps per minute, the total number of steps in a day, week or month, and activity levels for determining the normal condition of the user.
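As a minimal, non-limiting sketch of one plausible realization of step 302, a per-user threshold could be derived from baseline readings taken in the normal condition. The mean-plus-deviations rule, the constant k and the sample readings below are assumptions; the method itself does not prescribe a formula.

```python
from statistics import mean, stdev

def baseline_threshold(normal_samples: list[float], k: float = 2.0) -> float:
    """Derive a threshold for one physiological parameter from readings
    captured while the user is in the normal condition: values up to
    mean + k standard deviations are treated as the healthy range.
    The rule and k = 2.0 are illustrative assumptions."""
    return mean(normal_samples) + k * stdev(normal_samples)

# Hypothetical week of resting heart-rate readings (beats per minute).
resting_hr = [68, 72, 70, 74, 69, 71, 73]
print(baseline_threshold(resting_hr))  # ~75.3 bpm for this sample data
```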

At step 304, the voice profile and speech pattern of the user are monitored when the user is in the normal condition. Based on the monitoring, the normal voice profile and the normal speech pattern of the user are respectively determined and identified. The normal speech pattern and normal voice profile may define a pattern and a profile, respectively, which reflect a routine state of speech and voice of the user without any effect on the mental and physical health of the user. Suitable voice and speech processing algorithms known in the art may be used for defining the normal voice profile and speech pattern. The normal voice pattern may be determined by recording a conversation, finding an emotion in the conversation, and analyzing the user's response to such conversation. This may further include the analysis of body language, way of communication, the language being spoken, the frequency of changing from one language to another, convenience in speaking a particular language, cuss words, etc. The speech pattern and voice profile may be defined by analyzing voice modulation, paralinguistics, use of formal and informal words, tone, volume, speed, voice quality, closed group conversations and sign language.
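Two of the speech features named above, the decibel value and the speed of speech, could be computed as in the following non-limiting sketch; the normalized-sample assumption and the function names are illustrative and not part of the disclosed method.

```python
import math

def rms_decibels(samples: list[float]) -> float:
    """RMS level of an audio frame in dB relative to full scale,
    assuming samples normalized to [-1.0, 1.0]."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12))

def words_per_minute(word_count: int, duration_seconds: float) -> float:
    """Speech speed over a transcribed segment."""
    return 60.0 * word_count / duration_seconds

print(words_per_minute(word_count=85, duration_seconds=30.0))  # -> 170.0
```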

At step 306, a normal face profile of the user is generated. The normal face profile comprises a plurality of emotions-based characteristics of the face. The normal face profile may be captured using suitable image capturing devices having face recognition algorithms known in the art. The user may be requested to provide routine facial expressions exhibiting different emotion characteristics such as happy, unhappy, sad, anxious, crying and other sensitive emotions. In an embodiment, one or more sensors may be used to automatically monitor and capture one or more real-time facial expressions of the user. Further, such facial expressions of the user are analyzed against the normal face profile of the user to retrieve one or more emotions of the user. In an implementation, the user may provide the facial expressions using pre-captured images stored in a predefined memory location. In another implementation, the user may be requested to capture images exhibiting different emotion characteristics. Each of such expressions is classified into the normal face profile of the user and is used for further ascertaining the emotional state of the user.

Thereafter, at step 308, a current value of the at least one physiological parameter, a current voice profile and a current speech pattern of the user, and a current face profile of the user are determined using at least one image capturing and voice processing device, and at step 310, an emotional index of the user is determined using the current value and the threshold value of the at least one physiological parameter, the current voice profile and the current speech pattern of the user with the normal voice profile and normal speech pattern of the user respectively, and the current face profile with the normal face profile. The step 310 of computing an emotional index of the user includes: comparing the current value and the threshold value of the at least one physiological parameter to determine a first value, the current voice profile and the current speech pattern of the user with the normal voice profile and normal speech pattern of the user respectively to determine a second value, and the current face profile with the normal face profile to determine a third value; associating respective weighting factors with the first value, the second value and the third value; and computing the emotional index using the first value, the second value, the third value and the respective weighting factors. For example, when the number of cuss words in a day exceeds the threshold value, a value of 1 is added to the emotional index. Similarly, when the user is conversing at a high decibel level in a public place, beyond the threshold value of normal speaking, a value of 1 is added to the emotional index. In a similar manner, when the speed of speech exceeds the threshold value of 160 wpm, a value of 1 is added to the emotional index. In an embodiment, each of the weighting factors is of the same value relative to the others. In an embodiment, the normal emotional index of a user in a day is computed from the first, second and third values with their respective weights. In one example, the weighting factors may also depend on one or more internal or external parameters with respect to the user, such as the time the user spends in traffic, the duration for which the user drives a vehicle, the amount of water/food consumed, the quality of food consumed by the user, the activities of the user, sleep data, sedentary data, heart rate, blood pressure, consumption of beverages, etc. For all of the parameters listed above, a threshold value, a normal value and a real-time value are maintained. In an embodiment, the emotional index is a combined index of all activities and internal and external factors which affect the body of the user either directly or indirectly.
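Purely as an illustrative sketch of the weighted combination in step 310: the threshold-comparison scoring, the equal weights and the sample numbers below are assumptions, since the exact combination rule is left open.

```python
def comparison_value(current: float, threshold: float) -> int:
    """Score one monitored channel: 1 when the current value exceeds
    the normal-condition threshold, otherwise 0, mirroring the
    cuss-word, decibel and 160 wpm examples above."""
    return 1 if current > threshold else 0

def emotional_index(first: int, second: int, third: int,
                    weights: tuple[float, float, float] = (1.0, 1.0, 1.0)) -> float:
    """Combine the physiological (first), voice/speech (second) and
    face (third) comparison values with their respective weighting
    factors; equal weights reflect the embodiment in which each
    factor carries the same value."""
    w1, w2, w3 = weights
    return w1 * first + w2 * second + w3 * third

# Hypothetical day: heart rate within threshold, speech at 170 wpm
# against the 160 wpm threshold, face profile deviating from normal.
index = emotional_index(comparison_value(72, 100),
                        comparison_value(170, 160),
                        1)
print(index)  # -> 2.0
```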

Thereafter, the method 300 at step 312 identifies at least one emotion of the user based on the computed emotional index. The at least one emotion of the user may be identified by computing a mean value of the emotional index for a predetermined number of days; defining at least three ranges based on the mean value of the emotional index; and identifying the at least one emotion of the user in accordance with the at least three ranges. For example, the emotional index is determined based on the values of the last seven days. In another example, the mean value of the emotional index is generated and at least three categories, such as a good emotional index, a medium emotional index and a low emotional index, are defined based on the range of the mean values. A good emotional index indicates that the user is generally in a happy mood, a medium emotional index indicates that the user is in a calm mood, and a low emotional index indicates that the user is in a bad mood.
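A minimal sketch of step 312, assuming the seven-day window from the example and illustrative cut-off values for the three ranges (the method only requires at least three ranges based on the mean):

```python
from statistics import mean

def classify_emotion(daily_indices: list[float]) -> str:
    """Average the emotional index over the last seven days and map
    the mean onto three ranges; the cut-offs 1.0 and 3.0 are
    illustrative assumptions."""
    m = mean(daily_indices[-7:])
    if m <= 1.0:
        return "good"    # generally a happy mood
    if m <= 3.0:
        return "medium"  # a calm mood
    return "low"         # a bad mood

print(classify_emotion([0, 1, 0, 2, 1, 0, 1]))  # -> "good"
```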

In an embodiment, the emotional index of the user may be used to determine an emotional index of a specific location. Further, the emotional index of the specific location is used to determine a safe state of the specific location. For example, the system is configured to determine a number of users present at the specific location. Further, the heart rate of each of the users present at the specific location is determined. Based on the determined heart rate of each user, an emotional condition of each user present at the specific location is determined. If, based on the emotional index of each of the users, it is determined that a substantial number of users at the specific location are in a happy mood, the specific location is declared a safe location. In an implementation, the method 300 further includes receiving an input pertaining to co-ordinates of a geographical location; identifying a plurality of users residing within a predetermined area of the geographical location, where the plurality of users are subscribers of an application server that assists in the determination of the emotional index of the geographical location; computing the emotional index of each user of the plurality of users; and determining the emotional index of the geographical location based on the emotional index of each user of the plurality of users.
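A non-limiting sketch of this location embodiment follows, assuming a 5 km radius as the predetermined area, the mean as the aggregation rule, and hypothetical subscriber records; none of these choices are prescribed by the method.

```python
from math import asin, cos, radians, sin, sqrt
from statistics import mean

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def location_emotional_index(lat, lon, subscribers, radius_km=5.0):
    """Keep subscribers of the application server residing within the
    predetermined area of the input coordinates, then aggregate their
    individual emotional indices (mean aggregation is an assumption)."""
    nearby = [s["index"] for s in subscribers
              if haversine_km(lat, lon, s["lat"], s["lon"]) <= radius_km]
    return mean(nearby) if nearby else None

subscribers = [  # hypothetical records
    {"lat": 1.3521, "lon": 103.8198, "index": 1.0},
    {"lat": 1.3550, "lon": 103.8300, "index": 3.0},
]
print(location_emotional_index(1.3521, 103.8198, subscribers))  # -> 2.0
```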

In an implementation, the method 300 further includes: measuring a current level of skin conductance of the user; associating a predefined weightage with the measured current level of the skin conductance to generate a first value; and adding the first value to the computed emotional index.
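A one-line sketch of this implementation; the 0.5 weightage, the microsiemens unit and the function name are assumptions supplied for illustration only.

```python
def skin_conductance_contribution(level_microsiemens: float,
                                  weightage: float = 0.5) -> float:
    """Apply a predefined weightage to the measured skin-conductance
    level to generate the first value added to the emotional index."""
    return weightage * level_microsiemens

index = 2.0
index += skin_conductance_contribution(1.8)  # -> 2.9
```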

In an implementation, the current speech pattern is governed in accordance with the number and type of words or phrases spoken by the user, the change in decibel value, the speed of speech and the change of language. The method 300 in such an implementation includes the steps of: determining information regarding a number and a type of words or phrases spoken by the user, a decibel value and a speed of speech while exhibiting the at least one emotion in the normal speech pattern; and computing the current speech pattern in accordance with the number and the type of words or phrases spoken by the user. For example, when the number of cuss words in a day exceeds the threshold value, a value of 1 is added to the emotional index; further, when the user is conversing at a high decibel level in a public place, beyond the threshold value of normal speaking, a value of 1 is added to the emotional index; similarly, when the speed of speech exceeds the threshold value of 160 wpm, a value of 1 is added to the emotional index. In an embodiment, any other combination of one or more parameters which are internal or external to the user is associated, and values ranging from 0 to 2 are incremented or decremented to the emotional index, depending on the complexity of the situation and the one or more parameters. In one example, a value of 1 is usually incremented when the real-time value of one or more parameters is more than the threshold value. In a similar way, a value of 1 is decremented from the emotional index when the real-time value of one or more parameters remains at or below the threshold value for a predetermined amount of time and with respect to one location.
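The increment/decrement rule for the speech parameters could be sketched as follows; the cuss-word and decibel thresholds are illustrative assumptions, while the 160 wpm threshold is taken from the example above.

```python
def speech_pattern_delta(cuss_words: int, decibels: float, wpm: float,
                         cuss_threshold: int = 3,
                         decibel_threshold: float = 70.0,
                         wpm_threshold: float = 160.0,
                         calm_for_predetermined_time: bool = False) -> int:
    """Each speech parameter exceeding its threshold increments the
    emotional index by 1; staying at or below the thresholds for a
    predetermined amount of time at one location decrements it by 1."""
    delta = 0
    delta += 1 if cuss_words > cuss_threshold else 0
    delta += 1 if decibels > decibel_threshold else 0
    delta += 1 if wpm > wpm_threshold else 0
    if delta == 0 and calm_for_predetermined_time:
        delta = -1
    return delta

print(speech_pattern_delta(cuss_words=5, decibels=78.0, wpm=170.0))  # -> 3
```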

In an implementation, the method 300 includes notifying at least one other user regarding the at least one emotion of the user. The notification may be a textual or visual notification or a voice notification. The smart wearable device is configured to generate a trigger or alarm to notify of the identified at least one emotion of the user. Based on the identified emotional index of the user, the user may be instructed to initiate a preventive action. The instruction may include suitable recommendations to bring the emotional index levels within the normal acceptable range.

The method 300, at step 314, activates at least one smart wearable device on determining the type of at least one such condition. When the system determines an increase in the emotional index and a frequent rise in the value relating to one or more parameters and subject emotions, the smart wearable device associated with the user is activated and performs one or more actions to bring such identified emotion of the user below the threshold value.

In an embodiment, the method 300 may analyze user profile data, age, gender, height, BMI, weight, work life, nature of work, time spent in traffic and vehicle ride data (the way of driving a vehicle) for determining the emotional index.

In an embodiment, one or more machine learning processes are implemented to monitor, record, identify and determine the various emotional conditions of the user in real time.

In an embodiment, the method is configured to receive user inputs in real time to determine and optimize the value of the emotional index for the user.

In an embodiment, a resting heart rate of the user is determined. For example, the resting heart rate is determined using the smart wearable device or other smart devices which are configured to automatically determine the resting heart rate. The measured resting heart rate is stored in a database for further processing. In an embodiment, the resting heart rate is a floating value. In other words, the resting heart rate includes a range of values which can be within at least 10 percent of the heart rate determined when the user is in a resting mode. Further, the lowest and highest resting heart rate values are automatically determined and set by the smart wearable device, and an optimal resting heart rate is identified for the user. In an example, the Max Heart Rate is calculated as 220 minus the age of the person in years. The Heart Rate (HR) may be divided into a plurality of zones:

HR Zone 1 = 50% to 60% of Max HR

HR Zone 2 = 60% to 70% of Max HR

HR Zone 3 = 70% to 80% of Max HR

HR Zone 4 = 80% to 90% of Max HR

HR Zone 5 = 90% to 100% of Max HR

Usually, the threshold value for a parameter is determined as 70% or 80% of the normal value for the age group. For example, if the normal HR for a moderate age group is 180, the threshold is 70% of 180. In another implementation, for each parameter above the threshold value, a value of '1' is incremented to the index in a day. In a similar way, when the value of a parameter is equal to or less than the threshold value, 'no change' or '0' is incremented to the emotional index in reference to that parameter. Every day, each parameter value is analyzed against the threshold values and the emotional index is determined. In an embodiment, an emotional index is determined using the current heart rate and the resting heart rate of the user. For example, when the current heart rate of the user is not within the range of values (lowest or highest heart rates) associated with the resting heart rate, the system is configured to determine emotions of the user by comparing at least the current heart rate with the resting heart rate. When the current heart rate is within the range of values, the user is in a 'normal' or 'ideal' state. In an embodiment, a delta value, i.e., a change value, is measured between the current heart rate and the resting heart rate. The system is configured to use the delta value or a percentage change to determine the emotional index. Subsequently, the emotional index is used to determine different emotions such as anger, fear, sadness, stress, anxiety, happiness, amusement and aggression, among other emotions of the user. For example, if the resting heart rate of the user is 80 beats per minute and the current heart rate goes beyond 120 beats per minute, the emotional index may indicate that the emotional condition of the user is anxious. In a similar way, the heart rate of the user is closely monitored and appropriate changes in the delta value assist in the determination of the emotional condition of the user. In another example, the system is capable of detecting an emotion when the user maintains the current heart rate for at least 10-15 seconds.
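A non-limiting sketch of the heart-rate computations described above: the 220-minus-age formula and the five zones are from the example, while the zone-mapping helper, the 10% resting band as a parameter, and the sample numbers are assumptions.

```python
def max_heart_rate(age_years: int) -> int:
    """Max Heart Rate = 220 - age of the person in years."""
    return 220 - age_years

def hr_zone(current_hr: float, age_years: int) -> int:
    """Map a heart rate onto the five zones listed above
    (Zone 1 = 50-60% of Max HR, ..., Zone 5 = 90-100%)."""
    pct = 100.0 * current_hr / max_heart_rate(age_years)
    for zone, lower_bound in ((5, 90), (4, 80), (3, 70), (2, 60), (1, 50)):
        if pct >= lower_bound:
            return zone
    return 0  # below Zone 1

def within_resting_range(current_hr: float, resting_hr: float,
                         band: float = 0.10) -> bool:
    """Treat the resting heart rate as a floating value within about
    10 percent of the determined resting rate."""
    return abs(current_hr - resting_hr) <= band * resting_hr

# Example from the description: resting 80 bpm, current 120 bpm.
resting, current = 80.0, 120.0
if not within_resting_range(current, resting):
    delta_pct = 100.0 * (current - resting) / resting  # -> 50.0 (% change)
    print(hr_zone(current, age_years=30), delta_pct)   # -> 2 50.0
```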

In an embodiment, the emotional index is used to determine the health condition of the user in a real-time manner. For example, a smart wearable device is configured to determine likelihood parameters of getting a heart attack. The smart wearable device is configured to monitor heart rate variations of the user and automatically trigger the alarm when the heart rate goes below such threshold values (which are determined for a person based on his or her medical conditions) or when any problem with the pulse (heart beats) is detected.

In an embodiment, the emotional index may be relative to the altitude of the location of the user. For instance, a user at high altitude may have an unstable heart rate in comparison to when present at a normal altitude. In an implementation, the sleep levels may be used to ascertain the emotional index of a user. The sleep levels determined over a predetermined number of hours and days may be used to ascertain the emotional index. For instance, lack of sleep may affect the mood of the user. In another example, when bad sleep is detected on the previous day, the method 300 considers 70% of the emotional index of that day, meaning that bad sleep is held responsible for the remaining emotional change in the day, which is relative.
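A minimal sketch of the sleep adjustment, assuming a multiplicative reading of "considers 70%"; this interpretation and the function name are assumptions.

```python
def sleep_adjusted_index(day_index: float, bad_sleep_previous_day: bool) -> float:
    """When bad sleep is detected on the previous day, attribute only
    70% of that day's emotional index to other causes; the remaining
    30% is put down to the bad sleep (an interpretive assumption)."""
    return 0.7 * day_index if bad_sleep_previous_day else day_index

print(sleep_adjusted_index(2.0, bad_sleep_previous_day=True))  # -> 1.4
```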

Referring to Figure 4, a block diagram of a system for monitoring emotions of a user in accordance with an embodiment of the present invention is provided. The system 400 includes at least one physiological parameter sensor 402 configured to monitor at least one physiological parameter of the user when the user is in a normal condition to determine a threshold value of the at least one physiological parameter of the user. The physiological parameters are monitored using sensors 402 provided with sensing capabilities to sense physiological parameters relating to optical, heat, pressure, sweat (liquid), heart rate, sleep levels, sedentary levels, skin conductance, etc. The physiological parameter sensors (or body condition monitoring sensors) 402 include, but are not limited to, accelerometers, step-counters, temperature sensors, blood pressure sensors, heart rate sensors, skin conductance sensors, sleep monitoring sensors, pedometers, etc. A voice profiler 404 embedded with suitable speech and voice processing processors is configured to monitor the voice profile and speech pattern of the user when the user is in the normal condition, to determine a normal voice profile and to identify a normal speech pattern of the user respectively. The voice profiler 404 is configured to ascertain body language, way of communication, the language being spoken, changing from one language to another (frequency of changing, most convenient language), voice modulation, paralinguistics, formal and informal words, tone, volume, speed, voice quality, closed group conversations, sign language, etc. The system 400 further includes a face pattern recognizer 406 configured to generate a normal face profile of the user, wherein the normal face profile comprises a plurality of emotions-based characteristics of the face. The face pattern recognizer 406 includes suitable image capturing devices that operate in operational interconnection with face recognition processors. In an implementation, emotions-based characteristics of the face may be automatically determined using machine learning techniques or may be classified based on input from the user. The system 400 further includes a controller 408 configured to determine a current value of the at least one physiological parameter, a current voice profile and a current speech pattern of the user, and a current face profile of the user. The current value of the at least one physiological parameter, the current voice profile and the current speech pattern of the user, and the current face profile are determined in real time using the at least one sensor 402, the voice profiler 404 and the face pattern recognizer 406 respectively. An emotional index calculator 410 is further provided to compute an emotional index of the user using the current value and the threshold value of the at least one physiological parameter, the current voice profile and the current speech pattern of the user with the normal voice profile and normal speech pattern of the user respectively, and the current face profile with the normal face profile. The system 400 includes an emotion identifier unit 412 configured to identify at least one emotion of the user based on the emotional index.

A memory unit 414 is provided to store the details pertaining to the current value and the threshold value of the at least one physiological parameter, the current voice profile and the current speech pattern of the user, the normal voice profile and normal speech pattern of the user respectively, the current face profile, the normal face profile, and the emotional index of one or more users. In an embodiment, the details stored in the memory unit 414 are synchronized with the at least one sensor 402, the voice profiler 404 and the face pattern recognizer 406 respectively on a timely basis. The system 400 further includes a display device 416 for displaying the details pertaining to the emotional index of the user.

In an embodiment, the system 400 is provided with a notification module 418 that is configured to send notifications of the identified emotional index to the user and other users, guardians or closed-group members. The notification may help in taking preventive action to avoid any unwanted health problem. The notification may include, but is not limited to, a message, a phone call, and a vibration-based notification. The system 400 is further configured to automatically generate recommendations relating to preventive actions if the emotional index is found to be outside the acceptable range or beyond the threshold value. The execution of the recommendations by the user may help in bringing the emotional index within the acceptable range.

Referring to Figure 5, a typical hardware configuration of a computer system, which is representative of a hardware environment for practicing the present invention, is illustrated. The computer system 500 can include a set of instructions that can be executed to cause the computer system 500 to perform any one or more of the methods disclosed. The computer system 500 may operate as a standalone device or may be connected, e.g., using a network, to other computer systems or peripheral devices.

In a networked deployment, the computer system 500 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computer system 500 can also be implemented as or incorporated into various devices, such as a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless telephone, a land-line telephone, a control system, a camera, a scanner, a printer, a personal trusted device, a web appliance, a network router, switch or bridge, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single computer system 500 is illustrated, the term "system" shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.

The computer system 500 may include a processor 502, e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both. The processor 502 may be a component in a variety of systems. For example, the processor may be part of a standard personal computer or a workstation. The processor 502 may be one or more general processors, digital signal processors, application specific integrated circuits, field programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analyzing and processing data. The processor 502 may implement a software program, such as code generated manually (i.e., programmed).

The computer system 500 may include a memory 504, such as a memory 504 that can communicate via a bus 508. The memory 504 may be a main memory, a static memory, or a dynamic memory. The memory 504 may include, but is not limited to, computer readable storage media such as various types of volatile and non-volatile storage media, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like. In one example, the memory 504 includes a cache or random-access memory for the processor 502. In alternative examples, the memory 504 is separate from the processor 502, such as a cache memory of a processor, the system memory, or other memory. The memory 504 may be an external storage device or database for storing data. Examples include a hard drive, compact disc ("CD"), digital video disc ("DVD"), memory card, memory stick, floppy disc, universal serial bus ("USB") memory device, or any other device operative to store data. The memory 504 is operable to store instructions executable by the processor 502. The functions, acts or tasks illustrated in the figures or described may be performed by the programmed processor 502 executing the instructions stored in the memory 504. The functions, acts or tasks are independent of the particular type of instruction set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro-code and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like. As shown, the computer system 500 may or may not further include a display unit 510, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid-state display, a cathode ray tube (CRT), a projector, a printer or other now known or later developed display device for outputting determined information. The display 510 may act as an interface for the user to see the functioning of the processor 502, or specifically as an interface with the software stored in the memory 504 or in the drive unit 516.

Additionally, the computer system 500 may include an input device 512 configured to allow a user to interact with any of the components of system 500. The input device 512 may be a number pad, a keyboard, or a cursor control device, such as a mouse, or a joystick, touch screen display, remote control or any other device operative to interact with the computer system 500.

The computer system 500 may also include a disk or optical drive unit 516. The disk drive unit 516 may include a computer-readable medium 522 in which one or more sets of instructions 524, e.g., software, can be embedded. Further, the instructions 524 may embody one or more of the methods or logic as described. In a particular example, the instructions 524 may reside completely, or at least partially, within the memory 504 or within the processor 502 during execution by the computer system 500. The memory 504 and the processor 502 also may include computer-readable media as discussed above.

The present invention contemplates a computer-readable medium that includes instructions 524 or receives and executes instructions 524 responsive to a propagated signal so that a device connected to a network 526 can communicate voice, video, audio, images or any other data over the network 526. Further, the instructions 524 may be transmitted or received over the network 526 via a communication port or interface 520 or using a bus 508. The communication port or interface 520 may be a part of the processor 502 or may be a separate component. The communication port 520 may be created in software or may be a physical connection in hardware. The communication port 520 may be configured to connect with a network 526, external media, the display 510, or any other components in system 500, or combinations thereof. The connection with the network 526 may be a physical connection, such as a wired Ethernet connection, or may be established wirelessly as discussed later. Likewise, the additional connections with other components of the system 500 may be physical connections or may be established wirelessly. The network 526 may alternatively be directly connected to the bus 508.

The network 526 may include wired networks, wireless networks, Ethernet AVB networks, or combinations thereof. The wireless network may be a cellular telephone network including GSM, GPRS, 3G, EVDO, mesh, or other network types, or an 802.11, 802.16, 802.20, 802.1Q or WiMax network. Further, the network 526 may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to, TCP/IP based networking protocols.

In an alternative example, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement various parts of the system 500. Applications that may include the systems can broadly include a variety of electronic and computer systems. One or more examples described may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.

The system described may be implemented by software programs executable by a computer system. Further, in a non-limiting example, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing can be constructed to implement various parts of the system. The system is not limited to operation with any particular standards and protocols. For example, standards for Internet and other packet switched network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP) may be used. Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions as those disclosed are considered equivalents thereof.

The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.