


Title:
WORK MANAGEMENT APPARATUS, METHOD, AND PROGRAM
Document Type and Number:
WIPO Patent Application WO/2018/158704
Kind Code:
A1
Abstract:
Provided are a line manufacturing control apparatus, method, and computer program for controlling at least a part of a manufacturing line. The apparatus comprises an activity obtaining unit configured to obtain information indicating an activity of a worker during execution of an operation, by the worker, in the manufacturing line; a first estimation unit configured to estimate emotion and cognition of the worker using first learning data; a second estimation unit configured to estimate performance of the worker using second learning data; and a controller configured to control a functioning of the at least a part of the manufacturing line based on an estimation result of performance of the worker in the operation obtained by the second estimation unit. Apparatuses, methods, and programs are also disclosed for providing driving assistance and healthcare support. The state of a worker is determined constantly, accurately, and objectively, and an accurate estimate of the worker's performance is thus obtained. Vital sign measurement data and motion measurement data obtained from workers WK1, WK2, and WK3 during operation are used as primary indicators. The primary indicators and separately generated learning data are used to estimate the emotion and the cognition of each worker. The estimated emotion and cognition are used as secondary indicators. The secondary indicators and separately generated relational expressions are used to estimate the performance of each worker. The estimated performance can be used to control a manufacturing line.

Inventors:
KOTAKE YASUYO (JP)
NAKAJIMA HIROSHI (JP)
HATAKENAKA SATOKA (JP)
Application Number:
PCT/IB2018/051279
Publication Date:
September 07, 2018
Filing Date:
February 28, 2018
Assignee:
OMRON TATEISI ELECTRONICS CO (JP)
International Classes:
G06Q10/00; A61B5/16; A61B5/18
Domestic Patent References:
WO2009052633A1, 2009-04-30
WO2006000166A1, 2006-01-05
Foreign References:
US20140240132A1, 2014-08-28
US20140336473A1, 2014-11-13
JP2011019921A, 2011-02-03
US20090066521A1, 2009-03-12
JPS5530019B2, 1980-08-07
JP2016252368A, 2016-12-27
IB2017055272W, 2017-09-01
IB2017058414W, 2017-12-27
Other References:
STEVEN J. LUCK: "An Introduction to the Event-Related Potential Technique"
J. POSNER ET AL.: "The Neurophysiological Bases of Emotion: An fMRI Study of the Affective Circumplex Using Emotion-Denoting Words", HUM BRAIN MAPP, vol. 30, no. 3, March 2009 (2009-03-01), pages 883 - 895
Claims:
CLAIMS

1. A line manufacturing control apparatus for controlling at least a part of a manufacturing line, the apparatus comprising:

an activity obtaining unit configured to obtain information indicating an activity of a worker during execution of an operation, by the worker, in the manufacturing line, the information indicating an activity of the worker being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

a first estimation unit configured to estimate emotion and cognition of the worker during the operation based on the information indicating the activity, obtained by the activity obtaining unit, used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the worker and a relationship between the activity and the cognition of the worker, wherein the first learning data comprises data generated on the basis of information indicating emotion of at least one worker, information indicating cognition of the at least one worker, and information indicating activity of the at least one worker, wherein said information indicating emotion relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity relate to at least one physiological parameter obtained by means of at least one third sensor; and

a second estimation unit configured to estimate performance of the worker in the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between the performance, and the emotion and the cognition of the worker in the operation, a controller configured to control a functioning of the at least a part of the manufacturing line based on an estimation result of performance of the worker in the operation obtained by the second estimation unit.

2. The line manufacturing control apparatus according to claim 1, wherein at least two amongst the at least one first sensor, the at least one second sensor and the at least one third sensor are different from each other.

3. The line manufacturing control apparatus according to claim 1 or 2, wherein, when at least two amongst the at least one first sensor, the at least one second sensor and the at least one third sensor are substantially the same, then said at least two sensors being substantially the same are set according to different respective configurations.

4. The line manufacturing control apparatus according to any of claims 1 to 3, wherein the activity sensor and the at least one third sensor are substantially the same.

5. The line manufacturing control apparatus according to any of claims 1 to 4, wherein the second learning data comprises data generated on the basis of information indicating performance, said information indicating emotion of at least one worker, and said information indicating cognition of the at least one worker, wherein the information indicating performance indicates performance corresponding to said information indicating emotion and said information indicating cognition.

6. A system comprising a line manufacturing apparatus according to any of claims 1 to 5, and at least one article obtained by means of said manufacturing apparatus.

7. A manufacturing line operation efficiency estimation apparatus for estimating performance in executing, by a worker, an operation within a manufacturing line, the apparatus comprising:

an activity obtaining unit configured to obtain information indicating an activity of the worker during the operation, the information indicating an activity of the worker being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

a first estimation unit configured to estimate emotion and cognition of the worker during the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the worker and a relationship between the activity and the cognition of the worker, wherein the first learning data comprises data generated on the basis of information indicating emotion of at least one worker, information indicating cognition of the at least one worker, and information indicating activity of the at least one worker, wherein said information indicating emotion relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity relate to at least one physiological parameter obtained by means of at least one third sensor; and

a second estimation unit configured to estimate performance of the worker in the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the worker in the operation.

8. A drive assisting apparatus for providing vehicle driving assistance, the apparatus comprising: an activity obtaining unit configured to obtain information indicating an activity of a subject during driving a vehicle, the information indicating an activity of the subject being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

a first estimation unit configured to estimate emotion and cognition of the subject during driving based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the subject and a relationship between the activity and the cognition of the subject, wherein the first learning data comprises data generated on the basis of information indicating emotion of at least one subject, information indicating cognition of the at least one subject, and information indicating activity of the at least one subject, wherein said information indicating emotion relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity relate to at least one physiological parameter obtained by means of at least one third sensor; and

a second estimation unit configured to estimate performance of the subject in driving based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the subject in the driving,

a controller configured to provide driving assistance based on an estimation result of the performance of the subject in the driving obtained by the second estimation unit.

9. The apparatus according to claim 8, wherein the driving assistance is provided to at least one amongst the subject and the vehicle.

10. An apparatus for healthcare support of a subject, the apparatus comprising:

an activity obtaining unit configured to obtain information indicating an activity of the subject when executing an operation, the information indicating an activity of the subject being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

a first estimation unit configured to estimate emotion and cognition of the subject when executing the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the subject and a relationship between the activity and the cognition of the subject, wherein the first learning data comprises data generated on the basis of information indicating emotion of at least one subject, information indicating cognition of the at least one subject, and information indicating activity of the at least one subject, wherein said information indicating emotion relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity relate to at least one physiological parameter obtained by means of at least one third sensor; and

a second estimation unit configured to estimate performance of the subject in executing the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the subject in the operation,

a controller configured to provide the subject with a healthcare support feedback based on an estimation result of the performance of the subject in the operation obtained by the second estimation unit.

11. The apparatus for healthcare support of a subject according to claim 10, wherein executing an operation includes at least one amongst executing an interacting operation with a machine and performing a physical exercise.

12. A method for providing vehicle driving assistance, the method comprising steps of:

obtaining information indicating an activity of a subject during driving a vehicle, the information indicating an activity of the subject being information relating to at least one physiological parameter obtained by means of at least one activity sensor; estimating emotion and cognition of the subject during driving based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the subject and a relationship between the activity and the cognition of the subject, wherein the first learning data comprises data generated on the basis of information indicating emotion of at least one subject, information indicating cognition of the at least one subject, and information indicating activity of the at least one subject, wherein said information indicating emotion relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity relate to at least one physiological parameter obtained by means of at least one third sensor; and

estimating performance of the subject in driving based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the subject in the driving,

providing driving assistance based on an estimation result of the performance of the subject in the driving obtained by the second estimation unit.

13. The method according to claim 12, wherein the driving assistance is provided to at least one amongst the subject and the vehicle.

14. A method for healthcare support of a subject, the method comprising steps of:

obtaining information indicating an activity of the subject when executing an operation, the information indicating an activity of the subject being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

estimating emotion and cognition of the subject when executing the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the subject and a relationship between the activity and the cognition of the subject, wherein the first learning data comprises data generated on the basis of information indicating emotion of at least one subject, information indicating cognition of the at least one subject, and information indicating activity of the at least one subject, wherein said information indicating emotion relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity relate to at least one physiological parameter obtained by means of at least one third sensor; and

estimating performance of the subject in executing the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the subject in the operation,

providing the subject with a healthcare support feedback based on an estimation result of the performance of the subject in the operation obtained by the second estimation unit.

15. The method for healthcare support of a subject according to claim 14, wherein executing an operation includes at least one amongst executing an interacting operation with a machine and performing a physical exercise.

16. The apparatus according to any of claims 1 to 7, wherein

the first learning data includes at least one first regression equation representing a relationship between the emotion and the activity of the worker, and the at least one first regression equation is generated with a correct value being information indicating an emotion self-reported by the worker and a variable being information indicating an activity of the worker obtained during a time period within which the self-reporting is performed.

17. The apparatus according to claim 16, wherein

the at least one first regression equation representing the relationship between the emotion and the activity of the worker includes a first regression equation generated for emotional arousal, and a first regression equation generated for emotional valence when the information indicating the emotion self-reported by the worker includes the emotional arousal and the emotional valence.

18. The apparatus according to any of claims 1 to 7, 16, and 17, wherein

the first learning data includes a second regression equation representing a relationship between the cognition and the activity of the worker, and the second regression equation is generated with a correct value being information indicating a success or a failure of the operation by the worker and a variable being information indicating an activity of the worker obtained during a time period immediately before the success or the failure of the operation is obtained.

19. The apparatus according to claim 18, wherein

the second regression equation representing the relationship between the cognition and the activity of the worker is generated with a correct value being information indicating a success or a failure of the operation by the worker and a variable being at least one of a feature quantity indicating hand movement of the worker extracted from the information indicating the activity of the worker and a feature quantity indicating eye movement of the worker extracted from the information indicating the activity of the worker.

20. The apparatus according to any of claims 1 to 7 and 16 to 19, wherein

the second learning data includes at least one of a first relational expression for estimating a skill level of the worker based on a difference between a current secondary indicator and a past secondary indicator that are estimates of the emotion and the cognition of the worker, and a second relational expression for estimating a misoperation frequency of the worker based on variations among current secondary indicators and variations among past secondary indicators that are the estimates of the emotion and the cognition of the worker.

21. The work management apparatus according to any one of claims 1 to 7 and 16 to 19, further comprising

a controller configured to control an operation of the system based on an estimation result of the performance of the worker in the operation obtained by the second estimation unit.

22. The work management apparatus according to claim 21, wherein the controller divides the estimated performance in the operation into a plurality of classes based on at least one predetermined threshold, and controls the operation of the system based on control information preliminarily associated with each of the classes.

23. A computer program enabling a processor to function as the units included in the apparatus according to any one of claims 1 to 13 and 16 to 22.

24. A line manufacturing control method for controlling at least a part of a manufacturing line, the method comprising the steps of:

obtaining information indicating an activity of a worker during execution of an operation, by the worker, in the manufacturing line, the information indicating an activity of the worker being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

estimating emotion and cognition of the worker during the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the worker and a relationship between the activity and the cognition of the worker, wherein the first learning data comprises data generated on the basis of information indicating emotion of at least one worker, information indicating cognition of the at least one worker, and information indicating activity of the at least one worker, wherein said information indicating emotion relate to at least one physiological parameter obtained by means of at least one first sensor,

said information indicating cognition relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity relate to at least one physiological parameter obtained by means of at least one third sensor; and

estimating performance of the worker in the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between the performance, and the emotion and the cognition of the worker in the operation,

controlling a functioning of the at least a part of the manufacturing line based on an estimation result of performance of the worker in the operation obtained by the second estimation unit.

25. A computer program comprising instructions which, when executed on a computer, cause the computer to execute steps according to any of claims 12 to 15 and 24.

Description:
WORK MANAGEMENT APPARATUS, METHOD, AND PROGRAM

FIELD

[0001] The present invention relates to a work management apparatus, a method, and a program used in a system involving an operation performed by a worker. Further, the invention relates to a manufacturing line operation efficiency estimation apparatus, a line manufacturing control apparatus, a drive assisting apparatus, a healthcare support apparatus, and corresponding methods and programs.

BACKGROUND

[0002] For example, early detection of equipment malfunctions in various systems, such as production lines, is a key to preventing the operational efficiency from decreasing. A system has thus been developed for detecting a sign of an equipment malfunction by, for example, obtaining measurement data indicating the operating states of equipment from multiple sensors, and comparing the obtained measurement data with pre-generated learning data (refer to, for example, Patent Literature 1).

[0003] In a production line involving an operation performed by a worker, factors known to influence the productivity, or specifically the quality and the amount of production, include 4M (machines, methods, materials, and men) factors. Three of these factors, namely, machines, methods, and materials (3M), have been constantly improved and enhanced to increase the productivity. However, the factor "men" depends on the skill level, the aptitude, and the physical and mental states of a worker. Typically, a manager visually observes the physical and mental states of the worker, and provides an appropriate instruction for the worker to maintain and enhance the productivity.

CITATION LIST

PATENT LITERATURE

[0004] Patent Literature 1: Japanese Patent No. 5530019

SUMMARY

TECHNICAL PROBLEM

[0005] However, this technique of observing the state of a worker relies on the experience or the intuition of the manager for accurate determination of the state of the worker affecting the workability. This technique may not always determine the state of the worker accurately.

[0006] In response to the above issue, one or more aspects of the invention are directed to a work management apparatus, a method, and a program that allow the state of the worker affecting the workability to be determined accurately at all times without relying on the experience or the intuition of a manager. Further, prior art techniques have attempted to improve driving safety. However, known techniques do not accurately take into account the state of the driver, nor do they address how to obtain and use that state in an objective and repeatable way to further improve safety. Still further, healthcare devices are known for supporting the healthcare of a person; however, these devices do not take into account the accurate state of the person, nor how that state can be obtained and used in an objective and repeatable way so as to contribute to improving the person's health.

SOLUTION TO PROBLEM

[0007] In response to the above issue as recognized by the inventors, a first aspect of the present invention provides a work management apparatus, a work management method, or a work management program for managing an operation performed by a worker in a system involving the operation performed by the worker. Managing the operation includes estimating performance of the worker in the operation, and controlling an operation of the system based on such estimation. In other words, any of the apparatus, method, or program is for use within a system involving the operation performed by the worker. Thus, an apparatus according to the present aspect includes a manufacturing line operation efficiency estimation apparatus for estimating performance in executing, by a worker, an operation within a manufacturing line, and a line manufacturing control apparatus for controlling at least a part of a manufacturing line.

The apparatus or the method includes an activity obtaining unit or process for obtaining information indicating an activity of the worker during the operation, a first estimation unit or process for estimating emotion and cognition of the worker during the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the worker and a relationship between the activity and the cognition of the worker, and a second estimation unit or process for estimating performance of the worker in the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between the performance, and the emotion and the cognition of the worker in the operation.
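By way of illustration only, the following sketch shows one possible arrangement of the two-stage estimation described above in Python. The class names, the linear model form, and the coefficient values are assumptions made purely for illustration; the disclosure does not prescribe this implementation.

```python
# Minimal sketch of the two-stage estimation (primary indicators -> emotion and
# cognition -> performance). Names, model forms, and coefficients are
# illustrative assumptions only.
import numpy as np

class FirstEstimator:
    """Estimates emotion (arousal, valence) and cognition from activity features."""
    def __init__(self, w_arousal, w_valence, w_cognition):
        # Each w_* is a coefficient vector of a pre-generated regression
        # equation (the "first learning data"); the last element is an intercept.
        self.w_arousal = np.asarray(w_arousal)
        self.w_valence = np.asarray(w_valence)
        self.w_cognition = np.asarray(w_cognition)

    def estimate(self, activity_features):
        x = np.append(activity_features, 1.0)  # primary indicator + intercept term
        return {
            "arousal": float(x @ self.w_arousal),
            "valence": float(x @ self.w_valence),
            "cognition": float(x @ self.w_cognition),
        }

class SecondEstimator:
    """Estimates performance from the secondary indicators (emotion, cognition)."""
    def __init__(self, w_performance):
        self.w_performance = np.asarray(w_performance)  # "second learning data"

    def estimate(self, secondary):
        x = np.array([secondary["arousal"], secondary["valence"],
                      secondary["cognition"], 1.0])
        return float(x @ self.w_performance)

# Usage: activity_features would come from vital-sign and motion sensors.
first = FirstEstimator([0.4, -0.1, 0.2], [0.1, 0.3, -0.2], [0.5, 0.1, 0.0])
second = SecondEstimator([0.6, 0.2, 0.9, 0.1])
secondary = first.estimate(np.array([0.8, 0.3]))   # primary indicators
performance = second.estimate(secondary)           # estimated performance
```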

[0008] In the apparatus, method, or program according to a second aspect of the present invention, the first learning data includes at least one first regression equation representing a relationship between the emotion and the activity of the worker. The at least one first regression equation is generated with a correct value being information indicating an emotion self-reported by the worker (or indirectly obtained by highly accurate measurement(s), see also further below) and a variable being information indicating an activity of the worker obtained during a time period within which the self-reporting is performed.

[0009] In the apparatus, method, or program according to a third aspect of the present invention, the at least one first regression equation representing the relationship between the emotion and the activity of the worker includes a first regression equation generated for emotional arousal, and a first regression equation generated for emotional valence when the information indicating the emotion self-reported by the worker includes the emotional arousal and the emotional valence.

[0010] In the apparatus, method, or program according to a fourth aspect of the present invention, the first learning data includes a second regression equation representing a relationship between the cognition and the activity of the worker. The second regression equation is generated with a correct value being information indicating a success or a failure of the operation by the worker and a variable being information indicating an activity of the worker obtained during a time period immediately before the success or the failure of the operation is obtained.
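The following sketch illustrates, under assumptions, how such regression equations could be generated from labelled samples: self-reported arousal and valence serve as correct values for the first regression equations, and the success or failure of the operation serves as the correct value for the second regression equation. Ordinary least squares, the sample values, and the variable names are illustrative choices only.

```python
# Illustrative generation of the regression equations forming the first
# learning data; ordinary least squares and the sample values are assumptions.
import numpy as np

def fit_regression(activity_samples, correct_values):
    """Fit y ~= X . w with an intercept; returns the coefficient vector w."""
    X = np.column_stack([activity_samples, np.ones(len(activity_samples))])
    w, *_ = np.linalg.lstsq(X, np.asarray(correct_values), rcond=None)
    return w

# Activity features recorded while each worker self-reported an emotion,
# or immediately before an operation result was obtained.
activity = np.array([[0.7, 0.2], [0.4, 0.6], [0.9, 0.1], [0.3, 0.8]])
arousal_reports = np.array([0.8, 0.4, 0.9, 0.2])   # self-reported arousal (correct values)
valence_reports = np.array([0.6, 0.5, 0.7, 0.3])   # self-reported valence (correct values)
operation_results = np.array([1, 0, 1, 0])         # success = 1, failure = 0

w_arousal = fit_regression(activity, arousal_reports)      # first regression (arousal)
w_valence = fit_regression(activity, valence_reports)      # first regression (valence)
w_cognition = fit_regression(activity, operation_results)  # second regression (cognition)
```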

[0011] In the apparatus, method, or program according to a fifth aspect of the present invention, the second regression equation representing the relationship between the cognition and the activity of the worker is generated with a correct value being information indicating a success or a failure of the operation by the worker and a variable being at least one of a feature quantity indicating hand movement of the worker extracted from the information indicating the activity of the worker and a feature quantity indicating eye movement of the worker extracted from the information indicating the activity of the worker.

[0012] In the apparatus, method, or program according to a sixth aspect of the present invention, the second learning data includes at least one of a first relational expression for estimating a skill level of the worker based on a difference between a current secondary indicator and a past secondary indicator that are estimates of the emotion and the cognition of the worker, and a second relational expression for estimating a misoperation frequency of the worker based on variations among current secondary indicators and variations among past secondary indicators that are the estimates of the emotion and the cognition of the worker.
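By way of illustration, the two relational expressions could take a form such as the following; the linear forms and the coefficients a, b, c, and d are assumptions, not values taken from the disclosure.

```python
# Hypothetical forms of the relational expressions in the second learning data.
import numpy as np

def estimate_skill_level(current_indicator, past_indicator, a=1.0, b=0.5):
    """Skill level from the difference between current and past secondary indicators."""
    diff = np.asarray(current_indicator) - np.asarray(past_indicator)
    return a - b * float(np.linalg.norm(diff))  # smaller change -> higher skill level

def estimate_misoperation_frequency(current_series, past_series, c=0.7, d=0.3):
    """Misoperation frequency from variations among current and past secondary indicators."""
    return c * float(np.var(current_series)) + d * float(np.var(past_series))
```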

[0013] The apparatus, method, or program according to a seventh aspect of the present invention further includes a controller for controlling an operation of the system based on an estimation result of the performance of the worker in the operation obtained by the second estimation unit.

[0014] In the apparatus, method, or program according to an eighth aspect of the present invention, the controller divides the estimated performance in the operation into a plurality of classes based on at least one predetermined threshold, and controls the operation of the system based on control information preliminarily associated with each of the classes.
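For instance, the controller's threshold-based classing could be sketched as follows; the thresholds and the control actions associated with each class are illustrative assumptions only.

```python
# Sketch of dividing the estimated performance into classes by thresholds and
# looking up control information preliminarily associated with each class.
def control_action(performance, thresholds=(0.4, 0.7),
                   actions=("slow_line_and_alert", "keep_line_speed", "raise_line_speed")):
    """Map a performance estimate to one of len(thresholds) + 1 classes."""
    cls = sum(performance >= t for t in thresholds)  # class index 0, 1, or 2
    return actions[cls]

print(control_action(0.35))  # -> "slow_line_and_alert"
print(control_action(0.85))  # -> "raise_line_speed"
```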

Further aspects are herein described, numbered as A1, A2, etc. for convenience:

According to aspect A1, there is provided a line manufacturing control apparatus for controlling at least a part of a manufacturing line, the apparatus comprising:

an activity obtaining unit configured to obtain information indicating an activity of a worker during execution of an operation, by the worker, in the manufacturing line, the information indicating an activity of the worker preferably being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

a first estimation unit configured to estimate emotion and cognition of the worker during the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the worker and a relationship between the activity and the cognition of the worker, wherein the first learning data preferably comprises data generated on the basis of information indicating emotion of at least one worker, information indicating cognition of the at least one worker, and information indicating activity of the at least one worker, wherein said information indicating emotion preferably relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition preferably relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity preferably relate to at least one physiological parameter obtained by means of at least one third sensor and/or to an activity performed by the worker/subject; and

a second estimation unit configured to estimate performance of the worker in the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between the performance, and the emotion and the cognition of the worker in the operation, a controller configured to control a functioning of the at least a part of the manufacturing line based on an estimation result of performance of the worker in the operation obtained by the second estimation unit. In general, the second sensor may be a sensor suitable for measuring a parameter indicative of the cognition state of the subject/worker, and the information indicating cognition preferably relate to at least one parameter indicative of the cognition state as measured by such a sensor. As also later explained, the cognition sensor may be suitable for measuring physiological parameters relating to cognition, and/or any parameter relating to an activity and/or execution of an activity and/or result of execution of an activity by the subject.

A2. The line manufacturing control apparatus according to aspect A1, wherein at least two amongst the at least one first sensor, the at least one second sensor and the at least one third sensor are different from each other.

A3. The line manufacturing control apparatus according to aspect A1 or A2, wherein, when at least two amongst the at least one first sensor, the at least one second sensor and the at least one third sensor are substantially the same, then said at least two sensors being substantially the same are set according to different respective configurations.

A4. The line manufacturing control apparatus according to any of aspects A1 to A3, wherein the activity sensor and the at least one third sensor are substantially the same [but may be different, or differently set, e.g. with different level of accuracies in the learning phase and in the field].

A5. The line manufacturing control apparatus according to any of aspects A1 to A4, wherein the second learning data comprises data generated on the basis of information indicating performance, said information indicating emotion of at least one worker, and said information indicating cognition of the at least one worker, wherein the information indicating performance indicates performance corresponding to said information indicating emotion and said information indicating cognition.

A6. A system comprising a line manufacturing apparatus, and at least one article obtained by means of said manufacturing apparatus. It is noted that any of aspects A2 to A6 also apply to the aspects below, so that repetitions are avoided.

A7. A manufacturing line operation efficiency estimation apparatus for estimating performance in executing, by a worker, an operation within a manufacturing line, the apparatus comprising:

an activity obtaining unit configured to obtain information indicating an activity of the worker during the operation, the information indicating an activity of the worker preferably being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

a first estimation unit configured to estimate emotion and cognition of the worker during the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the worker and a relationship between the activity and the cognition of the worker, wherein the first learning data preferably comprises data generated on the basis of information indicating emotion of at least one worker, information indicating cognition of the at least one worker, and information indicating activity of the at least one worker, wherein said information indicating emotion preferably relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition preferably relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity preferably relate to at least one physiological parameter obtained by means of at least one third sensor; and

a second estimation unit configured to estimate performance of the worker in the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the worker in the operation.

A8. A drive assisting apparatus for providing vehicle driving assistance, the apparatus comprising:

an activity obtaining unit configured to obtain information indicating an activity of a subject during driving a vehicle, the information indicating an activity of the subject preferably being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

a first estimation unit configured to estimate emotion and cognition of the subject during driving based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the subject and a relationship between the activity and the cognition of the subject, wherein the first learning data preferably comprises data generated on the basis of information indicating emotion of at least one subject, information indicating cognition of the at least one subject, and information indicating activity of the at least one subject, wherein said information indicating emotion preferably relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition preferably relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity preferably relate to at least one physiological parameter obtained by means of at least one third sensor; and a second estimation unit configured to estimate performance of the subject in driving based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the subject in the driving,

a controller configured to provide driving assistance based on an estimation result of the performance of the subject in the driving obtained by the second estimation unit.

A9. The apparatus according to aspect A8, wherein the driving assistance is provided to at least one amongst the subject and the vehicle.

A10. An apparatus for healthcare support of a subject, the apparatus comprising: an activity obtaining unit configured to obtain information indicating an activity of the subject when executing an operation, the information indicating an activity of the subject preferably being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

a first estimation unit configured to estimate emotion and cognition of the subject when executing the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the subject and a relationship between the activity and the cognition of the subject, wherein the first learning data preferably comprises data generated on the basis of information indicating emotion of at least one subject, information indicating cognition of the at least one subject, and information indicating activity of the at least one subject, wherein said information indicating emotion preferably relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition preferably relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity preferably relate to at least one physiological parameter obtained by means of at least one third sensor; and a second estimation unit configured to estimate performance of the subject in executing the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the subject in the operation, a controller configured to provide the subject with a healthcare support feedback based on an estimation result of the performance of the subject in the operation obtained by the second estimation unit.

A11. The apparatus for healthcare support of a subject according to aspect A10, wherein executing an operation includes at least one amongst executing an interacting operation with a machine and performing a physical exercise.

A12. An apparatus for handling performance in executing a task by a subject, the apparatus comprising:

an activity obtaining unit configured to obtain information indicating an activity of the subject when executing the task, the information indicating an activity of the subject preferably being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

a first estimation unit configured to estimate emotion and cognition of the subject when executing the task based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the subject and a relationship between the activity and the cognition of the subject, wherein the first learning data preferably comprises data generated on the basis of information indicating emotion of at least one subject, information indicating cognition of the at least one subject, and information indicating activity of the at least one subject, wherein said information indicating emotion preferably relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition preferably relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity preferably relate to at least one physiological parameter obtained by means of at least one third sensor; and a second estimation unit configured to estimate performance of the subject in executing the task based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the subject in the task.

A13. The apparatus according to aspect A12, further comprising

a controller configured to provide the subject with an intervention based on an estimation result of the performance of the subject in the task obtained by the second estimation unit.

A14. A method for providing vehicle driving assistance, the method comprising steps of:

obtaining information indicating an activity of a subject during driving a vehicle, the information indicating an activity of the subject preferably being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

estimating emotion and cognition of the subject during driving based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the subject and a relationship between the activity and the cognition of the subject, wherein the first learning data preferably comprises data generated on the basis of information indicating emotion of at least one subject, information indicating cognition of the at least one subject, and information indicating activity of the at least one subject, wherein said information indicating emotion preferably relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition preferably relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity preferably relate to at least one physiological parameter obtained by means of at least one third sensor; and

estimating performance of the subject in driving based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the subject in the driving,

providing driving assistance based on an estimation result of the performance of the subject in the driving obtained by the second estimation unit.

A15. The method according to aspect A14, wherein the driving assistance is provided to at least one amongst the subject and the vehicle. A16. A method for healthcare support of a subject, the method comprising steps of:

obtaining information indicating an activity of the subject when executing an operation, the information indicating an activity of the subject preferably being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

estimating emotion and cognition of the subject when executing the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the subject and a relationship between the activity and the cognition of the subject, wherein the first learning data preferably comprises data generated on the basis of information indicating emotion of at least one subject, information indicating cognition of the at least one subject, and information indicating activity of the at least one subject, wherein said information indicating emotion preferably relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition preferably relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity preferably relate to at least one physiological parameter obtained by means of at least one third sensor; and

estimating performance of the subject in executing the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the subject in the operation,

providing the subject with a healthcare support feedback based on an estimation result of the performance of the subject in the operation obtained by the second estimation unit.

A17. The method for healthcare support of a subject according to aspect A16, wherein executing an operation includes at least one amongst executing an interacting operation with a machine and performing a physical exercise.

A18. A method for handling performance in executing a task by a subject, the method comprising steps of:

obtaining information indicating an activity of the subject when executing the task, the information indicating an activity of the subject preferably being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

estimating emotion and cognition of the subject when executing the task based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the subject and a relationship between the activity and the cognition of the subject, wherein the first learning data preferably comprises data generated on the basis of information indicating emotion of at least one subject, information indicating cognition of the at least one subject, and information indicating activity of the at least one subject, wherein said information indicating emotion preferably relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition preferably relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity preferably relate to at least one physiological parameter obtained by means of at least one third sensor; and

estimating performance of the subject in executing the task based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the subject in the task.

A19. The apparatus according to aspect A18, further comprising

providing the subject with an intervention based on an estimation result of the performance of the subject in the task obtained by the second estimation unit.

A20. A work management apparatus for managing an operation performed by a worker in a system involving the operation performed by the worker, the apparatus comprising:

an activity obtaining unit configured to obtain information indicating an activity of the worker during the operation, the information indicating an activity of the subject preferably being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

a first estimation unit configured to estimate emotion and cognition of the worker during the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the worker and a relationship between the activity and the cognition of the worker, wherein the first learning data preferably comprises data generated on the basis of information indicating emotion of at least one subject, information indicating cognition of the at least one subject, and information indicating activity of the at least one subject, wherein said information indicating emotion preferably relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition preferably relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity preferably relate to at least one physiological parameter obtained by means of at least one third sensor; and

a second estimation unit configured to estimate performance of the worker in the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between performance, and the emotion and the cognition of the worker in the operation.

A21. The work management apparatus according to any of aspects A1 to A13 and A19 to A20, wherein

the first learning data includes at least one first regression equation representing a relationship between the emotion and the activity of the worker, and the at least one first regression equation is generated with a correct value being information indicating an emotion self-reported by the worker and a variable being information indicating an activity of the worker obtained during a time period within which the self-reporting is performed.

A22. The work management apparatus according to aspect A21, wherein

the at least one first regression equation representing the relationship between the emotion and the activity of the worker includes a first regression equation generated for emotional arousal, and a first regression equation generated for emotional valence when the information indicating the emotion self-reported by the worker includes the emotional arousal and the emotional valence.

A23. The work management apparatus according to any of aspects A1 to A13 and A19 to A22, wherein

the first learning data includes a second regression equation representing a relationship between the cognition and the activity of the worker, and the second regression equation is generated with a correct value being information indicating a success or a failure of the operation by the worker and a variable being information indicating an activity of the worker obtained during a time period immediately before the success or the failure of the operation is obtained.

A24. The work management apparatus according to aspect A23, wherein

the second regression equation representing the relationship between the cognition and the activity of the worker is generated with a correct value being information indicating a success or a failure of the operation by the worker and a variable being at least one of a feature quantity indicating hand movement of the worker extracted from the information indicating the activity of the worker and a feature quantity indicating eye movement of the worker extracted from the information indicating the activity of the worker.

A25. The work management apparatus according to any of aspects A1 to A13 and A19 to A24, wherein

the second learning data includes at least one of a first relational expression for estimating a skill level of the worker based on a difference between a current secondary indicator and a past secondary indicator that are estimates of the emotion and the cognition of the worker, and a second relational expression for estimating a misoperation frequency of the worker based on variations among current secondary indicators and variations among past secondary indicators that are the estimates of the emotion and the cognition of the worker.

A26. The work management apparatus according to any of aspects A1 to A13 and A19 to A25, further comprising

a controller configured to control an operation of the system based on an estimation result of the performance of the worker in the operation obtained by the second estimation unit.

A27. The work management apparatus according to aspect A26, wherein

the controller divides the estimated performance in the operation into a plurality of classes based on at least one predetermined threshold, and controls the operation of the system based on control information preliminarily associated with each of the classes.

A28. A work management method that is implemented by a work management apparatus for managing an operation performed by a worker in a system involving the operation performed by the worker, the method comprising:

obtaining information indicating an activity of the worker during the operation, the information indicating an activity of the worker preferably being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

estimating emotion and cognition of the worker during the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the worker and a relationship between the activity and the cognition of the worker, wherein the first learning data preferably comprises data generated on the basis of information indicating emotion of at least one subject, information indicating cognition of the at least one subject, and information indicating activity of the at least one subject, wherein said information indicating emotion preferably relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition preferably relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity preferably relate to at least one physiological parameter obtained by means of at least one third sensor; and

estimating performance of the worker in the operation based on the estimated emotion and cognition used as secondary indicators, and second learning data indicating a relationship between the performance, and the emotion and the cognition of the worker in the operation.

A29. A work management program enabling a processor to function as the units included in the work management apparatus according to any one of aspects A1 to A13 and A19 to A27.

A30. A line manufacturing control method for controlling at least a part of a manufacturing line, the method comprising the steps of:

obtaining information indicating an activity of a worker during execution of an operation, by the worker, in the manufacturing line, the information indicating an activity of the worker preferably being information relating to at least one physiological parameter obtained by means of at least one activity sensor;

estimating emotion and cognition of the worker during the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the worker and a relationship between the activity and the cognition of the worker, wherein the first learning data preferably comprises data generated on the basis of information indicating emotion of at least one worker, information indicating cognition of the at least one worker, and information indicating activity of the at least one worker, wherein said information indicating emotion relate to at least one physiological parameter obtained by means of at least one first sensor, said information indicating cognition relate to at least one parameter indicative of cognition and obtained by means of at least one second sensor, and said information indicating activity relate to at least one physiological parameter obtained by means of at least one third sensor; and

estimating performance of the worker in the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between the performance, and the emotion and the cognition of the worker in the operation,

controlling a functioning of the at least a part of the manufacturing line based on an estimation result of performance of the worker in the operation obtained in the estimating of the performance.

A31. A computer program comprising instructions which, when executed on a computer, cause the computer to execute steps according to any of aspects A14 to A18, A28 and A30.

It is noted that what is stated for a worker applies to a subject, and vice versa.

ADVANTAGEOUS EFFECTS

[0015] The apparatus, method, or program according to the first aspect of the present invention uses information indicating the activity of the worker during the operation as a primary indicator. The primary indicator and the separately generated first learning data are used to estimate the emotion and the cognition of the worker. The estimated emotion and cognition are then used as secondary indicators. The secondary indicators and the separately generated second learning data are used to estimate the performance of the worker in the operation. This enables the performance of the worker in the operation to be estimated based on the information indicating the activity of the worker during the operation, and enables work management in accordance with human performance corresponding to emotion and cognition, which has not been used before. In other words, factory productivity can be monitored and/or improved by taking into account the accurate state of the worker(s), which is obtained in an objective and repeatable way autonomously by an apparatus, method, or program.

[0016] The apparatus, method, or program according to the second aspect of the present invention involves the first learning data including the first regression equations representing the relationship between the emotion and the activity of the worker, and thus enables the emotion of the worker to be estimated by computation using the first regression equations, for example, without storing a large amount of learning data into a database.

[0017] The apparatus, method, or program according to the third aspect of the present invention estimates the emotion of the worker using one first regression equation generated for emotional arousal and another first regression equation generated for emotional valence. This allows the emotion of the worker to be output as numerical information represented by arousal and valence.
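By way of a purely illustrative, non-limiting example, the following sketch (in Python) applies two pre-generated first regression equations, one for arousal and one for valence, to a vector of activity features. The feature names and coefficients are hypothetical placeholders and are not values disclosed herein; the actual regression equations are generated from the first learning data as described in this document.

```python
# Minimal sketch: applying two first regression equations (one for arousal,
# one for valence) to activity features used as a primary indicator.
# Coefficients and feature names below are hypothetical placeholders.

activity_features = {"rri_mean": 0.82, "skin_potential": 0.35, "motion": 0.12}

# Hypothetical regression coefficients obtained from the first learning data.
arousal_eq = {"intercept": -0.1, "rri_mean": -0.6, "skin_potential": 1.2, "motion": 0.4}
valence_eq = {"intercept": 0.2, "rri_mean": 0.5, "skin_potential": -0.3, "motion": 0.1}

def apply_regression(eq, features):
    """Evaluate a linear regression equation on the given feature values."""
    return eq["intercept"] + sum(eq[name] * value for name, value in features.items())

arousal = apply_regression(arousal_eq, activity_features)
valence = apply_regression(valence_eq, activity_features)
print(f"estimated emotion: arousal={arousal:.2f}, valence={valence:.2f}")
```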

[0018] The apparatus, method, or program according to the fourth aspect of the present invention involves the first learning data including the second regression equation representing the relationship between the cognition and the activity of the worker, and thus enables the cognition to be estimated by computation using the second regression equation without storing a large amount of learning data into a database.

[0019] The apparatus, method, or program according to the fifth aspect of the present invention generates the second regression equation with a correct value being information indicating a result of the operation by the worker, which is a success or a failure, and a variable being the feature quantities indicating hand movement of the worker and/or the feature quantities indicating eye movement of the worker. This allows the cognition associated with the operation by the worker to be estimated accurately based on at least one of the hand movement and the eye movement directly representing the state of the operation.

[0020] The apparatus, method, or program according to the sixth aspect of the present invention prepares the second learning data as at least one of the first relational expression for estimating the skill level of the worker and the second relational expression for estimating the misoperation frequency of the worker. This allows the performance of the worker in the operation to be estimated using a specific indicator of the skill level or the misoperation frequency.

[0021] The apparatus, method, or program according to the seventh aspect of the present invention controls the operation of the system based on the estimation results of the performance of the worker in the operation. When, for example, a decrease is estimated in the performance of the worker in the operation, the system is controlled to decrease its speed. This prevents quality deterioration of the product.

[0022] The apparatus, method, or program according to the eighth aspect of the present invention stratifies (or standardizes) the operation of the system in accordance with the class of the estimated performance of the worker in the operation. This allows the operation of the system to be more easily controlled in accordance with a decrease in the performance of the worker in the operation while maintaining the intended workability.
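A minimal, non-limiting sketch of such class-based control is given below (in Python); the threshold values, class labels, and control actions are hypothetical assumptions made only for the purpose of illustration.

```python
# Minimal sketch: dividing an estimated performance value into classes by
# predetermined thresholds and looking up control information per class.
# Threshold values and control actions are hypothetical placeholders.

THRESHOLDS = [0.4, 0.7]          # boundaries between "low", "medium", "high"
CONTROL_INFO = {
    "low":    {"line_speed_factor": 0.6, "notify_leader": True},
    "medium": {"line_speed_factor": 0.8, "notify_leader": False},
    "high":   {"line_speed_factor": 1.0, "notify_leader": False},
}

def classify_performance(performance):
    """Map an estimated performance value (0..1) to a class label."""
    if performance < THRESHOLDS[0]:
        return "low"
    if performance < THRESHOLDS[1]:
        return "medium"
    return "high"

def control_for(performance):
    """Return the control information preliminarily associated with the class."""
    return CONTROL_INFO[classify_performance(performance)]

print(control_for(0.55))  # -> {'line_speed_factor': 0.8, 'notify_leader': False}
```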

[0023] The above aspects of the present invention provide a work management system, an apparatus, a method, and a program that enable the state of the worker affecting the productivity to be determined accurately at all times, without relying on the experience or the intuition of a manager.

According to further aspects, it is possible to improve safety in driving, since the state of the driver can be objectively obtained by means of an apparatus and monitored to prevent hazardous situations. Also, the accurate state can be used to provide driving assistance, thus also increasing safety. Still further, the accurate state of a person can be objectively obtained by a healthcare support apparatus, so that the health condition of the person can be better monitored and/or improved.

BRIEF DESCRIPTION OF THE DRAWINGS

[0024] Fig. 1 is a schematic diagram of a system according to an embodiment of the present invention.

Fig. 2 is a diagram showing an example emotion input device and an example measurement device included in the system shown in Fig. 1.

Fig. 3 is a diagram showing another measurement device included in the system shown in Fig. 1.

Fig. 4 is a functional block diagram of a production management apparatus installed in the system shown in Fig. 1 .

Fig. 5 is a flowchart showing the procedure and the details of emotion learning performed by the production management apparatus shown in Fig. 4.

Fig. 6 is a flowchart showing the procedure and the details of cognition learning performed by the production management apparatus shown in Fig. 4.

Fig. 7 is a flowchart showing the first half part of the procedure and its details for generating and storing emotion learning data in an emotion learning mode shown in Fig. 5.

Fig. 8 is a flowchart showing the second half part of the procedure and its details for generating and storing the emotion learning data in the emotion learning mode shown in Fig. 5.

Fig. 9 is a flowchart showing the first half part of the procedure and its details for generating and storing learning data in the cognition learning shown in Fig. 6.

Fig. 10 is a diagram showing an example working process used for describing cognition estimation.

Fig. 11 is a flowchart showing the procedure and the details of production management performed by the production management apparatus shown in Fig. 4.

Fig. 12 is a flowchart showing emotion estimation and its details in the procedure shown in Fig. 11.

Fig. 13 is a flowchart showing cognition estimation and its details in the procedure shown in Fig. 11.

Fig. 14 is a flowchart showing productivity estimation and line control in the procedure shown in Fig. 11.

Fig. 15 is a diagram describing the definition of emotion information that is input through the emotion input device shown in Fig. 2.

Fig. 16 is a diagram showing example input results of emotion information obtained through the emotion input device in the system shown in Fig. 1.

Fig. 17 is a diagram showing the classification of emotion information that is input through the emotion input device in the system shown in Fig. 1.

Fig. 18 is a diagram showing variations in emotion information that is input through the emotion input device in the system shown in Fig. 1.

Fig. 19 illustrates a block diagram of a mental state model that is well suited for technical applications wherein a person interacts with a device/machine.

Fig. 20 shows how cognitive and emotional states can be measured by way of objective and repeatable measurements.

Fig. 21 shows examples of objective and repeatable measurements.

DETAILED DESCRIPTION

[0025] The present invention is based, amongst others, on the recognition that the human factor influencing for instance productivity is based on the mental state of a person, and that it is preferable to use an appropriate model of the person (i.e. of his/her mental state) that takes into account different types of states of a person, wherein the states are directly or indirectly measurable by appropriate sensors. Thus, the mental state can be objectively and systematically observed, as well as estimated in view of the intended technical application, such that productivity can be better monitored and/or increased, and in general the efficiency in executing a task can be better monitored and/or increased.

In more detail, in order to allow a technical application that objectively and systematically takes into account a mental state, the latter can be modeled by a combination of a cognitive state (also cognition, in the following) and an emotional state (also emotion, in the following) of a person. The cognitive state of the person relates to, for example, a state indicating a level of ability acquired by a person in performing a certain activity, for instance on the basis of experience (e.g. by practice) and knowledge (e.g. by training), as also discussed further below. The cognitive state is directly measurable, since it directly relates to the execution of a task by the person. The emotional state has been considered in the past solely as a subjective and psychological state, which could not be established objectively, e.g. by technical means like sensors. Other (more recent) studies, however, led to a revision of this older view, and show in fact that emotional states of a person are presumed to be hard wired and physiologically (i.e. not culturally) distinctive; further, being based also on arousal (i.e. a reaction to stimuli), emotions can be indirectly obtained from measurements of physiological parameters objectively obtained by means of suitable sensors, as also later mentioned with reference to Figure 20.

Figure 19 shows a model of a mental state that can be used, according to the inventors, for technical applications dealing for instance with the human or 'men' factor influencing for instance productivity (the same model can also be used for other applications, discussed later, such as assisted driving or healthcare support). In particular, the model comprises a cognitive part 510 and an emotional part 520 interacting with each other. The cognitive part and the emotional part represent the set of cognitive states and, respectively, the set of emotional states that a person can have, and/or that can be represented by the model. The cognitive part directly interfaces with the outside world (dashed line 560 represents the separation from the outside world), in what the model represents as input 540 and output 550. The input 540 represents any stimuli that can be provided to the person (via the input "coupling port" 540, according to this schematic illustration), and the output 550 (a schematic illustration of an output "coupling port" for measuring physiological parameters) represents any physiological parameters produced by the person, and as such measurable. The emotional part can be indirectly measured, since the output depends on a specific emotional state at least indirectly via the cognitive state: see e.g. line 525 (and 515) showing interaction between emotion and cognition, and 536 providing output, according to the model of Figure 19. In other words, an emotional state will be measurable as an output, even if not directly, due to the interaction with the cognitive part. It is herein not relevant how the cognitive part and the emotional part interact with each other. What matters to the present discussion is that there are inputs to the person (e.g. one or more stimuli), and outputs from the person as a result of a combination of a cognitive state and an emotional state, regardless of how these states/parts interact with each other. In other words, the model can be seen as a black box having objectively measurable inputs and outputs, wherein the inputs and outputs are causally related to the cognitive and emotional states, though the internal mechanisms of such a causal relationship are not relevant here.

Even without knowledge of the internal mechanisms of the model, the inventors have noted that such a model can be useful in practical and technical applications in industry, for instance when wanting to handle human ('men') factors influencing productivity, or when wanting to control certain production system parameters depending on human performance, as will also become apparent in the following.

Figure 20 shows how cognitive and emotional states can be measured by way of objective and repeatable measurements, wherein a circle, a triangle, and a cross indicate that the listed measuring methods are respectively well suited, less suited (for instance due to inaccuracies), or (at present) considered not suitable. Other techniques are also available, such as image recognition for recognizing facial expressions or patterns of facial expressions that are associated with a certain emotional state. In general, cognitive and emotional states can be measured by an appropriate method, wherein certain variable(s) deemed suitable for measuring the given state are determined, and then measured according to a given method by means of suitable sensor(s). As also evident from Figure 20, the emotional state can be obtained by measuring respective physiological parameter(s) by at least one emotional state sensor, preferably set according to an emotional state sensor configuration, and the cognitive state can be measured by at least one cognitive state sensor, preferably set according to a cognitive state sensor configuration, wherein the at least one emotional state sensor is different from the at least one cognitive state sensor and/or the emotional state sensor configuration is different from the cognitive state sensor configuration. In other words, the emotion sensor is a sensor suitable for measuring at least a physiological parameter relating to emotion, and the cognition sensor is a sensor suitable for measuring at least a physiological parameter relating to cognition. For instance, with reference to Figure 20, LoS (Line of Sight) measurements can be performed for estimating or determining the cognitive state and/or the emotional state; however, the configuration of the sensor is different, since the parameter(s)/signal(s) used differ depending on whether emotion or cognition is to be determined. An example of a sensor for obtaining LoS is represented by a camera and an image processing unit (either integrated into or separate from the camera), wherein the camera and/or the processing unit are differently set in order to acquire a signal related to the cognitive state (e.g. any one or a combination of the following examples: the position of LoS, the track of LoS, the LoS speed, the speed of following objects with the eye(s), the convergence angle, and/or the angle of the field of vision, etc.) or a signal related to the emotional state (any one or a combination of the following examples: size of the pupils, number of blinks, etc.). For example, if the number of blinks is to be detected, the camera should be set to acquire a given number of images (or a video with a given, preferably high, number of frames per second) and the image processing unit should be set to recognize a blink; when the position of LoS is to be detected, the camera may be set to acquire just one image, even if more are preferable, and the image processing unit to detect the LoS position from the given image(s). Similar considerations apply to other signals relating to LoS for either the cognitive state or the emotional state; also, similar considerations apply to other types of signals, such as those relating to the autonomic nervous system or the musculoskeletal system, as directly evident from Figure 20.
In this regard, it is also noted that (at least according to the present knowledge) blood pressure measurements are suitable for detecting the emotional state, but not the cognitive state: thus, in this case, any blood pressure sensor would be suitable for obtaining an emotional state, and any sensor suitable for obtaining blood pressure would be an example of the emotional state sensor regardless of its configuration. Similarly, any sensor suitable for detecting movement and motion (e.g. any or a combination of: actions, track of actions, action speed, action patterns, etc., see Figure 20) is an example of a cognitive state sensor regardless of its configuration. Thus, as also shown in Figure 20, a cognitive state and an emotional state can be detected by a cognitive state sensor and, respectively, an emotional state sensor, and/or - when the sensor itself can be the same or similar - by a different configuration of the sensor. Herein, by sensor it is meant a sensing device for detecting physical signals, possibly together (as necessary) with a processing unit for obtaining information on the cognitive or emotional state on the basis of the physical signal.
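By way of a purely illustrative, non-limiting example, the following sketch (in Python) shows how one sensor type, here a camera, could be set according to two different configurations so that one produces an emotion-related signal and the other a cognition-related signal. The frame rates, signal names, and placeholder image-processing functions are hypothetical assumptions introduced only for illustration.

```python
# Minimal sketch: one sensor type (a camera) set according to two different
# configurations, one producing an emotion-related signal (e.g. blink count)
# and one producing a cognition-related signal (e.g. LoS position).
from dataclasses import dataclass

@dataclass
class CameraConfig:
    frames_per_second: int   # how densely the camera samples images (assumed values)
    signal: str              # which quantity the image processing unit extracts

def count_blinks(images):
    """Placeholder for blink detection over an image sequence (emotion-related signal)."""
    return sum(1 for img in images if img.get("eyes_closed"))

def estimate_los(image):
    """Placeholder for line-of-sight position extraction from one image (cognition-related signal)."""
    return image.get("gaze_xy", (0.0, 0.0))

emotion_config = CameraConfig(frames_per_second=120, signal="blink_count")
cognition_config = CameraConfig(frames_per_second=30, signal="los_position")

def acquire(config, images):
    """Run the image-processing step selected by the sensor configuration."""
    if config.signal == "blink_count":
        return count_blinks(images)
    if config.signal == "los_position":
        return estimate_los(images[-1])
    raise ValueError(config.signal)

frames = [{"eyes_closed": False, "gaze_xy": (0.3, 0.5)},
          {"eyes_closed": True, "gaze_xy": (0.3, 0.5)}]
print(acquire(emotion_config, frames))    # -> 1 (one blink detected)
print(acquire(cognition_config, frames))  # -> (0.3, 0.5)
```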

With reference to the emotional state sensors, it is noted that for instance the emotional state can be obtained on the basis of (i) brain related parameter(s) and/or (ii) appearance related parameter(s) and/or (iii) other parameter(s).

(i) The brain related parameter(s) obtained by suitable sensors and/or sensor configuration(s), see also Figure 20:

The brain related parameter(s) can be represented for example by brain waves obtained by EEG, e.g. by detecting an event-related potential ERP (defined as a stereotyped electrophysiological response to a stimulus). More in particular, using a relationship between the applied stimuli (e.g. music, pictures for relaxing, excitement, etc.) and the measured EEG pattern corresponding to the ERP induced by a (preliminarily learned/known, or learned for each user) stimulus, it is possible to determine whether the specific characteristic of the EEG is associated with a known emotional state (e.g. the appearance of alpha waves when relaxing). In other words, according to this example, by observing the EEG pattern, and specifically the ERP, it is possible to obtain an indirect measure of the emotional state. For more on ERP, see e.g. An Introduction to the Event-Related Potential Technique, Second Edition, Steven J. Luck, ISBN: 9780262525855.
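A minimal sketch of one such indirect measure is given below (in Python, using NumPy): it estimates the relative power of the alpha band of an EEG segment with a plain FFT. The sampling rate, the synthetic signal, and the interpretation of the alpha/beta ratio are illustrative assumptions only; actual ERP analysis as referenced above is considerably more involved.

```python
# Minimal sketch: estimating alpha-band (8-13 Hz) power from an EEG segment
# with a plain FFT. Sampling rate, threshold, and the synthetic signal are
# assumptions; a large alpha/beta ratio may be associated with a relaxed state.
import numpy as np

FS = 256  # sampling rate in Hz (assumed)

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

t = np.arange(0, 2.0, 1.0 / FS)                      # two seconds of data
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

alpha = band_power(eeg, FS, 8, 13)
beta = band_power(eeg, FS, 14, 30)
print("alpha/beta ratio:", alpha / beta)
```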

According to another example, the brain blood flow obtained by fMRI (functional Magnetic Resonance Imaging) can be used as a brain related parameter: the active region of the brain, in fact, can indicate some emotional states; for example, the correlations of the BOLD (blood oxygen level dependent) signal with ratings of valence and arousal can be obtained in this way, thus achieving an indirect measure of the emotional state (see e.g. The Neurophysiological Bases of Emotion: An fMRI Study of the Affective Circumplex Using Emotion-Denoting Words, by J. Posner et al., Hum Brain Mapp. 2009 Mar; 30(3): 883-895, doi: 10.1002/hbm.20553).

The above measurement methods/devices can also be combined. Techniques based on (i) are accurate, but the measurement devices may be large and the user's motions may be largely restricted.

(ii) Appearance related parameter(s) can be obtained from suitable sensors and/or sensor configurations (see also e.g. Figure 20), for instance on the basis of:

Facial image analysis of facial expression(s) (as captured for instance by a camera): for instance, using pixel information such as RGB values and intensities, one or more parameters including the angles of the eyebrows, the angle of the mouth, the degree of mouth opening, and/or the degree of eye opening are calculated; the emotion can then be determined (preferably automatically, by a hardware/software unit) based on the combination of one or more such parameters, using an available set of templates defining the relationship between those parameters and emotions. Acoustic analysis of voice expressions: similarly to the facial expressions, the emotion can be determined using an available set of templates defining the relationship between the parameters and emotions.

A combination of facial expressions and voice expressions can also be used. Emotions estimated on the basis of appearance related parameter(s) are estimated with a higher/increased accuracy when the information amount increases, e.g. when the number of parameters used increases, or (mathematically speaking) when using higher dimensional information. In simpler words, when acoustic analysis and facial analysis are both executed, and/or when facial analysis is performed on the basis of multiple analyses of the eyebrows, the angle of the mouth, etc., then accuracy can be increased. The more parameters are used in the analysis, however, the larger the computing resources needed for processing; moreover, providing/arranging a camera for each user or requesting voice utterances may not always be possible depending on the situation. Thus, the higher accuracy comes at a price in terms of computational resources and/or complexity of the cameras/machines used for such analysis.
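The following sketch (in Python) illustrates, under assumed parameter names, template values, and labels, how a small set of facial-expression parameters could be matched against templates defining a relationship between parameters and emotions; it is a simplified, non-limiting illustration of the template-based determination described above, not the disclosed implementation.

```python
# Minimal sketch: matching a few facial-expression parameters against a set of
# templates relating parameter combinations to emotion labels. Parameter names,
# template values, and labels are hypothetical placeholders.
import math

TEMPLATES = {
    "comfort":    {"mouth_angle": 0.4, "eyebrow_angle": 0.1, "eye_opening": 0.6},
    "discomfort": {"mouth_angle": -0.3, "eyebrow_angle": -0.4, "eye_opening": 0.4},
    "neutral":    {"mouth_angle": 0.0, "eyebrow_angle": 0.0, "eye_opening": 0.5},
}

def match_emotion(params):
    """Return the template label whose parameters are closest (Euclidean distance)."""
    def distance(template):
        return math.sqrt(sum((params[k] - template[k]) ** 2 for k in template))
    return min(TEMPLATES, key=lambda label: distance(TEMPLATES[label]))

measured = {"mouth_angle": 0.35, "eyebrow_angle": 0.05, "eye_opening": 0.55}
print(match_emotion(measured))  # -> "comfort"
```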

(iii) Other parameters, possibly obtained by other sensors and/or different configurations of sensors (see e.g. Figure 20), can be used for estimating emotions, like for instance:

Pupil size by eye image recognition (i.e. an analysis made on image(s) taken of the eye(s) of a subject), wherein the Time Resolution TR is preferably higher than 200Hz, for example;

Heart electrical activity, detected by ECG, preferably having TR higher than 500Hz, for example.

Techniques based on (iii) are accurate, but may require large computing resources in analysis.

As anticipated, cognition can be estimated for instance by LoS measurements, either by means of a specific sensor, or by a sensor being substantially the same as the one used for emotion, but differently set (set according to a different configuration) such that physiological parameter(s) corresponding to cognition are detected. More generally, the cognition sensor is a sensor suitable for obtaining physiological parameters related to cognition. For example, such physiological parameters relating to cognition can be one or a combination of LoS parameter(s), EEG parameter(s), and/or movement and/or motion parameter(s), like for example:

- As also anticipated, LoS parameters (including eye movement) relevant to cognition may be obtained by measuring for instance: the position of LoS, and/or the track of LoS, and/or the LoS speed, and/or the speed of following object(s), and/or the convergence angle, and/or the angle of the field of vision. These parameters may be detected by eye image recognition with a camera;

- With further reference to figure 20, EEG related parameters can be obtained by measuring for instance: increases and decreases in α and/or β wave activity (alpha and/or beta waves), and/or the ratio α/β; these parameters may thus be detected by EEG measurements;

- With further reference to figure 20, movement and/or motion parameters relating to cognition can be obtained by measuring for instance: actions, and/or tracks of actions, and/or action speed, and/or action patterns, and/or hand movement. These parameters may be detected by measuring, with an acceleration sensor, the acceleration generated by movement of the target, or by movement/motion recognition in a video (sequential images) capturing the target by means of a camera; by comparing or evaluating the taken picture(s) and/or video(s) against known picture(s) and/or video(s), cognition is obtained for the subject performing the operation. The feature amount in this example may be represented by the number or incidence of misoperations, or by the number of objects (parts) deviating from the predetermined positions, as also discussed further below with reference to factory automation. In the case of a vehicle/driving application, the cognition sensor can be a recording device for recording vehicle operations (such as acceleration, braking, steering operations, etc.) together with vehicle environment images (i.e., images of the outside of the vehicle). In this case, for instance, the number or incidence of misoperations is obtained by comparing the standard operation (e.g. stop before the stop line) with the detected operation in response to an external event occurring in the vehicle environment (e.g. a traffic signal turning yellow or red).
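As a purely illustrative sketch of the comparison between standard and detected operations mentioned above (in Python; the event names and the standard-response table are hypothetical assumptions), a misoperation count could be derived as follows:

```python
# Minimal sketch: counting misoperations by comparing detected vehicle
# operations against the standard operation expected for each external event.
# Event names and the standard-response table are hypothetical placeholders.

STANDARD_RESPONSE = {
    "signal_red": "stop_before_line",
    "signal_yellow": "decelerate",
    "pedestrian_crossing": "stop_before_line",
}

def count_misoperations(events):
    """`events` is a list of (external_event, detected_operation) pairs."""
    return sum(1 for event, detected in events
               if STANDARD_RESPONSE.get(event) not in (None, detected))

log = [("signal_red", "stop_before_line"),
       ("signal_yellow", "accelerate"),        # deviates from the standard operation
       ("pedestrian_crossing", "stop_before_line")]
print(count_misoperations(log))  # -> 1
```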

Further, activity can be obtained by means of an activity sensor suitable for measuring vital sign and/or motion related parameters, and includes for example sensors for measuring heart electrical activity H, and/or skin potential activity G, and/or motion BM, and/or an activity amount Ex. An example of an activity sensor is referred to as a wearable measurement device 3 as in figure 3, and is further described later. Similarly, a camera 4 (see again Figure 3) mounted on a helmet or cap of a subject may be used (by means e.g. of image processing aimed at detecting movement) as a sensor for detecting motion related parameters indicating information about the activity of the subject. Still further, a sensor for detecting blood pressure may be used as an activity sensor. For instance, as also later described, the activity sensor can be a sensor capable of measuring heart electrical activity H, skin potential activity G, motion BM, activity amount Ex, etc. With reference to the example of heart electrical activity H, the activity sensor (or a suitable configuration of a sensor suitable for measuring heart electrical activity) is capable of measuring the heartbeat interval (R-R interval, or RRI), and/or the high frequency components (HF) and/or the low frequency components (LF) of the power spectrum of the RRI, with a required Time Resolution (TR) preferably set to 100Hz - 200Hz. Such parameters can be obtained for instance by means of an ECG device and/or a pulse wave device. As discussed above, see e.g. the other parameters (iii) used for measuring emotions, heart activity can also be used for estimating emotions; however, the sensors used for measuring heart activity related to emotions must be set differently than the same sensors when used for measuring heart activity related to an activity performed by the subject; in the example discussed herein, for instance, a TR of 100-200Hz suffices for measuring activity, while a TR of 500Hz or more is preferable for measuring emotions. This means that activity measurement can be achieved with less computational resources than emotion measurement. Regardless of the complexity necessary for obtaining activity information and emotional information, both are used - once obtained - in order to generate learning data indicating a relationship between activity information and emotional information.
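The following sketch (in Python, using NumPy) illustrates one possible way of deriving the R-R interval series from detected R-peak times and estimating the LF and HF power of the resulting RRI signal. The synthetic peak times, the 4 Hz resampling rate, and the band limits are illustrative assumptions; practical HRV analysis typically uses longer recordings and more careful preprocessing than shown here.

```python
# Minimal sketch: deriving the R-R interval (RRI) series from R-peak times and
# estimating LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) power of the RRI signal.
import numpy as np

def lf_hf_ratio(r_peak_times, resample_hz=4.0):
    rri = np.diff(r_peak_times)                         # R-R intervals in seconds
    rri_times = r_peak_times[1:]                        # time stamp of each interval
    t = np.arange(rri_times[0], rri_times[-1], 1.0 / resample_hz)
    rri_even = np.interp(t, rri_times, rri)             # evenly resampled RRI series
    rri_even -= rri_even.mean()
    freqs = np.fft.rfftfreq(len(rri_even), d=1.0 / resample_hz)
    power = np.abs(np.fft.rfft(rri_even)) ** 2
    lf = power[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = power[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf

# Synthetic 5-minute recording: roughly 0.8 s heartbeat interval with variability.
rng = np.random.default_rng(0)
peaks = np.cumsum(0.8 + 0.05 * rng.standard_normal(375))
print("LF/HF:", lf_hf_ratio(peaks))
```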

Referring to emotions, by any one of or any combination of the above techniques, including (i) to (iii), the emotional state can be sensed; however, for sensing the emotions accurately, fluctuations of the states, or the continuous variations of the states, are important information to consider, which requires a relatively high time resolution and high dimensional information (thus resulting in high computing resources). Similar considerations apply to cognition sensors. In short, sensing emotion and cognition may require computationally demanding sensor units, and in general complex sensors; further, such emotion and/or cognition sensors may be cumbersome, or not easy to deploy in certain environments, especially for daily use or when several subjects are closely interacting.

In contrast thereto, the activity sensor is a sensor that requires a smaller information amount, and/or a lower processing load (including processing time), and/or a lower time resolution, and/or is constructionally simpler and/or less complex than the emotion sensor.

As anticipated, a variety of sensors are suitable for obtaining such measurements, and they are not all described herein, since any of them is suitable as long as they provide any of the parameters listed in figure 20, or any other parameters suitable for estimating cognitive and/or emotional states. The sensors can be wearables, e.g. included in a wrist or chest wearable device or in glasses, a helmet-like device for measuring brain activity from the scalp (e.g. EEG/NIRS), or a large machine like PET/fMRI.

Thus, it is possible to model a person, like for instance a factory operator (or a driver of a vehicle, or a person using a healthcare supporting device), by using a model as illustrated in figure 19, and to collect measurements of physiological parameters of the person as shown in figures 20 and 21. In this way, as also shown in the following, it is possible for instance to improve factory productivity or the monitoring of factory productivity, to improve safety of driving, and to improve the monitoring, maintenance, or improvement of the health conditions of a person.

The above explanation is provided as illustrative and preparatory to the understanding of the invention and the following embodiments/examples, without any limitation on the same.

Turning to the invention, and referring for the sake of illustration to the application of monitoring productivity on a production line: emotional and cognitive states can be estimated on the basis of first learning data and of information indicating an activity of the worker (i.e. information obtained from measurements on the worker, or in other words information relating to at least one physiological parameter obtained by means of at least one activity sensor as illustrated above, or further detailed below); the worker performance can then be estimated on the basis of the estimated cognition and emotion, and of second learning data. The emotion and cognition estimation allows obtaining an accurate estimation of the overall mental state (see e.g. the model discussed above), and the worker performance can also be more accurately estimated, such that factory productivity can be more accurately monitored when taking into account also the human factor. It is significant that this productivity estimation is reached on the basis of objective and repeatable measurements (of the worker activity) that an apparatus can perform, and on specific learning data. Details on the estimation are also provided below, but reference is also made to JP2016-252368 filed on 27 December 2016, as well as to the PCT application PCT/IB2017/055272 (reference/docket number 198 759) filed by the same applicant and on the same date as the present one, as well as to PCT application PCT/IB2017/058414 describing for instance how the emotional state can be estimated.

The first learning data preferably comprises data generated on the basis of information indicating emotion of at least one worker, information indicating cognition of the at least one worker, and information indicating activity of the at least one worker, wherein said information indicating emotion relate to at least one physiological parameter obtained by means of at least one first sensor (e.g. an emotion sensor as illustrated above or further detailed below), said information indicating cognition relate to at least one parameter obtained by means of at least one second sensor (e.g. one cognition sensor as introduced above and further detailed later), and said information indicating activity relate to at least one physiological parameter obtained by means of at least one third sensor (as illustrated above, or further detailed below). As explained above, the sensor(s) required to measure activity is less complex and/or less cumbersome than the sensors used to measure emotion and cognition. Thus, emotion and cognition are measured accurately with respective suitable sensors, and the activity is also measured in correspondence with the measured emotion and measured cognition. The collected measurements are then used to generate the first learning data, and thus to generate the relationship between emotion and activity, and the relationship between cognition and activity. The learning data is then "used in the field", e.g. in the manufacturing line, in (or for) the car, or in a healthcare support device, depending on the application also illustrated below. In the field, it is then not necessary to perform the complex measurements on emotion and cognition; it suffices to perform the easier measurements on activity, since the emotion and cognition can be estimated on the basis of the first learning data. The estimation is nevertheless accurate, since the first learning data is obtained from accurate measurements. Thus, it is possible to estimate emotion and cognition in the field by means of a reduced number of sensors, and by using simple and non-complex sensors. Once the emotion and cognition are estimated, it is also possible to estimate the performance of the subject in a very accurate manner, since not only cognition but also emotion is taken into account, and by using few and simple sensors. It follows that the estimation of the performance in an operation, like e.g. an operation relating to manufacturing, can be accurately obtained with few and simple sensors. In fact, the activity sensor(s) may also be a wearable sensor or included in a wearable device. As further examples, the activity information can be obtained, as also discussed later, by other measurements, like for instance based on any one or any combination of:

- Skin potential activity G, e.g. by measuring the galvanic skin response (GSR); this is a parameter easier to obtain, when compared to parameters used to measure an emotional state;

- The eye movement EM, e.g. by measuring the eye movement speed and the pupil size (e.g. based on captured image(s) or video(s) of a subject); in this case, when noting that the same or similar parameters can be used also for obtaining emotions (see (iii) above), the required TR may be equal to or lower than 50Hz (fluctuations or continuous variations of the sensed parameter are not obtained within this range of TR). Similarly to the case of heart activity, the EM measurements related to the activity of the subject are easier to obtain than the EM measurements related to emotions.

- The motion BM, like e.g. the hand movement speed. This is also a parameter that is easier to obtain than parameters related to emotions.

In general, therefore, activity information is easier to obtain (than cognition or emotion information) either because it can be obtained by less complex sensors than those required for measuring emotion or cognition, or - when the same type of sensor is used - because the configuration of the sensor for acquiring activity information requires fewer computing resources than the configuration for acquiring emotion or cognition. Thus, by using the learning data and the (easily) acquired activity information, it is possible to obtain the emotional state and cognitive state of a subject. As a consequence of obtaining the estimated emotional and cognitive states, safer driving, improved manufacturing, and improved health conditions can be conveniently achieved by easily taking into account the mental state of a subject interacting with a device. In this way, productivity and/or quality can also be controlled and improved. Also here, significantly, the better productivity/quality is achieved on the basis of objective and repeatable measurements, and on specific learning data. In other words, once the performance is obtained, it is optionally possible to apply a control on the functioning of the system with which the subject interoperates, i.e. a feedback is applied on the system depending on the estimated performance. By way of the control/feedback, the overall efficiency of the system, which depends on the interactions between the subject and the system or its components, can be improved. It is therefore possible to improve the system, since the performance, on the basis of which the feedback is applied, can be more accurately estimated, and importantly by means of few and simple sensors.
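By way of a non-limiting illustration of the two phases discussed above, the following sketch (in Python, using NumPy) fits a simple linear relationship between activity features and an emotion correct value during a learning phase, and then estimates emotion in the field from activity measurements alone. The data, the feature names, and the linear model are assumptions made only for the purpose of illustration and do not represent the disclosed learning procedure itself.

```python
# Minimal sketch of the two phases: (1) a learning phase in which accurate
# emotion correct values (from complex sensors) and activity features (from
# simple sensors) are collected and regressed, and (2) a field phase in which
# only activity is measured and emotion is estimated from it.
import numpy as np

rng = np.random.default_rng(1)

# --- learning phase: paired (activity, correct value) samples ----------------
activity = rng.random((50, 3))                  # e.g. RRI mean, GSR, motion amount
true_w = np.array([0.8, -0.5, 0.3])             # hidden relationship (for the demo)
emotion_correct = activity @ true_w + 0.05 * rng.standard_normal(50)

X = np.hstack([np.ones((50, 1)), activity])     # add intercept column
first_learning_data, *_ = np.linalg.lstsq(X, emotion_correct, rcond=None)

# --- field phase: only the activity sensor is available ----------------------
field_activity = np.array([0.6, 0.2, 0.9])
estimated_emotion = first_learning_data @ np.concatenate([[1.0], field_activity])
print("estimated emotion indicator:", round(float(estimated_emotion), 3))
```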

Optionally, at least two amongst the cognition, emotion, and activity sensors may be different from each other: for instance, as also evident from the present description, it is possible to use a camera for measuring emotion (e.g. size of the pupils) and cognition, and a sensor for measuring blood pressure or skin potential activity G.

It is also possible that the three sensors are different from each other: e.g. a camera is used for determining cognition, an ECG is used for measuring emotion, and a skin potential activity sensor is used as the activity sensor. Other configurations are evidently possible, as also explained in the present description.

Optionally, when at least two amongst the emotion, cognition, and activity sensors are substantially the same, then the sensors being substantially the same are set according to different respective configurations. By "substantially the same" (or by "the same") it is herein meant that the sensors are of the same type. The camera is one example of a "substantially the same" sensor used for measuring cognition and emotion: in fact, two distinct cameras being exactly the same can be provided, one for measuring emotion, the other for measuring cognition; alternatively, two different cameras can be used, e.g. with different resolutions, for measuring emotion and cognition, respectively. In such a case, the configurations of the two cameras are differently set, so that one produces an emotion measurement, and the other a cognition measurement. Further, it is also possible to have one single camera for measuring emotion and cognition, in which case the processing unit and/or software used in combination with the camera is configured to differently process the image/video taken in order to produce an emotion or cognition measurement. Still further, the same picture(s) and/or video(s) taken by one single camera can be differently processed to produce emotion and cognition measurements, according to different configurations for processing the image/video. Reference is also made to the above example relating to the different configurations of an ECG for measuring activity or emotion. These are some of the examples of the same sensor being differently configured to produce several respective pieces of information relating to emotion, cognition, and/or activity.

As summarized above, the activity "in the field" is measured by means of an activity sensor. The first learning data is preferably obtained before the deployment of the solution "in the field", or separately (e.g. while the solution is running on an old set of first learning data, a newer set of first learning data is in the process of being separately generated). When obtaining the first learning data, the activity also needs to be measured by an activity sensor. The activity sensor used in the field and the activity sensor used for generating the learning data can be the same, but need not necessarily be the same. For instance, different levels of accuracy may be desired for measuring activity in the field and when collecting data for learning, or different sizes/types of devices may be used depending on their size, complexity, etc.

In order to further illustrate the inter-relationship and differences amongst the different sensors herein discussed, the following non-limiting examples are also given: the emotion sensor can be any sensor suitable for measuring physiological parameters relating to emotion as discussed above, see also figure 20. The cognition sensor can be any sensor for measuring any parameter related to cognition, see e.g. the above discussion and/or figure 20 and/or also the monitoring camera CM discussed below for checking operation results (indicative in fact of the cognition, i.e. the level of skills and capabilities of the worker). Thus, the cognition sensor is suitable for measuring physiological parameters relating to cognition (see e.g. figure 20), or any parameter relating to an activity and/or execution of an activity and/or result of execution of an activity by the subject (in general, a sensor capable of measuring a parameter indicative of the cognition state of the subject). The emotion sensor and cognition sensor are thus sensors suitable for providing correct values for an emotional state and a cognitive state, respectively, of a person, wherein "correct value" is used to indicate a preferably highly accurate measurement relating to emotion and cognition, respectively, as also illustrated further below. With regard to the activity sensor, as said, the activity sensor and/or the respective configuration used during the learning phase may be the same as or different from the activity sensor and/or its configuration used during the estimation phase (i.e. in the field). Further, the activity sensor - either for the learning phase and/or for the estimation (in the field) phase - may be (but not necessarily) different and/or differently set (i.e. with a different configuration) in relation to emotion and cognition. For example, when used in the learning phase in order to gather measurement data for generating the emotion-activity relationship (the first learning data), the activity sensor may be e.g. one or more sensors like the later described measuring device 3 (e.g. suitable for measuring the heart electrical activity H, the skin potential activity G, the motion BM, and/or the activity amount Ex), or like the later described eye movement (EM) monitoring camera 4. The same sensor may then be used in the field, i.e. in the estimation phase. In other words, the activity sensor may be a sensor suitable for obtaining activity parameters related to an emotional state, and this sensor may be used in the learning phase and/or estimation phase (in the field). As a further example, when used in the learning phase in order to gather measurement data for generating the cognition-activity relationship, the activity sensor may be a triaxial acceleration sensor included e.g. in the measuring device 3 (motion BM indicating e.g. hand movement), or an eye movement monitoring (EM) camera 4. In other words, the activity sensor may be a sensor suitable for obtaining activity parameters related to a cognitive state, and this sensor may be used in the learning phase and/or estimation phase (in the field). When the activity sensor and/or its configuration are different for measuring activities depending on whether the measurement is needed for emotion or cognition, respectively, higher accuracy is achieved.
Further, the same two different sensors and/or respective configurations can be used both in the learning phase and in the estimation phase (in the field): this is however not necessary, as in fact different sensors can be used in the learning phase, while in the field such differentiation may not be used, so as to obtain a system that is easier to implement in the field. The opposite situation is also possible, i.e. different sensors are used in the field, but not in the learning phase. Still further, while the described optional configuration may be advantageous, there is no need to differentiate between activity sensors relating to emotion and cognition, as in fact also the same activity sensor and/or respective configuration can be used regardless of whether emotion or cognition is to be learned/estimated; in this case, a simpler system can be implemented.

Further, the second learning data may optionally and preferably comprise data generated on the basis of information indicating performance, the information indicating emotion of at least one worker, and the information indicating cognition of the at least one worker. The information indicating emotion and cognition may be the same as that used for generating the first learning data, i.e. it is not necessary to repeat the measurement. However, this is not strictly necessary, as in fact it is possible to take emotion and cognition measurements for the first learning data, and to take emotion and cognition measurements separately for the second learning data. The information indicating performance indicates performance in correspondence with the information indicating emotion and the information indicating cognition, wherein the performance can be measured in known ways, as also explained later (e.g. how many articles are manufactured in a unit of time, and/or the quality level achieved in the manufacturing; accuracy in driving; level of health conditions, etc.).

The learning data herein discussed can be obtained on the basis of one subject, or of a plurality of subjects. In case the data are obtained on the basis of only one subject, that subject may be, but need not necessarily be, the same one on which the later performance estimation is performed. In addition, the activity information and emotion information (on which the learning process is then performed) can be obtained for a given subject, preferably when the subject is performing a certain task (also herein referred to as an operation). Further preferably, the certain task belongs to a set of tasks including at least one task characterized by interaction between the subject and a device. For instance, if the device is a vehicle, the task can be represented by a driving operation of the vehicle (a driving type of task), and the activity, cognition and emotion information (necessary for generating the learning data) are obtained when the subject is driving, e.g. by means of sensors and/or sensor configurations compatible with driving. In another example, the task relates to performing an operation on a production line (a manufacturing type of task), and the emotion, cognition and activity information are obtained while the subject(s) performs the task in the production line. In another example, the task relates to an action performed when the subject is coupled to a health care device (a healthcare related type of task), and the emotion, cognition and activity information are obtained when the user performs such action. The learning process can be performed on data referring to activity and emotion information for one or more subjects performing the same or different types of task.

Further, the line manufacturing apparatus may be included in a system, which also includes an article obtained by means of the manufacturing apparatus.

In other illustrative applications like for instance assisted driving or healthcare support, higher safety in driving, more accurate healthcare monitoring, or improved health conditions can be reached on the basis of objective and repeatable measurements, and on specific learning data. What has been said above applies also to the following embodiments, such that repetitions will be avoided.

Embodiments of the present invention will now be described with reference to the drawings.

Embodiment 1

Principle

For example and as anticipated, factors that may influence the productivity of a production line include 4M (machines, methods, materials, and men) factors. In the present embodiment, the factor "men", which may influence the productivity, may be defined as emotion and cognition based on the neural activity of the brain. The emotion is, for example, human motivation and mood (comfort or discomfort) for an operation, and varies during a relatively short period such as hours or days. The cognition is a human baseline ability. This ability is associated with, for example, human attention to and judgment about an operation, and varies during a relatively long period such as months or years.

[0026] In the present embodiment, information indicating the human activity correlated with the neural activity of the brain, such as vital signs and motion information, is used as a primary indicator (for example, when using regression analysis, an indicator as herein used can be represented by an independent variable; in other words, information indicating human activity may represent independent variable(s) when using regression analysis). The information indicating the activity and the emotion correct value, as for instance input by the worker, are used to estimate the emotion. Examples of the information indicating the activity include vital signs and motion information such as the heart electrical activity, the skin potential activity, the motion, and the amount of exercise. With emotion correct value it is herein meant a value indicating the emotional state of the person (e.g. worker, driver, person using a healthcare device), which value is considered correct or highly accurate. In other words, the emotion correct value is (preferably, highly) accurate information on the emotional state of a person. The emotion correct value can be obtained, in one example, by means of an emotion input device 2. For simplicity, as later described in the example referring to figure 2, the emotion input device 2 can be represented by a device to which the person (e.g. worker) can input his/her current emotion. However, the emotion input device 2 can be represented for instance by a measurement apparatus and/or sensor (or combination of a plurality of such measurement apparatuses and/or sensors) capable of acquiring an emotion correct value (i.e. a highly accurate information on the emotional state), i.e. by means of suitable measurements made on the subject, see also the above discussion in relation to Figures 20 and 21 . In particular and preferably, the correct emotion value is acquired by means of devices suitable for determining such state with high precision/accuracy (regardless of the size and complexity of the sensor or device used; preferably, such sensors are large and complex devices achieving higher accuracy than other sensors as those included in wearables). Also, a combination of both an indirect (by means of accurate measurements) and direct (e.g. by means of user inputting his/her own state into a device) determination of the emotional state is possible. The correct emotion value herein discussed can be acquired for each of a plurality of workers, as also further later illustrated. In general, the emotion correct value and the cognition correct value can be obtained by at least one emotion sensor, and, respectively, by a cognition sensor, wherein such sensors are as above explained.

[0027] The cognition is estimated using, as primary indicators (when using for example regression analysis, the independent variable(s) may be given by such indicator(s)), the feature quantities of, for example, eye movement and hand movement representing the attention and judgment in the information indicating the human activity. The feature quantities of eye movement and hand movement, and the cognition correct value are used to estimate the cognition. Examples of the feature quantities representing eye movement include the eye movement speed, the gaze coordinates and the gaze duration, the number of blinks, and changes in the pupil size. Examples of the feature quantities representing hand movement include triaxial acceleration. With cognition correct value it is herein meant (preferably highly accurate) information indicative of the cognitive state of the person, which information is acquired by means of one or more apparatuses, devices and/or sensors capable of determining whether an operation by the person is as expected, e.g. whether a detected operation (as acquired by such device/apparatus/sensor) is according to a predetermined pattern and/or template for such operation. An example of such a device/apparatus/sensor is given by a work monitoring camera CM, also described later. Further examples are given above, see the discussion on cognition sensors. When using for example regression analysis, the cognition correct values may be represented as dependent variable(s). Thus, when using regression analysis for emotion or cognition, a relationship can be found between dependent variable(s) and independent variable(s), wherein the dependent variable(s) represent the correct values for emotion and, respectively, cognition, and the independent variable(s) represent indications of human activity as appropriately measured.

[0028] In the present embodiment, the emotion learning data and the cognition learning data are preliminarily generated for each worker. These learning data items are generated based on the above correct values (e.g. dependent variables) and primary indicators (e.g. independent variables). A change in the activity of the worker is measured during operation, and the measurement data is used as a primary indicator. This primary indicator and the learning data are used to estimate a change in each of the emotion and the cognition of the worker. In other words, (first) learning data is generated for instance by regression analysis between activity indication values (independent variables) and correct values (dependent variables) of emotion and, respectively, cognition - on the basis of data available for one or more persons, for instance. Once the learning data has been obtained, the emotion and/or cognition can be estimated on the basis of the (previously generated) learning data and the current activity as detected for a person at a certain point in time when the emotion/cognition is to be estimated.

[0029] In addition, relational expressions representing the correlation between the changes in the emotion and the cognition and a change in the performance (or, more generally, the correlation between the emotion and the cognition, and the performance) in the operation by the worker are preliminarily generated for each worker as learning data for estimating the performance in the operation by the worker. In an example using regression analysis, the performance (or change in performance) may be represented as dependent variable(s). Information indicating performance or change in performance may be obtained for instance by measuring the speed of producing an item, and/or how many items are produced per hour, and/or the quality in producing item(s), etc., as also explained later. The estimated changes in the worker's emotion and cognition are used as secondary indicators; in the example of regression analysis, the secondary indicator(s) may be represented as independent variable(s). The secondary indicators and the relational expressions are used to estimate a change in the worker's current or future performance in the operation. In other words and as an example, (second) learning data is generated using regression analysis between performance information (as dependent variable(s)) and estimated emotion and/or cognition (as independent variable(s)). Once the (second) learning data is obtained, the actual performance can be estimated based on the emotion and/or cognition as estimated for a person at a certain point in time.
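A minimal, non-limiting sketch of this second stage is given below (in Python, using NumPy): a relational expression between performance (dependent variable) and the estimated emotion and cognition (independent variables, i.e. the secondary indicators) is generated by least squares, and then used to estimate the current performance. All numerical values are hypothetical and chosen only for illustration.

```python
# Minimal sketch: generating second learning data as a relational expression
# between performance and the secondary indicators (estimated emotion and
# cognition), then using it to estimate the current performance.
import numpy as np

rng = np.random.default_rng(2)

emotion = rng.random(40)                 # estimated emotion indicator per sample
cognition = rng.random(40)               # estimated cognition indicator per sample
performance = 0.3 + 0.4 * emotion + 0.5 * cognition + 0.02 * rng.standard_normal(40)

X = np.column_stack([np.ones(40), emotion, cognition])
relational_expression, *_ = np.linalg.lstsq(X, performance, rcond=None)

# Field-phase use: plug in the currently estimated secondary indicators.
current = np.array([1.0, 0.7, 0.4])      # [intercept, estimated emotion, estimated cognition]
print("estimated performance:", round(float(relational_expression @ current), 3))
```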

[0030] The information indicating the performance in an operation in a production line is typically defined by the quality and the number of products. In the present embodiment, this information is more specifically represented by skill level information and misoperation frequency information. The skill level information is represented by, for example, a difference between a standard operation time and an actual operation time. The misoperation frequency information is represented by, for example, deviations of the actual operation time from an average operation time. Thus, the information indicating the performance in an operation is defined as information representing the degrees of the worker's concentration, fatigue, and skill in the operation.

[0031] In the present embodiment, the information about the difference between the standard operation time and the actual operation time, and the information indicating deviations of the actual operation time from the average operation time are estimated for each worker as the information indicating the performance in the operation. The estimation results are used to control the line.
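The following sketch (in Python) illustrates, with hypothetical values for the standard operation time and the deviation threshold, how the two indicators described above could be computed from a series of measured operation times:

```python
# Minimal sketch: computing skill level information and misoperation frequency
# information from measured operation times. The standard operation time and
# the deviation threshold are hypothetical values.
import statistics

STANDARD_OPERATION_TIME = 30.0   # seconds (assumed standard for this process)
DEVIATION_THRESHOLD = 1.5        # deviations larger than this count as misoperations

actual_times = [31.2, 29.8, 33.0, 36.5, 30.1]   # seconds, measured per cycle

# Skill level: difference between the standard and the (average) actual operation time.
skill_level = STANDARD_OPERATION_TIME - statistics.mean(actual_times)

# Misoperation frequency: share of cycles deviating strongly from the average.
avg = statistics.mean(actual_times)
misoperations = sum(1 for t in actual_times if abs(t - avg) > DEVIATION_THRESHOLD)
misoperation_frequency = misoperations / len(actual_times)

print(f"skill level: {skill_level:.2f} s, misoperation frequency: {misoperation_frequency:.2f}")
```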

What has been explained above for a worker equally applies to persons like a driver, or a person using a healthcare device.

In the case of a driver, for instance, correct values used for cognition estimation may be represented by how correctly the driving task is executed, which can be obtained e.g. by measuring certain driving parameters like how correctly the vehicle follows certain predetermined routes (e.g. comparing how smoothly the actual driving route corresponds to an ideal route obtained from a navigation system), how smooth the control of the vehicle is (e.g. whether or how often any sudden change of direction occurs), the degree to which the driver recognizes an obstacle, etc. The performance values of one driver (in the sense of performance in executing driving, to be used for obtaining learning data by way of regression analysis) can e.g. be obtained by comparing for instance the distance covered over a certain period with an expected distance for that period, or whether, in reaching two points, a certain route has been followed compared to predetermined available routes, etc.

In the case of a person using a healthcare assistance device, the correct values for cognition estimation may be obtained by measuring how certain tasks are executed: for instance, how straight and balanced the person's body position is when walking, running, or sitting (e.g. compared with predetermined patterns); how smoothly certain movements are made compared with predetermined patterns; etc. The performance values of the person (to be used for obtaining learning data by way of regression analysis) can be obtained, e.g., by measuring the efficiency and/or quality in completing a certain task or number of tasks, such as measuring the distance covered on foot against an expected distance, or measuring the time for accomplishing a task against a predetermined time (e.g. completing a housecleaning or hobby-related operation, or the number of such operations performed in an hour or day), etc.

Other values and considerations apply as in the case of a worker.

System Configuration

[0032] A system according to an embodiment of the present invention is a cell production system. The cell production system divides the product manufacturing process into multiple sections. The production line has working areas, called cells, for these sections. In each cell, a worker performs the operation of the assigned section.

[0033] Fig. 1 shows an example cell production system, which includes a U-shaped production line CS. The production line CS includes, for example, three cells C1 , C2, and C3 corresponding to different sections on the course of the products. Workers WK1 , WK2, and WK3 are assigned to the cells C1 , C2, and C3, respectively. In addition, a skilled leader WR is placed to supervise the overall operation on the production line CS. The leader WR has a portable information terminal TM, such as a smartphone or a tablet terminal. The portable information terminal TM is used to display information for managing the production operation provided to the leader WR.

[0034] A part feeder DS and a part feeder controller DC are located most upstream of the production line CS. The part feeder DS feeds various parts for assembly onto the line CS at a specified rate in accordance with a feed instruction issued from the part feeder controller DC. Additionally, the cell C1, which is a predetermined cell in the production line CS, has a cooperative robot RB. In accordance with an instruction from the part feeder controller DC, the cooperative robot RB assembles a part into a product B1 in coordination with the part feed rate.

[0035] The cells C1, C2, and C3 in the production line CS have monitors MO1, MO2, and MO3, respectively. The monitors MO1, MO2, and MO3 are used to provide the workers WK1, WK2, and WK3 with instruction information about their operations and other messages.

[0036] A work monitoring camera CM is installed above the production line CS. The work monitoring camera CM captures images to be used for checking the results of the production operations for the products B1, B2, and B3 performed by the workers WK1, WK2, and WK3 in the cells C1, C2, and C3. The results of the production operations are used as correct values when learning data for cognition estimation is generated. The numbers of monitors, sections, and workers, and the presence or absence of a leader, are not limited to those shown in Fig. 1. The production operation performed by each worker may also be monitored in a manner other than by using the work monitoring camera CM. For example, sound, light, and vibrations representing the results of the production operation may be collected, and the collected information may be used to estimate the results of the production operation.

[0037] To estimate the emotion and the cognition of each of the workers WK1, WK2, and WK3, the workers WK1, WK2, and WK3 have input and measurement devices SS1, SS2, and SS3, respectively. The input and measurement devices SS1, SS2, and SS3 each include an emotion input device 2 for receiving an emotion correct value, and a measurement device 3 and/or an eye movement monitoring camera 4 for measuring the worker's activity used as a primary indicator for estimating the emotion and the cognition.

[0038] The emotion input device 2, which is for example a smartphone or a tablet terminal as shown in Fig. 2, displays an emotion input screen under the control of application programs. The emotion input screen shows emotions using a two-dimensional coordinate system with emotional arousal on the vertical axis and emotional valence on the horizontal axis. When a worker plots the position corresponding to his or her current emotion on the emotion input screen, the emotion input device 2 recognizes the coordinates indicating the plotted position as information indicating the emotion of the worker.

[0039] This technique of expressing emotions using arousal and valence on a two-dimensional coordinate system is known as Russell's circumplex model. Fig. 15 schematically shows this model. Fig. 16 is a diagram showing example input results of emotion at particular times obtained through the emotion input device 2. The arousal indicates whether the emotion is activated or deactivated and the degree of activation or deactivation, whereas the valence indicates whether the emotion is comfortable (pleasant) or uncomfortable (unpleasant) and the degree of comfort or discomfort.

[0040] The emotion input device 2 transforms the position coordinates detected as the emotion information into the arousal and valence values and the information about the corresponding quadrant of the two-dimensional arousal-valence coordinate system. The resultant data, to which time stamp data indicating the input date and time is added, is transmitted as emotion input data (hereinafter referred to as scale data) to a production management apparatus 1, which is a work management apparatus, through a network NW using a wireless interface. However, as explained above, the emotional state can also be obtained by emotion sensors other than the device 2, or by such sensors in combination with the device 2.
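As an illustration of the transformation just described, the sketch below converts a plotted point into quadrant information, arousal/valence values, and a time stamp. It is a minimal sketch, not code from the embodiment: the quadrant numbering convention, the assumption that coordinates are already scaled to the -100 to +100 axes, and the field names are illustrative assumptions.

# Minimal sketch of building scale data from a plotted (valence, arousal) point.
from datetime import datetime

def to_scale_data(valence: float, arousal: float, worker_id: str) -> dict:
    """Convert a plotted point into scale data (quadrant numbering is an assumption)."""
    if arousal >= 0:
        quadrant = 1 if valence >= 0 else 2
    else:
        quadrant = 3 if valence < 0 else 4
    return {
        "worker_id": worker_id,
        "quadrant": quadrant,
        "arousal": arousal,
        "valence": valence,
        "timestamp": datetime.now().isoformat(),  # time stamp data added before sending
    }

print(to_scale_data(50.0, 20.0, "WK1"))  # pleasant and activated: first quadrant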

[0041] The measurement device 3 (an example of an activity sensor, in particular an example of an activity sensor used when learning and/or an example of an activity sensor used when estimating) is, for example, incorporated in a wearable terminal, and is mounted on a wrist of the worker as shown in Fig. 3. The measurement device 3 may not be incorporated in a wearable terminal, and may be mountable on clothes, a belt, or a helmet. The measurement device 3 measures information indicating human activity correlated with human emotions and cognition. The information indicating human activity includes vital signs and motion information. To measure the vital signs and the motion information, the measurement device 3 includes various vital sign sensors and motion sensors. Examples of the vital sign sensors and the motion sensors include sensors for measuring heart electrical activity H, skin potential activity G, motion BM, and an activity amount Ex.

[0042] The heart electrical activity sensor measures the heart electrical activity H of the worker in predetermined cycles or at selected timing to obtain the waveform data, and outputs the measurement data. The skin potential activity sensor, which is for example a polygraph, measures the skin potential activity G of the worker in predetermined cycles or at selected timing, and outputs the measurement data. The motion sensor, which is for example a triaxial acceleration sensor, measures the motion BM, and outputs the triaxial acceleration measurement data indicating hand movement of the worker. The sensor for measuring the activity amount Ex, which is an activity sensor, outputs the measurement data indicating the intensity of physical activity (metabolic equivalents, or METs) and the amount of physical activity (exercise). Another example of the vital sign sensors may be an electromyograph for measuring electric charge in the muscle.

[0043] The eye movement monitoring camera 4 (an example of an activity sensor, in particular an example of an activity sensor used when learning and/or an example of an activity sensor used when estimating cognition; also, the EM camera 4 can be optionally and preferably used as an activity sensor for detecting activity information relating to cognition, as above explained) is a small image sensor, and is mounted on, for example, the cap worn by each of the workers WK1 , WK2, and WK3 as shown in Fig. 3, or on the frame of glasses or goggles. The eye movement monitoring camera 4 captures the eye movement (EM) of the worker, and transmits the captured image data to the production management apparatus 1 as measurement data.

[0044] Each of the measurement device 3 and the eye movement monitoring camera 4 adds the time stamp data indicating the measurement date and time to its measurement data. The measurement device 3 and the eye movement monitoring camera 4 each transmit the measurement data to the production management apparatus 1 through the network NW using a wireless interface.

[0045] The wireless interface complies with, for example, low-power wireless data communication standards such as wireless local area networks (WLANs) and Bluetooth (registered trademark). The interface between the emotion input device 2 and the network NW may be a public mobile communication network, or a signal cable such as a universal serial bus (USB) cable.

[0046] The structure of the production management apparatus 1 will now be described. Fig. 4 is a functional block diagram of the apparatus. The production management apparatus 1 is, for example, a personal computer or a server computer, and includes a control unit 11, a storage unit 12, and an interface unit 13.

[0047] The interface unit 13, which allows data communication in accordance with a communication protocol defined by the network NW, receives the measurement data transmitted from the input and measurement devices SS1, SS2, and SS3 through the network NW. The interface unit 13 transmits display data output from the control unit 11 to the portable information terminal TM and the monitors MO1, MO2, and MO3, and also transmits a control command for the production line CS output from the control unit 11 to the part feeder controller DC.

[0048] The interface unit 13 may also include a man-machine interface function. The man-machine interface function receives, for example, data input from an input device, such as a keyboard or a mouse, and outputs display data input from the control unit 11 to a display (not shown) on which the data will appear. The man-machine interface function may additionally capture the voice of the worker or output sound, or may have other functions.

[0049] The storage unit 12 is a storage medium, and is a readable and writable non-volatile memory, such as a hard disk drive (HDD) or a solid state drive (SSD). The storage unit 12 includes a sensing data storage 121 , a learning data storage 122, and a control history storage 123 as storage areas used in the embodiment.

[0050] The sensing data storage 121 stores data transmitted from the input and measurement devices SS1 , SS2, and SS3 in a manner associated with the identifiers of the workers WK1 , WK2, and WK3 that have transmitted the corresponding data. The transmitted and stored data includes scale data indicating the worker's emotion input through the emotion input device 2, measurement data obtained through the sensors of the measurement device 3, and image data input from the eye movement monitoring camera 4. The sensing data storage 121 also stores image data about the results of the operation for a product transmitted from the work monitoring camera CM.

[0051] The learning data storage 122 stores learning data to be used for emotion estimation, learning data to be used for cognition estimation, and learning data to be used for productivity estimation, which are generated by the control unit 11 for each of the workers WK1, WK2, and WK3.

[0052] The control history storage 123 stores information indicating the productivity estimation results generated by the control unit 11 for each of the workers WK1 , WK2, and WK3 and the results of the corresponding control for the production line CS as a control history event.

[0053] The control unit 11 includes a central processing unit (CPU) and a working memory. The control unit 11 includes a sensing data obtaining controller 111, a feature quantity extraction unit 112, a productivity estimation unit 113, a line controller 114, and a learning data generation unit 115 as control functions used in the embodiment. Each of these control functions is implemented by the CPU executing the application programs stored in program memory (not shown).

[0054] The sensing data obtaining controller 111 obtains, through the interface unit 13, data transmitted from each of the input and measurement devices SS1, SS2, and SS3, or scale data output from the emotion input device 2, measurement data output from the measurement device 3, and image data output from the eye movement monitoring camera 4, and stores the obtained data into the sensing data storage 121. The sensing data obtaining controller 111 also obtains, through the interface unit 13, work monitoring image data about the results of the operations performed by the workers WK1, WK2, and WK3 transmitted from the work monitoring camera CM, and stores the obtained data into the sensing data storage 121.

[0055] In a learning mode, the feature quantity extraction unit 112 reads, from the sensing data storage 121 , the scale data, the measurement data, and the image data for each of the workers WK1 , WK2, and WK3 within each of the windows that are arranged at time points chronologically shifted from one another. The feature quantity extraction unit 1 12 extracts the feature quantities (extracted data, or extracted sensing data) from the read scale data, measurement data, and image data, calculates the variation between the feature quantities, and transmits the calculation results to the learning data generation unit 115.

[0056] The windows each have a predetermined unit duration. The windows are defined in a manner shifted from one another by the above unit duration to avoid overlapping between chronologically consecutive windows, or in a manner shifted by a time duration shorter than the above unit duration to allow overlapping between chronologically consecutive windows. The unit duration of each window may be varied by every predetermined value within a predetermined range.
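The window arrangement described above may be illustrated, purely as a sketch under stated assumptions (times in seconds, placeholder duration and shift values), by a generator that yields window boundaries for a given unit duration and shift:

# Minimal sketch of the chronologically shifted windows.
def make_windows(start: float, end: float, unit_duration: float, shift: float):
    """Yield (window_start, window_end) pairs shifted by `shift`.

    shift == unit_duration -> consecutive windows do not overlap
    shift <  unit_duration -> consecutive windows overlap
    """
    t = start
    while t + unit_duration <= end:
        yield (t, t + unit_duration)
        t += shift

# Example: 60 s windows shifted by 30 s over a 5-minute recording (overlapping).
for window in make_windows(0.0, 300.0, 60.0, 30.0):
    print(window)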

[0057] The learning data generation unit 115 performs multiple regression analysis for each of the workers WK1, WK2, and WK3 with correct values (supervisory data) being the variations among the feature quantities in the scale data for arousal and for valence that are extracted by the feature quantity extraction unit 112 and variables being the variations among the feature quantities of the measurement data. This generates regression equations for arousal and for valence representing the relationship between the emotion and the feature quantities of measurement data. The learning data generation unit 115 associates the generated regression equations with window identifiers that indicate the time points of the corresponding windows, and stores the equations into the learning data storage 122 as learning data to be used for emotion estimation.

[0058] The learning data generation unit 115 also performs multiple regression analysis for each of the workers WK1, WK2, and WK3 with correct values being the data indicating whether the operation results extracted from the captured image data obtained through the work monitoring camera CM suggest a correctly performed operation (e.g. whether the images acquired by the camera are according to a predetermined pattern or template, as also further illustrated below), and with variables being the eye movement data and hand movement data. The eye movement data is extracted by the feature quantity extraction unit 112 from the captured image data obtained through the eye movement monitoring camera 4. The hand movement data is extracted by the feature quantity extraction unit 112 from the measurement data obtained through the triaxial acceleration sensor included in the measurement device 3. In this manner, the learning data generation unit 115 generates a regression equation for each of the workers WK1, WK2, and WK3 representing the relationship between the cognition, and the eye movement and hand movement of each worker. The learning data generation unit 115 stores the generated regression equation into the learning data storage 122 as learning data to be used for cognition estimation.

[0059] The learning data generation unit 1 15 further uses the estimated changes in the emotion and the cognition of each of the workers WK1 , WK2, and WK3 as secondary indicators, and generates a relational expression for each worker representing the correlation between each secondary indicator and a change in the productivity of each worker. The learning data generation unit 1 15 stores the generated relational expressions into the learning data storage 122 as learning data to be used for productivity estimation.

[0060] More specifically, skill level information and misoperation frequency information are defined as productivity information. The skill level information is represented by, for example, a difference between a standard operation time and an actual operation time. The misoperation frequency information is represented by, for example, deviations of the actual operation time from an average operation time. The learning data generation unit 1 15 generates relational expressions for estimating the skill level information and the misoperation frequency information based on the estimates of the changes in the emotion and the cognition, and stores the relational expressions into the learning data storage 122.

[0061] In a productivity estimation mode, the feature quantity extraction unit 112 reads, from the sensing data storage 121 , the measurement data and the image data for each of the workers WK1 , WK2, and WK3 within each of the windows that are arranged at time points chronologically shifted from one another. The feature quantity extraction unit 1 12 extracts the changes in the feature quantities from the read measurement data and image data for emotion and cognition estimation, and transmits the changes in the feature quantities to the productivity estimation unit 113.

[0062] For each of the workers WK1 , WK2, and WK3, the productivity estimation unit 1 13 receives the changes in the feature quantities for emotion and cognition estimation extracted by the feature quantity extraction unit 1 12, and reads the emotion learning data and the cognition learning data from the learning data storage 122. The productivity estimation unit 113 uses the changes in the feature quantities and the emotion and the cognition learning data to estimate a change in each of the emotion and the cognition.

[0063] For each of the workers WK1, WK2, and WK3, the productivity estimation unit 113 also reads the relational expressions for productivity estimation from the learning data storage 122. The productivity estimation unit 113 uses the read relational expressions and the estimates of the changes in the emotion and the cognition to estimate the productivity of each of the workers WK1, WK2, and WK3. More specifically, the productivity estimation unit 113 estimates the skill level represented by the difference between the standard operation time and the actual operation time, and the misoperation frequency represented by the deviations of the actual operation time from the average operation time.

[0064] Based on the productivity estimation results obtained for each of the workers WK1, WK2, and WK3 by the productivity estimation unit 113, the line controller 114 determines whether the speed of the production line CS needs to be regulated, and whether the worker WK1, WK2, or WK3 needs to be replaced or to rest. When determining that the speed regulation is needed, the line controller 114 outputs a line speed control instruction to the part feeder controller DC. When determining that the replacement or the rest is needed, the line controller 114 transmits the replacement or rest instruction information to the portable information terminal TM held by the leader WR or to the monitor MO1, MO2, or MO3.

Operation

[0065] The operation of the production management apparatus 1 with the above structure will now be described in association with the operation of the overall system.

(1) Learning Data Generation

Before the process for estimating the productivity of the workers WK1 , WK2, and WK3, the production management apparatus 1 generates, for each of the workers WK1 , WK2, and WK3, the learning data to be used for productivity estimation in the manner described below.

1-1: Generation of Learning Data for Emotion Estimation

[0066] The production management apparatus 1 generates, for each of the workers WK1 , WK2, and WK3, the learning data to be used for emotion estimation in the manner described below. Fig. 5 is a flowchart showing the procedure and its details.

[0067] More specifically, each of the workers WK1 , WK2, and WK3 inputs his or her current emotions with the emotion input device 2 at predetermined time intervals or at selected timing while working.

[0068] As described above, the emotion input device 2 displays the emotion of the worker in the two-dimensional coordinate system for emotional arousal and emotional valence, and detects the coordinates of a position plotted by the worker WK1, WK2, or WK3 on the two-dimensional coordinate system. The two-dimensional coordinate system used in the emotion input device 2 has the four quadrants indicated by 1, 2, 3, and 4 as shown in Fig. 17, and the arousal and valence axes each representing values from -100 to +100 with the intersection point as 0, as shown in Fig. 18. The emotion input device 2 transforms the detected coordinates into the information about the corresponding quadrant and into the corresponding values on both the arousal and valence axes. The emotion input device 2 adds the time stamp data indicating the input date and time and the identifier (worker ID) of the worker WK1, WK2, or WK3 to the resultant information, and transmits the data to the production management apparatus 1 as scale data. As illustrated above, the emotion input device 2 is not limited to a device into which the worker inputs his or her emotion (described here for simplicity only), but also includes devices capable of accurately determining an emotional state on the basis of accurate measurement(s).

[0069] In parallel with this, the measurement device 3 measures the heart electrical activity H, the skin potential activity G, the motion BM, and the activity amount Ex of the worker WK1 , WK2, or WK3 at predetermined time intervals. The measurement data is transmitted to the production management apparatus 1 together with the time stamp data indicating the measurement time and the worker ID of the worker WK1 , WK2, or WK3. Additionally, the eye movement EM of the worker WK1 , WK2, or WK3 is captured by the eye movement monitoring camera 4. The image data is also transmitted to the production management apparatus 1 together with the time stamp data and the identifier (worker ID) of the worker WK1 , WK2, or WK3.

[0070] In step S11, the production management apparatus 1 receives, for each of the workers WK1, WK2, and WK3, the scale data transmitted from the emotion input device 2 through the interface unit 13 as controlled by the sensing data obtaining controller 111, and stores the received scale data into the sensing data storage 121.

[0071] In step S12, the production management apparatus 1 also receives, for each of the workers WK1, WK2, and WK3, the measurement data transmitted from the measurement device 3 and the image data transmitted from the eye movement monitoring camera 4 through the interface unit 13 as controlled by the sensing data obtaining controller 111, and stores the received measurement data and image data into the sensing data storage 121.

[0072] In step S13, when the scale data, the measurement data, and the image data accumulate for a predetermined period (e.g., one day or one week), the production management apparatus 1 generates learning data to be used for emotion estimation, as controlled by the feature quantity extraction unit 112 and the learning data generation unit 115 in the manner described below. Figs. 7 and 8 are flowcharts showing the procedure and its details.

[0073] In step S131, the unit duration of the window Wi (i = 1, 2, 3, ...) is set at an initial value. In step S132, the first window (i = 1) is selected. In step S133, the feature quantity extraction unit 112 reads a plurality of sets of scale data within the first window from the sensing data storage 121. In step S134, the feature quantity extraction unit 112 calculates the variations among the feature quantities for arousal and for valence.

[0074] For example, when scale data K1 and scale data K2 are input within the unit duration of one window as shown in Fig. 18, the variations are calculated as the change from the third to the fourth quadrant, and as the increment of 20 (+20) for arousal and the increment of 50 (+50) for valence. For a change to a diagonally opposite quadrant, for example, for a change from the third to the second quadrant, the variations among the resultant feature quantities may be calculated for arousal and for valence.

[0075] In step S135, the feature quantity extraction unit 1 12 reads the measurement data and image data obtained within the unit duration of the first window, which are the measurement data about the heart electrical activity H, the skin potential activity G, the motion BM, and the activity amount Ex, and the image data about the eye movement EM, from the sensing data storage 121 . In step S136, the feature quantity extraction unit 112 extracts the feature quantities from the measurement data and the image data.

[0076] For example, the heart electrical activity H has the feature quantities that are the heartbeat interval (R-R interval, or RRI), and the high frequency components (HF) and the low frequency components (LF) of the power spectrum of the RRI. The skin potential activity G has the feature quantity that is the galvanic skin response (GSR). The motion BM has feature quantities including the hand movement directions and speed. The hand movement directions and speed are calculated based on, for example, the triaxial acceleration measured by the triaxial acceleration sensor. The activity amount Ex has the feature quantities that are the intensity of physical activity (METs) and the exercise (EX). The exercise (EX) is calculated by multiplying the intensity of physical activity (METs) by the activity duration. The eye movement EM has the feature quantities including the eye movement speed, the gaze coordinates and the gaze duration, the number of blinks, and changes in the pupil size.
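As a hedged illustration of extracting some of the feature quantities listed above, the sketch below computes RRI statistics with LF/HF power, a hand-movement speed proxy from triaxial acceleration, and the exercise amount EX. The sampling rates, frequency bands, and function names are assumptions for illustration, not values specified by the embodiment.

# Minimal sketch of feature extraction from vital sign and motion measurement data.
import numpy as np
from scipy.signal import welch

def heart_features(rri_ms: np.ndarray, fs_rri: float = 4.0) -> dict:
    """Mean RRI plus LF/HF power of the (resampled) RRI series."""
    freqs, psd = welch(rri_ms - rri_ms.mean(), fs=fs_rri, nperseg=min(256, len(rri_ms)))
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return {"rri_mean": float(rri_ms.mean()), "lf": float(lf), "hf": float(hf)}

def motion_features(acc_xyz: np.ndarray) -> dict:
    """Hand-movement speed approximated from triaxial acceleration magnitude."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    return {"speed_proxy": float(magnitude.mean()), "speed_var": float(magnitude.var())}

def activity_features(mets: float, duration_h: float) -> dict:
    """Exercise (EX) is the intensity of physical activity multiplied by its duration."""
    return {"mets": mets, "ex": mets * duration_h}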

[0077] The feature quantity extraction unit 112 calculates the variations among the extracted feature quantities that are the heart electrical activity H, the skin potential activity G, the motion BM, the activity amount Ex, and the eye movement EM within the unit duration of the window.

[0078] In step S137, the learning data generation unit 115 generates learning data for arousal and learning data for valence based on the variations calculated in step S134 among the scale data feature quantities and the variations calculated in step S136 among the measurement data and image data feature quantities.

[0079] For example, the learning data generation unit 115 performs multiple regression analysis using the variations among the scale data feature quantities for arousal and for valence as supervisory data, and the variations among the measurement data and image data feature quantities as independent variables, which are primary indicators. The learning data generation unit 115 then generates a regression equation for each of the workers WK1, WK2, and WK3 for arousal and for valence representing the relationship between the change in the emotion of each worker and the changes in the measurement data and image data feature quantities.

[0080] The regression equations corresponding to the i-th window are as follows:

XAi = f(a1Hi, a2Gi, a3EMi, a4BMi, a5Exi), and

XVi = f(a1Hi, a2Gi, a3EMi, a4BMi, a5Exi)   (1)

where XAi is the estimate of the arousal change, XVi is the estimate of the valence change, a1, a2, a3, a4, and a5 are the weighting coefficients for the feature quantities of the measurement data items Hi, Gi, EMi, BMi, and Exi, and f is the sum of the indicators obtained from the feature quantities of the measurement data items Hi, Gi, EMi, BMi, and Exi, which are primary indicators. The weighting coefficients may be determined by using, for example, the weighted average based on the proportions in the population data obtained in the learning stage. Equations (1) are an example of a relationship between activity and emotion of a person. In one example, first learning data (also discussed above) may include, indicate, or be based on equations (1) above, representing a relationship between activity and emotion.
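A minimal sketch of how the regression equations (1) could be fitted and applied for one window is given below. The feature matrix layout and the placeholder numbers are assumptions, and scikit-learn's ordinary least squares stands in for the multiple regression analysis described above.

# Minimal sketch: fit arousal/valence regression equations for one window.
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows: learning samples for this window; columns: variations of Hi, Gi, EMi, BMi, Exi.
X = np.array([[0.1, 0.3, -0.2, 0.0, 0.1],
              [0.4, -0.1, 0.1, 0.2, 0.0],
              [-0.2, 0.2, 0.3, -0.1, 0.2]])
dA = np.array([20.0, -10.0, 5.0])   # supervisory data: arousal variation from scale data
dV = np.array([50.0, -20.0, 10.0])  # supervisory data: valence variation from scale data

arousal_eq = LinearRegression().fit(X, dA)  # corresponds to the XAi equation
valence_eq = LinearRegression().fit(X, dV)  # corresponds to the XVi equation

# Estimation: apply the fitted equations to new feature variations.
new_features = np.array([[0.2, 0.1, 0.0, 0.1, 0.1]])
print(arousal_eq.predict(new_features), valence_eq.predict(new_features))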

[0081] In step S138, the learning data generation unit 115 stores the generated regression equations for arousal and for valence corresponding to the i-th window into the learning data storage 122. In step S139, the learning data generation unit 115 determines whether all the windows Wi have been selected for generating regression equations. When any window remains unselected, the processing returns to step S132, where the unselected window is selected, and the processing in steps S133 to S139 for generating the learning data for emotion estimation is repeated for the next selected window.

[0082] The feature quantity extraction unit 1 12 and the learning data generation unit 1 15 change the window unit duration by every predetermined value and the chronological shift of the window by every predetermined amount to determine the optimum window unit duration and the optimum shift. Of all the combinations of the unit durations and the shifts, the learning data generation unit 1 15 selects a combination that minimizes the difference between the emotion estimates obtained using the regression equations and the emotion information correct values input through the emotion input device 2. The learning data generation unit 1 15 then sets, for the emotion estimation, the selected window unit duration and the selected shift, as well as the regression equations generated for this combination. The window unit duration and the shift may be fixed.

[0083] An example of the processing of selecting the optimum window will now be described. Fig. 8 is a flowchart showing the procedure and its details.

In step S141, the learning data generation unit 115 calculates the emotion estimates XAi and XVi using the regression equations generated for each window Wi, and calculates the sum of the calculated estimates XAi as XA and the sum of the calculated estimates XVi as XV. In step S142, the learning data generation unit 115 calculates the differences between the sums of the emotion estimates XA and XV, and the sums of the true values X̄A and X̄V of the emotion information input through the emotion input device 2, in the manner described below.

∑(XA - X̄A) and ∑(XV - X̄V)

The calculation results are stored into the learning data storage 122. For simplicity, the flowchart of Fig. 8 only shows ∑(XA - X̄A).

[0084] In step S143, the learning data generation unit 115 determines whether changing the window unit duration and the shift has been completed, or in other words, whether regression equations have been generated for all combinations of the window unit durations and the shifts. When this process is incomplete, the processing advances to step S144, in which the unit duration and the shift of the window Wi are changed by the predetermined amount. The processing then returns to step S132 shown in Fig. 7, and the processing in steps S132 to S143 is performed. In this manner, the processing in steps S132 to S144 is repeated until the regression equations are generated for all the combinations of the window unit durations and the shifts.

[0085] When the regression equations have been generated for all the combinations of the window unit durations and the shifts, the learning data generation unit 115 compares, in step S145, the differences calculated for all the combinations of the window unit durations and the shifts between the sums of the emotion information true values X̄A and X̄V and the sums of the emotion estimates XA and XV, which are ∑(XA - X̄A) and ∑(XV - X̄V). The learning data generation unit 115 then selects the combination of the window unit duration and the shift that minimizes the values of ∑(XA - X̄A) and ∑(XV - X̄V).

[0086] In step S146, the learning data generation unit 115 sets the selected combination of the window unit duration and the shift in the feature quantity extraction unit 112. In step S147, the learning data generation unit 115 stores the regression equations corresponding to the selected combination into the learning data storage 122. The process of generating the learning data to be used for emotion estimation ends.
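The selection of the optimum window in steps S141 to S147 can be sketched as a simple grid search, shown below. The helper callables fit_emotion_equations and estimation_error are hypothetical stand-ins for the processing of Figs. 7 and 8 and are not functions defined by the embodiment.

# Minimal sketch: choose the window unit duration and shift that minimize the
# difference between the emotion estimates and the input correct values.
def select_window(durations, shifts, scale_data, sensing_data,
                  fit_emotion_equations, estimation_error):
    best = None
    for duration in durations:
        for shift in shifts:
            equations = fit_emotion_equations(duration, shift, scale_data, sensing_data)
            # e.g. the sum of |XA - X̄A| + |XV - X̄V| over all windows, or a similar criterion
            error = estimation_error(equations, duration, shift, scale_data, sensing_data)
            if best is None or error < best[0]:
                best = (error, duration, shift, equations)
    return best  # minimizing combination plus its regression equations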

1-2: Generation of Learning Data for Cognition Estimation

[0087] The learning data generation unit 115 generates the learning data to be used for cognition estimation in the manner described below. Fig. 6 is a flowchart showing the procedure and its details.

[0088] More specifically, the motion BM of each of the workers WK1 , WK2, and WK3 indicating hand movement is measured by the triaxial acceleration sensor included in the measurement device 3. The measurement data is then transmitted to the production management apparatus 1 . In parallel with this, the eye movement EM indicating eye movement during operation is captured by the eye movement monitoring camera 4. The captured image data is transmitted to the production management apparatus 1 .

[0089] In step S14, the production management apparatus 1 receives, for each of the workers WK1 , WK2, and WK3, the measurement data about the motion BM indicating the hand movement transmitted from the measurement device 3 and the image data about the eye movement EM transmitted from the eye movement monitoring camera 4 through the interface unit 13 as controlled by the sensing data obtaining controller 1 1 1 , and stores the received measurement data and image data into the sensing data storage 121 . The measurement data about the motion BM and the image data about the eye movement EM may be the corresponding data obtained during the process of generating the learning data to be used for emotion estimation.

[0090] In the cells C1, C2, and C3 of the production line CS, the results of the operations performed by the workers WK1, WK2, and WK3 are captured by the work monitoring camera CM. The captured image data is transmitted to the production management apparatus 1. In step S15, the production management apparatus 1 receives the image data transmitted from the work monitoring camera CM through the interface unit 13 as controlled by the sensing data obtaining controller 111, and stores the received image data into the sensing data storage 121.

[0091] In step S16, the production management apparatus 1 generates the learning data to be used for cognition estimation as controlled by the feature quantity extraction unit 1 12 and the learning data generation unit 1 15 in the manner described below. Fig. 9 is a flowchart showing the procedure and its details.

[0092] In step S161 , the production management apparatus 1 selects an operation time period (e.g., one day or one week). In step S162, the feature quantity extraction unit 1 12 reads the image data indicating the operation results from the sensing data storage 121 . In step S163, the feature quantity extraction unit 1 12 extracts the feature quantities indicating the success or failure in the operation from the read image data indicating the operation results by, for example, pattern recognition (i.e. this is an example of obtaining correct values indicating whether the operation results suggest a correctly performed operation, wherein images taken by a camera are compared to a pattern to establish whether the operation was correctly performed or not). The feature quantities are, for example, represented by the number or incidence of misoperations during the selected time period. The feature quantity extraction unit 1 12 uses the extracted feature quantities as correct values of the cognition.

[0093] In step S164, the feature quantity extraction unit 1 12 reads the measurement data obtained by the triaxial acceleration sensor included in the measurement device 3. In step S165, the feature quantity extraction unit 1 12 extracts the feature quantities indicating the hand movement of the worker from the read measurement data. In parallel with this, the feature quantity extraction unit 1 12 reads the image data obtained through the eye movement monitoring camera 4 in step S164, and extracts the feature quantities indicating the eye movement of the worker (eye movement EM) from the read image data in step S165. The extracted eye movement EM is represented by, for example, the eye movement speed, the gaze coordinates and the gaze duration, the number of blinks, and changes in the pupil size as described above. The feature quantities of the motion BM and the eye movement EM may be the corresponding feature quantities extracted during the process of generating the learning data to be used for emotion estimation.

[0094] In step S166, the learning data generation unit 115 performs multiple regression analysis with correct values (supervisory data) being the feature quantities indicating the success or failure in the operation and variables being the feature quantities indicating the hand movement and the feature quantities indicating the eye movement EM. This generates a regression equation. The learning data generation unit 115 stores the generated regression equation into the learning data storage 122 as learning data to be used for cognition estimation. An example regression equation used for cognition estimation is as follows:

Yi = f(β1EMi, β2BMi)   (2)

where Yi is the estimate of the cognition change, β1 is the weighting coefficient for the feature quantities of the eye movement EMi, β2 is the weighting coefficient for the feature quantities of the motion BMi, and f is the sum of the indicators obtained from the feature quantities of the eye movement EMi and the motion BMi, which are primary indicators. The weighting coefficients may be determined by using, for example, the weighted average based on the proportions in the population data obtained in the learning stage. Equation (2) is an example of a relationship between activity and cognition. In one example, first learning data (also discussed above) may include, indicate, or be based on equation (2) above, indicating in fact a relationship between activity and cognition. In a further example, first learning data (also discussed above) may include, indicate, or be based on equations (1) and equation (2) above.
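As a hedged illustration of equation (2), the sketch below regresses a success/failure indicator (e.g. a misoperation rate extracted from the work monitoring images) on eye-movement and hand-movement feature quantities. The data values are placeholders and the use of scikit-learn is an assumption for illustration.

# Minimal sketch: cognition learning data as a regression on EM and BM features.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: eye-movement feature (EM), hand-movement feature (BM) per time period.
X = np.array([[0.8, 0.3],
              [0.5, 0.6],
              [0.2, 0.9]])
# Supervisory data: e.g. misoperation incidence extracted from the camera images.
misoperation_rate = np.array([0.02, 0.05, 0.10])

cognition_eq = LinearRegression().fit(X, misoperation_rate)  # stands for equation (2)
print(cognition_eq.predict([[0.6, 0.4]]))  # estimate of the cognition change Yi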

[0095] In step S167, the learning data generation unit 115 determines whether all the operation time periods have been selected for generating regression equations. When any operation time period remains unselected, the processing returns to step S161, and the regression equation generation process is repeated. When the regression equations have been generated for all the operation time periods, the learning data generation unit 115 associates, in step S168, the generated regression equations with the information indicating their corresponding operation time periods, and stores the regression equations into the learning data storage 122.

1-3: Generation of Learning Data for Productivity Estimation

[0096] When the learning data for emotion estimation and the learning data for cognition estimation have been generated for each of the workers WK1 , WK2, and WK3, the learning data generation unit 115 generates the learning data to be used for productivity estimation in the manner described below.

[0097] More specifically, the learning data generation unit 115 defines the productivity information by using skill level information and misoperation frequency information. The skill level information is represented by, for example, a difference between a standard operation time and an actual operation time. The misoperation frequency information is represented by deviations of the actual operation time from an average operation time.

[0098] The learning data generation unit 115 uses the emotion estimates and the cognition estimates as secondary indicators, and generates a relational expression for estimating the skill level of the worker based on the difference between the current and past secondary indicators. An example of the relationship is described below.

[0099] A skill level Quality-A is expressed using the formula below.

Quality-A = √{(va1(X2 - x1))²} + √{(va2(Y2 - y1))²}   (3)

In the formula, x1 is the current emotion estimate, y1 is the current cognition estimate, X2 is the average of past emotion estimates, Y2 is the average of past cognition estimates, va1 is the weighting coefficient for emotion, and va2 is the weighting coefficient for cognition.

[0100] The learning data generation unit 115 also uses the emotion estimates and the cognition estimates as secondary indicators, and generates a relational expression for estimating the misoperation frequency of the worker based on the variations among the past and current secondary indicators. An example of the relationship is described below.

[0101] A misoperation frequency Quality-B is expressed using the formula below.

Quality-B = vb1√{((X1 - x1) / ∑(X - xi))²} + vb2√{((Y1 - y1) / ∑(Y - yi))²}   (4)

In the formula, x1 is the current emotion estimate, y1 is the current cognition estimate, X1 is the average of historical emotion estimates, Y1 is the average of historical cognition estimates, vb1 is the weighting coefficient for emotion, and vb2 is the weighting coefficient for cognition.

[0102] The weighting coefficients va1, va2, vb1, and vb2 may be determined for each of the workers WK1, WK2, and WK3 by using, for example, multiple regression analysis or questionnaires to the workers WK1, WK2, and WK3. In one example, each or both of equations (3) and (4) indicate a relationship between performance, and emotion and cognition. In a further example, second learning data (also discussed above) may include, indicate, or be based on equation (3) and/or (4) above, indicating in fact a relationship between performance and activity.
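A minimal sketch of evaluating the relational expressions (3) and (4) is given below. The interpretation of the sums ∑(X - xi) and ∑(Y - yi) as running over the stored history, and all numeric values, are assumptions for illustration; math.sqrt((...)**2) is kept to mirror the formulas literally.

# Minimal sketch: skill level (3) and misoperation frequency (4) from the estimates.
import math

def quality_a(x1, y1, X2, Y2, va1, va2):
    """Skill level estimate from current (x1, y1) and past averages (X2, Y2)."""
    return math.sqrt((va1 * (X2 - x1)) ** 2) + math.sqrt((va2 * (Y2 - y1)) ** 2)

def quality_b(x1, y1, X1, Y1, emotion_history, cognition_history, vb1, vb2):
    """Misoperation frequency estimate from variations against the history."""
    sx = sum(X1 - xi for xi in emotion_history)  # assumed reading of ∑(X - xi)
    sy = sum(Y1 - yi for yi in cognition_history)  # assumed reading of ∑(Y - yi)
    return (vb1 * math.sqrt(((X1 - x1) / sx) ** 2)
            + vb2 * math.sqrt(((Y1 - y1) / sy) ** 2))

print(quality_a(x1=0.2, y1=0.1, X2=0.5, Y2=0.4, va1=1.0, va2=1.2))
print(quality_b(0.2, 0.1, 0.5, 0.4, [0.3, 0.6, 0.7], [0.2, 0.5, 0.6], 1.0, 1.2))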

(2) Productivity Estimation

[0103] After the learning data for productivity estimation is generated, the production management apparatus 1 uses the learning data to estimate the productivity of the workers WK1, WK2, and WK3 during operation in the manner described below. Fig. 11 is a flowchart showing the estimation process and its details.

2-1: Collecting Worker's Sensing Data

[0104] When detecting an input operation start command in step S21 , the production management apparatus 1 specifies an initial part feed rate in the part feeder controller DC in accordance with the preliminarily input information specifying the production amount (e.g., 100 products/day) in step S22. The part feeder controller DC then instructs the part feeder DS to feed the sets of parts for the products to be manufactured to the production line CS at the specified rate. In response to the fed sets of parts, the workers WK1 , WK2, and WK3 in their assigned cells start their operations for assembling products.

[0105] During the operation, the measurement device 3 in each of the input and measurement devices SS1 , SS2, and SS3 of the workers WK1 , WK2, and WK3 measures the heart electrical activity H, the skin potential activity G, the motion BM, and the activity amount Ex of the worker at predetermined time intervals or at selected timing. The measurement data is transmitted to the production management apparatus 1 . The eye movement EM of each of the workers WK1 , WK2, and WK3 is also captured by the eye movement monitoring camera 4. The captured image data is transmitted to the production management apparatus 1 .

[0106] In step S23, the production management apparatus 1 receives the measurement data and the image data transmitted from the input and measurement devices SS1, SS2, and SS3 through the interface unit 13 as controlled by the sensing data obtaining controller 111. The production management apparatus 1 stores the received data into the sensing data storage 121.

2-2: Estimating Worker's Emotion

[0107] When determining that a predetermined time (e.g., one hour) has passed in step S24, the production management apparatus 1 selects one of the workers WK1, WK2, and WK3 in step S25. The feature quantity extraction unit 112 then reads the measurement data and the image data associated with the selected worker from the sensing data storage 121, and extracts the feature quantities from both the measurement data and the image data.

[0108] For example, the feature quantity extraction unit 1 12 extracts the feature quantities of the heart electrical activity Hi, the skin potential activity Gi, the motion BMi, the activity amount Exi, and the eye movement EMi, which are correlated with emotional changes, from the measurement data for the heart electrical activity H, the skin potential activity G, the motion BM, and the activity amount Ex and the image data for the eye movement EM. In parallel with this, the feature quantity extraction unit 1 12 extracts the feature quantities correlated with cognition changes from the motion BM measurement data and the eye movement EM image data. The extracted feature quantities are the same as those extracted in the learning data generation process described above, and will not be described in detail.

[0109] In step S26, the production management apparatus 1 estimates emotional changes in the worker as controlled by the productivity estimation unit 113. Fig. 12 is a flowchart showing the procedure and its details.

[01 10] In step S261 , the productivity estimation unit 113 receives the feature quantities to be used for emotion estimation from the feature quantity extraction unit 1 12. In step S262, the productivity estimation unit 1 13 reads, from the learning data storage 122, the regression equations (1 ) for emotion estimation for arousal and for valence corresponding to the predetermined time period described above. In step S263, the productivity estimation unit 1 13 calculates the estimates of emotional changes XAi and XVi for the worker in the predetermined time period described above using the feature quantities to be used for the emotion estimation and the regression equations for arousal and for valence.

2-3: Estimating Worker's Cognition

[0111] The feature quantity extraction unit 112 included in the production management apparatus 1 extracts the feature quantities correlated with cognition from each of the motion BMi measurement data and the eye movement EMi image data obtained during the predetermined time described above.

[0112] In step S27, the production management apparatus 1 estimates the cognition of the worker as controlled by the productivity estimation unit 113. Fig. 13 is a flowchart showing the procedure and its details.

[01 13] In step S271 , the productivity estimation unit 1 13 receives, from the feature quantity extraction unit 1 12, the feature quantities of the eye movement EMi and the motion BMi to be used for cognition estimation corresponding to the predetermined time period described above. In step S272, the productivity estimation unit 113 reads, from the learning data storage 122, the regression equation (2) for cognition estimation corresponding to the predetermined time period described above. In step S273, the productivity estimation unit 1 13 calculates the cognition estimate Yi for the worker using the feature quantities of the eye movement EMi and the motion BMi to be used for the cognition estimation and the regression equation for the cognition estimation.

2-4: Productivity Estimation

[0114] In step S28, the production management apparatus 1 estimates the productivity of the worker in the manner described below using the calculated emotional change estimates and the cognition estimates, and the relational expressions (3) and (4) for productivity estimation stored in the learning data storage 122, as controlled by the productivity estimation unit 113.

[0115] In step S281 shown in Fig. 14, the production management apparatus 1 first calculates the difference between the standard operation time and the actual operation time using the relational expression (3), and outputs the calculated difference in operation time as information indicating the skill level Quality-A of the worker. In step S282, the production management apparatus 1 calculates the deviations of the actual operation time from the average operation time using the relational expression (4), and outputs the calculated values as information indicating the misoperation frequency Quality-B of the worker. The skill level Quality-A and the misoperation frequency Quality-B are estimates of the productivity of the worker.

(3) Controlling Production Line Based on Worker Productivity Estimates

[0116] When obtaining the productivity estimates, the production management apparatus 1 controls the production line CS based on the worker productivity estimates in step S29, as controlled by the line controller 114 in the manner described below by way of non-limiting example.

[0117] In step S291 shown in Fig. 14, the line controller 114 compares the calculated difference in operation time (skill level Quality-A) with a predetermined first threshold. When the comparison shows that the difference in operation time (skill level Quality-A) is larger than the first threshold, the line controller 114 transmits a speed change command to the part feeder controller DC to reduce the rate of feeding parts to the production line CS in step S292 (or a command to change the speed of functioning of any component of the line, such as a tooling machine, etc.). This lowers the rate of feeding parts from the part feeder DS to the line, as controlled by the part feeder controller DC. In this manner, the speed of the production line CS is adjusted in accordance with the productivity of the workers. This adjustment allows the workers to have enough time, and prevents quality deterioration.

[0118] When the comparison in step S291 shows that the difference in operation time is equal to or smaller than the first threshold, the line controller 114 compares the calculated deviations in operation time (misoperation frequency Quality-B) with a predetermined second threshold in step S293. When the comparison shows that the deviations in operation time (misoperation frequency Quality-B) are larger than the second threshold, the line controller 114 determines that the productivity of the worker is unstable. The line controller 114 then generates message information about a misoperation alert in step S294, and transmits the message information to the portable information terminal TM held by the leader WR (the message information can also be provided directly to the worker(s), e.g. to a terminal of the worker, made visible to the worker on the line, or sent to an audio reproducing apparatus for reproducing such message information or for providing other types of stimuli). The leader WR receiving the message information checks the physical and mental health of the worker and, for example, replaces the worker or instructs the worker to rest.
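The threshold logic of steps S291 to S294 can be sketched as follows. The threshold values, the part_feeder_controller and leader_terminal objects, and their method names are hypothetical placeholders, not interfaces defined by the embodiment.

# Minimal sketch of the line control decision based on the productivity estimates.
def control_line(quality_a: float, quality_b: float,
                 first_threshold: float, second_threshold: float,
                 part_feeder_controller, leader_terminal) -> str:
    if quality_a > first_threshold:
        # Difference between standard and actual operation time is too large:
        # lower the part feed rate so the worker has enough time.
        part_feeder_controller.reduce_feed_rate()
        return "speed reduced"
    if quality_b > second_threshold:
        # Deviations of the actual operation time are too large: productivity is
        # unstable, so notify the leader to replace or rest the worker.
        leader_terminal.send("Misoperation alert: check the worker's condition")
        return "alert sent"
    return "no action"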

[01 19] The production management apparatus 1 evaluates the estimates of the productivity of each worker using a plurality of thresholds, and controls the operation of the production line based on the evaluation results. In other words, the production management apparatus 1 stratifies (or standardizes) the operation of the production line in accordance with the estimates of the productivity of the workers. This allows easier control of the operation of the production line in accordance with a decrease in the productivity of the workers while maintaining the intended productivity.

[0120] When the production management apparatus 1 completes the processing from the emotion estimation to the line control for one worker, the production management apparatus 1 determines, in step S30, whether all the workers have been selected for the processing. When any worker remains unselected, the processing returns to step S25, in which the unselected worker is selected, and the processing in steps S25 to S29 is repeated for the next selected worker.

[0121] When the processing has been completed for all the workers WK1 , WK2, and WK3, the production management apparatus 1 determines whether it has reached the closing time for the production line CS in step S31 . At the closing time, the production management apparatus 1 stops the production line CS in step S32.

[0122] When the line control is performed, the line controller 114 generates information indicating the date and time and details of the line control, and stores the information associated with the worker ID into the control history storage 123. The information indicating the line control history stored in the control history storage 123 is, for example, used for the production management including a change in the number of products to be manufactured.

Thus, a unit (e.g. a controller) within or coupled to the manufacturing line controls a functioning of at least one component (any of the machines or apparatuses included in the line, e.g. a part feeder, a part feeder controller, a tooling machine, etc.) of the manufacturing line based on an estimation result of performance of the worker in the operation obtained by the second estimation unit.

Advantageous Effects of Embodiment

[0123] As described in detail in the above embodiment, vital sign measurement data and motion measurement data obtained from the workers WK1 , WK2, and WK3 during operation are used as primary indicators. The primary indicators and the learning data generated separately are used to estimate the emotion and the cognition of the worker. The estimated emotion and cognition are used as secondary indicators. The secondary indicators and the relational expressions generated separately are used to estimate the productivity of the worker.

[0124] The productivity of a worker can thus be estimated based on vital sign measurement data and motion measurement data obtained from the worker during operation. This enables production management in accordance with human performance corresponding to emotion and cognition, which has not been used before.

[0125] The measurement data about the heart electrical activity H, the skin potential activity G, the eye movement EM, the motion BM, and the activity amount Ex is used as vital sign measurement data and motion measurement data from the workers WK1 , WK2, and WK3. Thus, data used as primary indicators is obtained from each worker in a noninvasive manner.

[0126] Emotional changes are expressed as arousal and valence variations and the quadrants of the two-dimensional arousal-valence coordinate system. This allows the emotional changes to be estimated easily and accurately.

[0127] The learning data for cognition estimation is generated with correct values (supervisory data) being the feature quantities indicating the success or failure in the operation extracted from the image data obtained by the work monitoring camera CM, and variables being the feature quantities indicating hand movement and the feature quantities indicating eye movement EM. This allows the worker's cognition about the production operation to be estimated more accurately.

[0128] In one example, a worker is currently connecting parts. The image data about the operation results is as shown in Fig. 10. In this example, the operation ends with a terminal 53 and a terminal 63 unsuccessfully connected using a lead 73, and a terminal 58 and a terminal 68 unconnected. In the present embodiment, supervisory data indicating the worker's cognition is the feature quantities indicating the success or failure in the operation, and variables are primary indicators associated with the worker's cognition obtained in parallel within the same time period, or in other words, the feature quantities indicating the hand movement of the worker and the feature quantities indicating the eye movement (EM). The supervisory data and the variables are used to generate a relational expression for estimating the cognition. With the measurement data including the feature quantities indicating hand movement and the feature quantities indicating eye movement, the estimation of the worker's cognition using the relational expressions enables the estimation of the possibility of misoperation by the worker as shown in Fig. 10.

[0129] The information indicating the productivity of the worker is defined by the skill level represented by a difference between a standard operation time and an actual operation time, and the misoperation frequency represented by deviations of the actual operation time from an average operation time. The worker productivity is estimated with learning data prepared for both the skill level and the misoperation frequency. This allows the productivity of the worker to be accurately estimated in accordance with the assessment indicator at a production site.
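A minimal sketch of these two indicators follows; the time values, the helper names, and the use of the standard deviation as the deviation measure are assumptions made for the example, not definitions taken from the embodiment.

# Minimal sketch (hypothetical helper names): the two productivity indicators
# described above, computed from recorded operation times in seconds.
from statistics import mean, pstdev

def skill_level(standard_time: float, actual_times: list[float]) -> float:
    # Skill level as the difference between the standard operation time and
    # the worker's average actual operation time; larger is better.
    return standard_time - mean(actual_times)

def misoperation_frequency(actual_times: list[float]) -> float:
    # Misoperation frequency approximated here by the deviation of the actual
    # operation times from their average (population standard deviation).
    return pstdev(actual_times)

times = [52.0, 55.0, 61.0, 54.0]        # actual operation times
print(skill_level(60.0, times))          # positive: faster than the standard
print(misoperation_frequency(times))     # larger: less consistent operation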

Other Embodiments

[0130] The relationship between human emotions and vital signs, or the relationship between human emotions and motion information, may change depending on the date, the day of the week, the season, environmental changes, and other factors. The learning data used for emotion estimation may thus be updated regularly or as appropriate. When the difference calculated between a correct value of an emotion and an estimate of the emotion obtained by the productivity estimation unit 113 exceeds a predetermined allowable range, the learning data stored in the learning data storage 122 may be updated. In this case, the correct value can be estimated based on the trends in the emotion estimates. In another embodiment, the correct value of the emotion may be input regularly by the subject through the emotion input device 2, and the input value may be used.
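For illustration, the update condition described above might be checked as in the following sketch; the threshold value and the trend-based estimate of the correct value are assumptions of the example, not values given by the embodiment.

# Sketch only (threshold and names are assumptions): deciding whether the
# emotion-estimation learning data should be updated, based on the difference
# between a correct value and the corresponding estimate.
def should_update_learning_data(correct_value: float,
                                estimate: float,
                                allowed_range: float = 0.2) -> bool:
    # True when the estimation error leaves the predetermined allowable range,
    # which in the embodiment triggers regeneration of the learning data.
    return abs(correct_value - estimate) > allowed_range

# The correct value may come from the emotion input device 2, or be inferred
# from the trend of recent estimates (here a simple moving average).
recent_estimates = [0.32, 0.35, 0.30, 0.33]
trend_based_correct_value = sum(recent_estimates) / len(recent_estimates)
print(should_update_learning_data(trend_based_correct_value, 0.62))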

[0131] Similarly, when the difference calculated between the correct value of cognition and the estimate of the cognition obtained by the productivity estimation unit 113 exceeds a predetermined allowable range, the learning data stored in the learning data storage 122 may be updated. In this case, the correct value can be estimated based on the trends in the cognition estimates.

[0132] The relational expression representing the relationship between the productivity and the emotion and cognition may also be modified based on the productivity estimate. In this case as well, the correct value can be estimated based on the trends in the cognition estimates.

[0133] In the embodiment described above, the information indicating the emotion of the worker is input into the production management apparatus 1 through the emotion input device 2, which is a smartphone or a tablet terminal. The information may be input in any other manner. For example, the worker may write his or her emotion information on print media such as a questionnaire form, and may use a scanner to read the emotion information and input the information into the production management apparatus 1.

[0134] Further, a camera may be used to detect the facial expression of the worker. The information about the detected facial expression may then be input into the production management apparatus 1 as emotion information. A microphone may be used to detect the worker's voice. The detection information may then be input into the production management apparatus 1 as emotion information. Emotion information may be collected from a large number of unspecified individuals by using questionnaires, and the average or other representative values of the collected information may be used as population data to correct the emotion information from an individual. Any other technique may be used to input the information indicating human emotions into the production management apparatus 1 .

[0135] The above embodiment describes the two-dimensional arousal-valence system for expressing the information about the worker's emotion. Another method may be used to express the worker's emotion information.

[0136] In the embodiment described above, the measurement data items, namely the heart electrical activity H, the skin potential activity G, the eye movement EM, the motion BM, and the activity amount Ex, are input into the production management apparatus 1 as information indicating the activity of the worker, and all these items are used to estimate the emotions. However, at least one item of the measurement data may be used to estimate the emotions. For example, the measurement data about the heart electrical activity H, which among the vital signs contributes most to emotion, may be used alone to estimate the emotions. Vital signs other than the items used in the embodiment may also be used.

[0137] Additionally, measurement data other than the hand movement and the eye movement may also be used as a primary indicator to estimate the cognition.

[0138] In addition, the number of cells in the production line CS and the types of products assembled in each cell may also be modified variously without departing from the scope and spirit of the invention.

[0139] The examples described above include a production line involving an operation performed by a worker. In this production line, the performance of the worker in the operation is estimated based on the worker's emotion and cognition. However, the present invention is not limited to such examples, but is applicable to any system involving a human operation.

[0140] For example, the present invention is applicable to cars, ships, airplanes, or other vehicles operated by a driver or a navigator corresponding to a worker, and the performance of the operation is estimated based on the emotion and the cognition of the driver or navigator. In this case, the cognition of the driver or navigator can be measured based on the degree to which he or she recognizes an obstacle visible in the outside view. The present invention is also applicable to construction machinery, power generation equipment, electrical transformers, or medical devices operated by an operator corresponding to a worker, as well as to control systems for various plants, airplanes, or trains. By way of example, further embodiments are described below.

EMBODIMENT 2

In embodiment 1, a work management apparatus has been presented which is suitable for managing an operation performed by a worker in a system including at least partially a manufacturing line, and/or for estimating the performance of a worker in executing an operation within a manufacturing line, and/or for controlling at least a part of a manufacturing line. The present embodiment 2 is directed to a drive assisting apparatus for providing vehicle driving assistance, wherein driving assistance is provided, while the driver is driving the vehicle, based on the estimation result of the performance of the driver. The estimation result of the performance of the driver can be obtained as described in embodiment 1, and for instance as represented in figure 4 (wherein, in the case of the present embodiment, the productivity estimation unit 113 is substituted by a driving performance estimation unit 113 and the line controller 114 by a controller for providing driving assistance; the same sensors or devices SS1 to SS3 can be used, when conveniently installed in view of the driver position etc.). As an example, in the present embodiment, correct values used for cognition estimation may be represented by how correctly the driving task is executed, which can be obtained e.g. by measuring certain driving parameters such as how correctly the vehicle follows certain predetermined routes (e.g. comparing how closely the actual driving route corresponds to an ideal route obtained from a map), how smooth the control of the vehicle is (e.g. whether or how often a sudden change of direction occurs), the degree to which the driver recognizes an obstacle, etc. Suitable sensors could be provided (as represented by CM in figure 4), including for instance positioning measurement systems, cameras for recognizing driving paths or patterns, vehicle speed sensors, vehicle inertial systems for obtaining information on current driving parameters, etc. The performance values of a driver (in the sense of performance in executing driving, to be used for obtaining learning data by way of regression analysis) can e.g. be obtained by comparing the distance covered over a certain period against the distance expected for that period, or by checking whether, in travelling between two points, a certain route has been followed compared to predetermined available routes, etc. The controller providing assistance may provide driving assistance to either one or both of the driver and the vehicle. For example, driving assistance may include active control of the vehicle by an assisting unit during driving: in one example, if the estimated performance is found to correspond to a given performance value, the control unit (or any other unit suitable for automatically or semi-automatically driving the vehicle) may act on components of the vehicle such as the brakes or accelerator to adapt the speed of the vehicle to the current performance of the driver; in case the estimated performance is determined to have another predetermined value, indicating for instance a performance associated with a potentially hazardous situation, the control unit may act on the brakes and/or on the steering wheel to take over control (e.g. an automatic pilot) or to stop the vehicle. Preferably, the driving assistance may include providing the driver of the vehicle with at least a feedback during driving depending on the estimated performance level. For instance, the feedback may include a message to the driver suggesting to make a stop and take a rest.
Another example of driving assistance (or driving assistance feedback) is represented by a sound, melody, music, or audio message in general; in this way, the driver may be alerted so that a hazardous situation is avoided, and alerted in a way that is appropriate to the estimated performance level. Other types of driving assistance feedback are of course suitable. The controller providing driving assistance may be installed in the vehicle. However, the calculation or determination of the driving assistance based on the estimation result may be performed either within the vehicle or outside the vehicle; in the latter case, the calculated driving assistance is communicated to the control unit within the vehicle, which provides the externally calculated driving assistance to other parts of the vehicle and/or to the driver. Reference is also made to embodiment 1 (and corresponding figures), illustrating details that are equally and optionally applicable to the present embodiment.
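By way of a non-limiting sketch, the selection of a driving-assistance intervention from the estimated performance could look as follows; the thresholds, the performance scale, and the assistance categories are assumptions chosen for this illustration rather than values defined by the embodiment.

# Illustrative sketch (thresholds, categories and scale are assumptions):
# choosing a driving-assistance intervention from the estimated driver
# performance, with lower performance triggering stronger actions.
from enum import Enum

class Assistance(Enum):
    NONE = "no intervention"
    FEEDBACK = "suggest a stop / play an alert sound"
    LIMIT_SPEED = "act on brakes or accelerator to adapt speed"
    TAKE_OVER = "engage automatic pilot or stop the vehicle"

def select_assistance(estimated_performance: float) -> Assistance:
    # Map the driver-performance estimate (0..1, hypothetical scale)
    # to an assistance level.
    if estimated_performance >= 0.8:
        return Assistance.NONE
    if estimated_performance >= 0.6:
        return Assistance.FEEDBACK
    if estimated_performance >= 0.4:
        return Assistance.LIMIT_SPEED
    return Assistance.TAKE_OVER  # potentially hazardous situation

print(select_assistance(0.35).value)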

EMBODIMENT 3

The present embodiment 3 is directed to an apparatus for healthcare support of a subject, wherein the device is preferably coupled to the subject. By coupled to the subject it is meant that the device is within range of interaction with the subject, e.g. capable of making measurements on the subject, and/or providing feedback or stimuli to the subject, and/or receiving inputs (e.g. commands) from and providing output to the subject. The healthcare support apparatus includes a controller for providing the subject with a healthcare support feedback based on an estimated performance of the subject. The estimated performance refers to the performance of the person in executing an operation. Preferably, the operation includes an operation of a device by the person; the operation may however also be a physical or intellectual exercise of the subject. Thus, the operation refers to an action executed by the subject. The estimated performance may be an estimation result of the performance (of the subject when executing the operation), the result being obtained by a performance estimation unit, represented for instance by the second estimation unit illustrated above. More particularly, the estimation result of the performance of the subject can be obtained as described in embodiment 1, and for instance as represented in figure 4 (wherein, in the case of the present embodiment, the productivity estimation unit 113 is substituted by a performance estimation unit 113 and the line controller 114 by a controller for providing healthcare support feedback; the same sensors or devices SS1 to SS3 can be used, when conveniently installed in view of the subject, and preferably having regard to the one or more types of operation/action executed by the subject). As an example, in the present embodiment, correct values for cognition estimation may be obtained by measuring how one or more tasks (i.e. operations or actions) are executed by the subject: for instance, how straight and balanced the person's body position is when walking, running or sitting (e.g. compared with predetermined patterns); how smoothly certain movements are made compared with predetermined patterns; etc. This can be obtained for instance by comparing an image (obtained e.g. via the camera CM) with a predetermined pattern, or by making other suitable measurements and comparing them with predetermined values and/or patterns of values. The performance values of the person (to be used for obtaining learning data by way of regression analysis) can e.g. be obtained by measuring the efficiency and/or quality in completing a certain task (i.e. the operation or action explained above) or a number of tasks, for instance by measuring the distance covered on foot against an expected distance, or by measuring the time taken to accomplish a task against a predetermined time (e.g. completing a housecleaning or hobby-related operation, or the number of such operations performed in an hour or day), etc.
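As a hedged illustration of such performance values, the following sketch computes two simple ratios of measured results against expected values; the function names, units and scales are assumptions made for the example and are not prescribed by the embodiment.

# Minimal sketch (names and scales are assumptions): obtaining performance
# values of the subject from measured task results, e.g. for use as values
# when generating learning data by regression analysis.
def walking_performance(distance_covered_m: float, expected_distance_m: float) -> float:
    # Ratio of the distance covered on foot to the expected distance.
    return distance_covered_m / expected_distance_m

def task_time_performance(actual_time_s: float, predetermined_time_s: float) -> float:
    # Ratio of the predetermined time to the time actually taken, so that
    # finishing faster than expected yields a value above 1.0.
    return predetermined_time_s / actual_time_s

print(walking_performance(3200.0, 4000.0))    # 0.8: covered 80% of expected distance
print(task_time_performance(1500.0, 1800.0))  # 1.2: task finished ahead of time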

The healthcare support feedback (also feedback, in short) may be represented for instance by one or more messages (in the form of text, audio, and/or video, etc.) suggesting certain activities to undertake or a lifestyle to follow, or by one or more stimulus signals applied to the subject (for instance, an audio/video signal inducing stimulation in the subject, and/or an electric signal inducing stimulation in the subject, etc.). Other types of feedback are of course suitable. Since the performance can be accurately estimated, healthcare feedback can be provided precisely when it is really needed (e.g. in correspondence with a predetermined performance value, which can herein be accurately estimated), or chosen depending on the estimated performance; for instance, if the performance decreases, a particular feedback can be chosen to prompt an improvement of the health condition; when the performance increases, another type of feedback may be given to maintain the same level of performance and to prompt maintenance of good health conditions also in the long term. In this way, it is possible to improve the health condition of a person, or to maintain a (e.g. good) health condition. Also, even when the controller providing feedback is omitted, it is possible to accurately estimate the performance of the person, which is an index of the health condition of the person. Thus, the device also allows better and more accurate monitoring of the health condition of a person. Reference is also made to embodiment 1 (and corresponding figures), illustrating details that are equally and optionally applicable to the present embodiment.

OTHER EMBODIMENTS

In general, a performance estimation apparatus is provided comprising a first estimation unit capable of estimating emotion and cognition on the basis of the discussed first learning data and of information obtained (e.g. measured) about the activity of a person while the person executes a certain action or task. The apparatus also includes a second estimation unit for estimating performance of the subject (in executing the action) based on second learning data and on the estimated cognition and emotion. In this way, the performance estimation apparatus can accurately monitor the performance of the subject when executing a certain action or task. Embodiments 1 to 3 provide examples of actions/tasks, including an operation on a system such as a production line, operation of a vehicle, or performing a task when using a healthcare supporting device, though any other type of task is included in the present embodiment, in particular e.g. when the subject interacts with any device.
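A minimal sketch of this two-stage estimation is given below; the linear stand-ins for the learned relational expressions and the feature names are placeholders chosen for illustration only and do not represent the actual learning data of the embodiments.

# Sketch of the two-stage estimation described above (all models and feature
# names are placeholders): a first stage maps activity information to emotion
# and cognition, and a second stage maps emotion and cognition to performance.
from typing import Callable, Dict

FirstStage = Callable[[Dict[str, float]], Dict[str, float]]
SecondStage = Callable[[Dict[str, float]], float]

def estimate_performance(activity: Dict[str, float],
                         first_stage: FirstStage,
                         second_stage: SecondStage) -> float:
    secondary_indicators = first_stage(activity)   # emotion and cognition
    return second_stage(secondary_indicators)      # performance estimate

# Hypothetical linear stand-ins for the learned relational expressions.
first = lambda a: {"arousal": 0.5 * a["heart_rate_variability"],
                   "valence": 0.3 * a["skin_potential"],
                   "cognition": 0.7 * a["eye_movement"]}
second = lambda s: 0.4 * s["cognition"] + 0.3 * s["valence"] - 0.2 * abs(s["arousal"])

activity = {"heart_rate_variability": 0.6, "skin_potential": 0.4, "eye_movement": 0.9}
print(estimate_performance(activity, first, second))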

Optionally, a controller can be included in the performance estimation apparatus, wherein the controller provides an intervention to the person and/or to the device with which the person is interacting (or to the system including the device). The intervention is based on the performance estimation result. The intervention includes an intervention acting on the user (such as providing stimuli or messages/guidance), and/or an intervention acting on the device with which the user is interacting (or the system including the device). In this way, the interaction between person and device can be improved, since the intervention can be provided at the correct time depending on the accurately estimated performance, or the type of intervention can be accurately chosen depending on the accurately estimated performance. Thus, the interaction between machine and person can be improved in an objective and repeatable way, and autonomously by the apparatus.

[0141] The present invention is not limited to the embodiment described above, but may be embodied using the components modified without departing from the scope and spirit of the invention in its implementation. An appropriate combination of the components described in the embodiment may constitute various aspects of the invention. For example, some of the components described in the embodiment may be eliminated. Further, components from different embodiments may be combined as appropriate. Also, even if certain features have been described only with reference to a device, the same features can also be described in terms of a method (e.g. according to which the same device operates) or of a program (for programming a computer so as to function like the described apparatus). Similarly, even if a certain feature is described only with reference to a method, the same feature can also be described in terms of a unit or of a device means (or of computer program instructions) configured to perform the same described method feature.

Further, in the above and other methods described herein (see also below), steps are defined such as obtaining, estimating, controlling, providing driving assistance, providing healthcare support feedback, etc. It is however noted that such steps (or any combination of them) may also be caused or induced by a remote device, like for instance a client computer or a portable terminal, acting on another device (like for instance a server, localized or distributed) that correspondingly performs the actual step. Thus, the mentioned steps are to be understood also as causing to obtain, causing to estimate, causing to control, causing to provide driving assistance, causing to provide healthcare support feedback, etc., such that any combination of them can be caused or induced by a device remote from the device actually performing the respective step.

[0142] The above embodiments may be partially or entirely expressed in, but not limited to, the following forms.

Appendix 1 :

A work management apparatus for managing an operation performed by a worker in a system involving the operation performed by the worker, the apparatus comprising at least one hardware processor and a memory,

the memory including

a first storage configured to store first learning data indicating a relationship between an activity and emotion of the worker and a relationship between the activity and cognition of the worker, and

a second storage configured to store second learning data indicating a relationship between the emotion of the worker, the cognition of the worker, and performance of the worker in the operation; and

the at least one hardware processor being configured to

obtain information indicating the activity of the worker during the operation,

estimate the emotion and the cognition of the worker during the operation based on the obtained information indicating the activity used as a primary indicator, and the first learning data indicating the relationship between the activity and the emotion of the worker and the relationship between the activity and the cognition of the worker, and

estimate the performance of the worker in the operation based on the estimated emotion and cognition each used as a secondary indicator, and the second learning data indicating the relationship between the emotion, the cognition, and the performance of the worker in the operation.

[0143]

Appendix 2:

A work management method that is implemented by an apparatus including at least one hardware processor and a memory, the method comprising:

the at least one hardware processor obtaining information indicating an activity of the worker during the operation;

the at least one hardware processor estimating emotion and cognition of the worker during the operation based on the obtained information indicating the activity used as a primary indicator, and first learning data indicating a relationship between the activity and the emotion of the worker and a relationship between the activity and the cognition of the worker; and

the at least one hardware processor estimating performance of the worker in the operation based on the estimated emotion and cognition each used as a secondary indicator, and second learning data indicating a relationship between the performance of the worker in the operation, the emotion of the worker, and the cognition of the worker.

REFERENCE SIGNS LIST

[0144] CS production line

B1, B2, B3 product

C1, C2, C3 cell

WR leader

WK1, WK2, WK3 worker

M01, M02, M03 monitor

TM portable information terminal

DC part feeder controller

DS part feeder

RB cooperative robot

CM work monitoring camera

NW network

SS1, SS2, SS3 input and measurement device

1 production management apparatus

2 emotion input device

3 measurement device

4 eye movement monitoring camera

11 control unit

111 sensing data obtaining controller

112 feature quantity extraction unit

113 productivity estimation unit

114 line controller

115 learning data generation unit

12 storage unit

121 sensing data storage

122 learning data storage

123 control history storage

13 interface unit




 