


Title:
METHOD FOR CONTROLLING THE ACTIVITIES OF A ROBOT
Document Type and Number:
WIPO Patent Application WO/2018/233856
Kind Code:
A1
Abstract:
The invention relates to a method for controlling the activities of a robot, whereby the robot comprises a situation manager, which is divided into a situation network for determining needs and an action network for determining the actions for satisfying those needs, a planner for prioritizing actions proposed by the situation manager and, optionally, by an input device, and a sensor for detecting an event. Both the situation network and the action network are based on probability models. Subdividing the situation manager into a situation network and an action network has the effect that the calculation of the proper action for a given situation is not directly based on the actual data; rather, it is based on the calculation of the needs of the given situation.

Inventors:
FRÜH HANS RUDOLF (CH)
KEUSCH DOMINIK (CH)
VON RICKENBACH JANNIK (CH)
Application Number:
PCT/EP2017/075574
Publication Date:
December 27, 2018
Filing Date:
October 06, 2017
Assignee:
ZHONGRUI FUNING ROBOTICS SHENYANG CO LTD (CN)
International Classes:
B25J13/08; B25J11/00; B25J15/00; B25J15/02; B25J19/06
Foreign References:
US20150314454A12015-11-05
US20160375578A12016-12-29
EP2933064A12015-10-21
EP2933065A12015-10-21
Other References:
None
Attorney, Agent or Firm:
JALINK, Cornelis et al. (CH)
Claims:

1. Method for controlling the activities of a robot, whereby the robot comprises

- a situation manager which is divided into a situation network for determining needs and an action network for determining the actions for satisfying the needs

- a planner for prioritizing actions proposed by the situation manager and optionally by an input device

- a sensor for detecting signals, the method comprising the following steps:

Step 1: detect a signal by means of the sensor

Step 2: analyze the signal

Step 3: classify the signal

Step 4: determine the needs by means of the situation network

Step 5: determine the actions for satisfying the needs determined by the situation network

Step 6: determine the actions triggered by the input device

Step 7: prioritize the actions by the planner

Step 8: execute the action with the highest priority

Step 9: repeat steps (1) to (8)

2. Method according to claim 1 whereby the input device is a user input device and/or a scheduler and/or an emergency controller.

3. Method according to claim 1 or 2 whereby the situation network and/or the action network is based on a probability model.

4. Method according to any of the previous claims whereby the situation manager receives information from an information pool, whereby the information pool refers to a sensor and/or the Internet of Things and/or a user database and/or a history and/or Open Platform Communication channels.

5. Method according to claim 4 whereby the information received by the situation manager from the information pool is classified by a feature preparation task.

6. Robot for performing the method according to claims 1 to 5, whereby the robot comprises a planner for prioritizing tasks received from a situation manager and optionally from an input device, characterized in that the situation manager is divided into a situation network for determining needs and an action network for determining the actions for satisfying the needs.

7. Robot according to claim 6 whereby the input device is a user input device and/or a scheduler and/or an emergency controller.

8. Robot according to claim 6 or 7 whereby the situation network and/or the action network is based on a probability model.

9. Robot according to claims 6 to 8 whereby the situation manager receives information from an information pool, whereby the information pool refers to a sensor and/or the Internet of Things and/or a user database and/or a history and/or Open Platform Communication channels.

10. Robot according to claim 9 whereby the information received by the situation manager from the information pool is classified by a feature preparation task.

Description:
Method for controlling the activities of a robot

Background of the invention

Human tasks in personal care are increasingly being replaced by autonomous care robots which assist in satisfying the needs of everyday life in hospital or home-care settings. This holds particularly for the care of persons with psychological or cognitive insufficiencies or illnesses, for instance dementia. Care robots are equipped with devices for gathering information about the person of care and the service environment, e.g. sensors, microphones, cameras or smart devices related to the Internet of Things, and with means for executing actions, e.g. devices for gripping, moving and communicating. Human-robot interaction is achieved by intelligent functions, for instance voice recognition or the recognition of facial expressions or tactile patterns. These functions can also be imitated by the robot in the care situation, for instance by speech or gesture generation, or by the generation of emotional feedback.

For robot-assisted care it is challenging to determine the actual needs of the person of care and of the service environment and to execute the appropriate actions. Needs of the person are for instance hunger, thirst, or the want for rest, emotional attention or social interaction. Needs of the service environment are for instance the requirement to clear the table, to tidy up the kitchen or to refill the refrigerator. The appropriate actions are those which satisfy the needs. In general, the needs and actions cannot be determined only on the basis of the actual situation; rather, they depend on the history of needs.

Summary of the invention

The invention relates to a method for controlling the activities of a robot, whereby the robot comprises a situation manager, which is divided into a situation network for determining needs and an action network for determining the actions for satisfying those needs, a planner for prioritizing actions proposed by the situation manager and, optionally, by an input device, and a sensor for detecting an event. Both the situation network and the action network are based on probability models.

Subdividing the situation manager into a situation network and an action network has the effect that the calculation of the proper action for a given situation is not directly based on the actual data; rather, it is based on the calculation of the needs of the given situation.

Needs of the person of care are for instance hunger, thirst, or the want for rest or emotional attention. Needs of the service environment are for instance to clear the table, to tidy up the kitchen or to refill the refrigerator.

Actions for satisfying the needs are for instance to bring an object to the person, to take it away from the person, to give emotional feedback by voice generation or emotional image display, to clear the table or to tidy up the kitchen.

The situation manager according to the present invention is subdivided into a situation network and an action network. The situation network is designed as an artificial neural network for decision making about the situation needs, i.e. the needs in a given situation. The situation needs represent the cumulated needs of the person of care and of the service environment over time, which means the situation needs are based on the history of needs.

The action network is an artificial neural network which derives the proper actions for the situation needs. Both the situation network and the action network are based on a probability model.

Subdividing the situation manager into a situation network and an action network has the effect that the calculation of the proper actions for a given situation is not directly based on the actual data; rather, it is based on the calculation of the needs of the given situation.
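
By way of illustration, this two-stage calculation might be sketched as follows in Python. The patent does not specify network architectures, so the small feed-forward networks with softmax outputs below, and all names and dimensions, are assumptions standing in for the probability models.

    import numpy as np

    def softmax(z):
        # Turn raw scores into a probability distribution.
        e = np.exp(z - z.max())
        return e / e.sum()

    class ProbabilityNetwork:
        # Minimal feed-forward network with one hidden layer whose output
        # is a probability distribution; a hypothetical stand-in for the
        # situation and action networks.
        def __init__(self, n_in, n_hidden, n_out, rng):
            self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
            self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        def __call__(self, x):
            h = np.tanh(x @ self.w1)
            return softmax(h @ self.w2)

    rng = np.random.default_rng(0)
    situation_net = ProbabilityNetwork(8, 16, 4, rng)  # features -> need probabilities
    action_net = ProbabilityNetwork(4, 16, 5, rng)     # needs -> action probabilities

    features = rng.normal(size=8)    # classified features from the information pool
    needs = situation_net(features)  # e.g. hunger, thirst, rest, emotional attention
    actions = action_net(needs)      # actions are derived from the needs,
                                     # not directly from the raw features

The point of the sketch is the data flow: the action network sees only the need distribution, so the choice of action is decoupled from the raw data.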

The situation manager obtains input from an information pool. The information pool comprises signals from sensors and Internet of Things (IoT) devices, a user database and a history. Sensors according to the present invention are for instance a microphone for detecting voice patterns, a camera for detecting facial expression patterns, or a touch pad with tactile sensors for detecting tactile patterns of the person. The signals detected by the sensor can be analyzed through voice recognition, facial expression recognition or recognition of tactile patterns.

An IoT device is for instance a refrigerator with sensors for monitoring the expiry dates of its contents. The user-DB is a repository of information about the persons of care, for instance their names, current emotional state or position in the room. The history holds the historical data of the sensors and IoT channels, but also personal data, for instance the history of the emotional state and the history of actions of the robot. In addition, the information pool has access to Open Platform Communication channels, for instance for obtaining information about the battery status of the robot.
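
Read as a data structure, the information pool could be sketched like this; the field names are illustrative and not taken from the patent.

    from dataclasses import dataclass, field

    @dataclass
    class InformationPool:
        # Hypothetical container mirroring the sources named above.
        sensor_signals: dict = field(default_factory=dict)  # microphone, camera, touch pad
        iot_signals: dict = field(default_factory=dict)     # e.g. refrigerator expiry dates
        user_db: dict = field(default_factory=dict)         # names, emotional state, position
        history: list = field(default_factory=list)         # past signals, states and actions
        opc_channels: dict = field(default_factory=dict)    # e.g. {"battery_status": 0.8}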

Before information from the information pool can be used by the situation manager it has to go through feature preparation. Feature preparation concerns the classification of analyzed patterns, for instance by comparing the patterns with personalized patterns in the user-DB in order to derive the emotional state of the person, or by recognizing temporal trends of signals from IoT devices.
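
As a minimal sketch, such a classification could be a nearest-neighbour comparison against the personalized patterns in the user-DB; the patent only speaks of comparing patterns, so the distance-based matching rule below is an assumption.

    import numpy as np

    def classify_pattern(analyzed, personalized):
        # Return the label of the personalized pattern closest to the
        # analyzed feature vector (hypothetical matching rule).
        return min(personalized,
                   key=lambda label: np.linalg.norm(analyzed - personalized[label]))

    personalized_patterns = {            # illustrative entries from the user-DB
        "calm":     np.array([0.1, 0.2, 0.1]),
        "stressed": np.array([0.9, 0.8, 0.7]),
    }
    state = classify_pattern(np.array([0.8, 0.7, 0.9]), personalized_patterns)
    print(state)                         # -> "stressed"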

When prioritizing actions, the planner takes into account decisions by the situation manager and/or data from input devices such as a user input device, a scheduler or an emergency controller. An input device is a device through which the user orders an action directly, for instance a button for ordering a specific care action. The scheduler is a timetable of actions which have to be executed on a regular date and time basis, for instance serving the meal or bringing the medication. The emergency controller is able to recognize undesirable or adverse events, for instance signs of refusing or resisting the care robot, or a low battery status. The emergency controller has access to the information pool.

Prioritizing by the planner can, for instance, have the effect of pursuing the current action, i.e. continuing to assign it the highest priority, of suspending the current action, i.e. assigning it a lower priority, of cancelling the current action, i.e. deleting it from the action list, of starting a new action, or of resuming an action that has been previously suspended.
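
A planner with these effects could be sketched as a priority queue. The four urgency levels follow the ordering given in step 7 below; the class and method names are assumptions.

    import heapq
    import itertools

    # Urgency levels as listed in step 7 (lower value = higher priority).
    PRIORITY = {"emergency": 0, "input_device": 1, "scheduler": 2, "situation_manager": 3}

    class Planner:
        def __init__(self):
            self._queue = []                   # entries: (priority, tie-breaker, action)
            self._counter = itertools.count()  # preserves insertion order within a level
            self._suspended = []
        def propose(self, action, source):
            heapq.heappush(self._queue, (PRIORITY[source], next(self._counter), action))
        def suspend(self, action, source):
            # Set the action aside; cancelling is simply never re-queuing it.
            self._suspended.append((action, source))
        def resume(self):
            if self._suspended:
                self.propose(*self._suspended.pop())
        def next_action(self):
            # Pop the action with the highest priority, or None if idle.
            return heapq.heappop(self._queue)[2] if self._queue else None

    planner = Planner()
    planner.propose("serve the meal", "scheduler")
    planner.propose("give emotional feedback", "situation_manager")
    planner.propose("recharge battery", "emergency")
    print(planner.next_action())  # -> "recharge battery"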

The method for controlling the activities of a robot according to the present invention comprises the following steps; a sketch of the complete loop follows step 9:

Step 1: detect a signal by means of a sensor. By this step a signal or pattern related to the patient or to the service environment is captured. The signals or signal patterns refer for instance to a position signal, a voice pattern, an image pattern or a tactile pattern. In case the signal patterns refer to a tactile pattern, the sensor is a tactile sensor, which is for instance located in a touch pad of the robot. In case an emotional state pattern is detected by means of the sensor, the sensor is a microphone for detecting a voice pattern and/or a camera for detecting a facial expression pattern.

Step 2: analyze the signal. By this step the detected signal or pattern is interpreted or aggregated in order to extract features, for instance by means of time series analysis. In case the signal patterns refer to a tactile pattern, by this step the detected tactile pattern is interpreted in order to extract features, for instance by means of time series. In case an emotional state pattern is detected, by this step the detected emotional state pattern is interpreted in order to extract features, for instance by means of time series.

Step 3: classify the signal. By this step the analyzed features are classified, for instance by comparing the patterns with personalized patterns in the user-DB in order to derive the emotional state of the person, or by recognizing temporal trends of signals from IoT devices. In case the signal patterns refer to a tactile pattern, the tactile pattern is classified by means of personalized tactile patterns: the extracted features are classified, for instance, by comparing the tactile patterns with personalized tactile patterns in the user-DB. In case an emotional state pattern is detected, the emotional state pattern is classified by means of personalized emotional state patterns: the extracted features are classified, for instance, by comparing the emotional state patterns with personalized emotional state patterns in the user-DB.

Step 4: determine the needs of the person and of the service environment by means of the situation network. By this step the needs of the situation are calculated based on information from the information pool. The situation network is designed as an artificial neural network which is based on a probability model. The situation needs represent the cumulated needs of the person of care and of the service environment over time. Therefore, the calculation of the situation needs by the artificial neural network is based not only on the actual needs, but also on the history of needs.

Step 5: determine the actions for satisfying the needs determined by the situation network. By this step the proper actions for the needs of the situation are calculated. The action network is designed as an artificial neural network which is based on a probability model.

Step 6: determine actions triggered by an input device. By this step the actions triggered by an input device are determined. An input device is for instance a button for ordering a specific care action, or a scheduler for triggering actions which have to be executed on a regular date and time basis, or an emergency controller.

Step 7: prioritize the actions by the planner. By this step actions are prioritized according to an urgency measure, for instance from highest to lowest priority: (1) emergency actions, (2) actions ordered by an input device, (3) scheduled actions, (4) actions proposed by the situation manager.

Step 8: execute the action with the highest priority. By this step the most urgent action is executed.

Step 9: repeat steps (1) to (8) until a stop condition is reached. This step has the effect that the robot keeps operating until it is stopped by an external stop command.
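
Taken together, steps 1 to 9 amount to a sense-plan-act loop. The following sketch wires the pieces together; every component interface here is an assumption rather than part of the patent, and the action network is assumed to return the proposed actions for the needs.

    def control_loop(sensor, analyzer, classifier, situation_net, action_net,
                     input_device, planner, executor, stop_condition):
        # Hypothetical glue code for steps 1-9; all interfaces are assumed.
        while not stop_condition():
            signal = sensor.detect()                    # step 1: detect a signal
            features = analyzer.analyze(signal)         # step 2: analyze the signal
            classified = classifier.classify(features)  # step 3: classify the signal
            needs = situation_net(classified)           # step 4: determine the needs
            for action in action_net(needs):            # step 5: actions for the needs
                planner.propose(action, "situation_manager")
            for action in input_device.pending():       # step 6: input-device actions
                planner.propose(action, "input_device")
            action = planner.next_action()              # step 7: prioritize
            if action is not None:
                executor.execute(action)                # step 8: execute highest priority
            # step 9: the loop repeats until stop_condition() is True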

According to an embodiment of the invention the input device is a user input device and/or a scheduler and/or an emergency controller.

According to a preferred embodiment of the invention the situation network and/or the action network is based on a probability model.

According to an important embodiment of the invention the situation manager receives information from an information pool, whereby the information pool refers to a sensor and/or the Internet of Things and/or a user database and/or a history and/or Open Platform Communication channels.

According to a further embodiment of the invention the information received by the situation manager from the information pool is classified by a feature preparation task.

The invention also refers to a robot for performing the described method whereby the robot comprises a planner for prioritizing tasks received from a situation manager and optionally from an input device. The situation manager is divided into a situation network for determining needs and an action network for determining the actions for satisfying the needs.

According to an embodiment the input device is a user input device and/or a scheduler and/or an emergency controller.

According to a preferred embodiment the situation network and/or the action network is based on a probability model.

According to an important embodiment the situation manager receives information from an information pool, whereby the information pool refers to a sensor and/or the Internet of Things and/or a user database and/or a history and/or Open Platform Communication channels.

According to another embodiment the information received by the situation manager from the information pool can be classified by a feature preparation task.

According to a very important embodiment the sensor has an area of at least 16 mm². This allows, for example, a tactile pattern to be captured reliably by the sensor.

Finally, the sensor can be embedded into a soft tactile skin of the robot. This also allows, for example, a tactile pattern to be captured reliably by the sensor.

Brief description of drawings

Fig. 1 is a graph diagram showing the information flow and decision flow of the robot in accordance with the present invention.

Fig. 2a is a flow chart showing the flow of operations of the robot in the supervising mode.

Fig. 2b is a flow chart showing the flow of operations of the robot in the tactile interaction mode.

Fig. 2c is a flow chart showing the flow of operations of the robot in the social interaction mode.

Fig. 1 shows the information flow and decision flow of the personal care robot. The core component of the personal care robot is a planner. The task of the planner is to prioritize actions and to invoke the execution of actions in a given care situation. Actions are for instance to change the position, to bring an object or to take it away, or to tidy up the kitchen. When prioritizing actions, the planner takes into account decisions by the situation manager and/or data from input devices like a user input device, a scheduler or an emergency controller.

The task of the situation manager is to provide the planner with the actions that satisfy the needs of the person of care, e.g. hunger, thirst or stress reduction, and of the service environment in a given situation. The situation manager reacts on request by the planner. The situation manager according to the present invention is subdivided into a situation network and an action network. The situation network is designed as an artificial neural network for decision making about the situation needs, i.e. the needs in the given situation. The situation needs represent the cumulated needs of the person of care and of the service environment over time, which means the situation needs are based on the history of needs.

The action network is an artificial neural network which derives the proper actions for the situation needs. Both the situation network and the action network are based on a probability model.

Subdividing the situation manager into a situation network and an action network has the effect that the calculation of the proper actions for a given situation is not directly based on the data of the information pool; rather, it is based on the separate calculation of the needs for the given situation.

The situation manager obtains input from an information pool. The information pool comprises information from sensors and IoT devices, a user-DB and a history.

Sensors according to the present invention are for instance a microphone, a camera or a touch pad. An IoT device can be a refrigerator or another smart device. The user-DB is a repository of information about the persons of care, for instance their names, current emotional states or current positions in the room. The history holds the history of data of the sensors and IoT channels as well as the history of states of the persons of care and the history of actions of the robot. In addition, the information pool has access to Open Platform Communication channels, for instance for obtaining information about the battery status of the robot.

Before information from the information pool can be used by the situation manager it has to go through feature preparation. Feature preparation concerns the classification or aggregation of information, for instance the classification of voice signals via voice recognition, the classification of touching via tactile recognition, the classification of emotional states via facial expression recognition, or the aggregation of information from smart devices for recognizing trends.

An input device can be a button with an associated function or a touch screen. The scheduler is a timetable of actions which have to be executed on a regular date and time basis, for instance bringing the meal or providing the medication. The emergency controller is able to recognize undesirable or adverse events, for instance actions of refusing or resisting the care robot, or a low battery status. The emergency controller has access to the information pool.

Prioritizing by the planner can, for instance, have the effect of pursuing the current action, i.e. continuing to assign it the highest priority, of suspending the current action, i.e. assigning it a lower priority, of cancelling the current action, i.e. deleting it from the action list, of starting a new action, or of resuming an action that has been previously suspended.

Fig. 2a shows the flow of operations of the robot in the supervising mode. The method comprises the following steps:

Step 1: detect a signal by means of a sensor. By this step a signal or pattern related to the patient or to the service environment is captured. The signals or signal patterns refer for instance to a position signal, a voice pattern, an image pattern or a tactile pattern.

Step 2: analyze the signal. By this step the detected signal or pattern is interpreted or aggregated in order to extract features, for instance by means of time series analysis.

Step 3: classify the signal. By this step the analyzed features are classified, for instance by comparing the patterns with personalized patterns in the user-DB in order to derive the emotional state of the person, or by recognizing temporal trends of signals from IoT devices.

Step 4: determine the needs of the person and of the service environment by means of the situation network. By this step the needs of the situation are calculated based on information from the information pool. The situation network is designed as an artificial neural network which is based on a probability model. The situation needs represent the cumulated needs of the person of care and of the service environment over time. Therefore, the calculation of the situation needs by the artificial neural network is based not only on the actual needs, but also on the history of needs.

Step 5: determine the actions for satisfying the needs determined by the situation network. By this step the proper actions for the needs of the situation are calculated. The action network is designed as an artificial neural network which is based on a probability model.

Step 6: determine actions triggered by an input device. By this step the actions triggered by an input device are determined. An input device is for instance a button for ordering a specific care action, or a scheduler for triggering actions which have to be executed on a regular date and time basis, or an emergency controller.

Step 7: prioritize the actions by the planner. By this step actions are prioritized according to an urgency measure, for instance from highest to lowest priority: (1) emergency actions, (2) actions ordered by an input device, (3) scheduled actions, (4) actions proposed by the situation manager.

Step 8: execute the action with the highest priority. By this step the most urgent action is executed.

Step 9: repeat steps (1) to (8) until a stop condition is reached. This step has the effect that the robot keeps operating until it is stopped by an external stop command.

Fig. 2b shows the flow of operations of the robot in the tactile interaction mode. The method comprises the following steps:

Step 1: detect a tactile pattern by a sensor. By this step a tactile pattern related to the patient is captured.

Step 2: analyze the tactile pattern by an analyzing unit. By this step the detected tactile pattern is interpreted or aggregated in order to extract features, for instance by means of time series analysis.

Step 3: classify the tactile pattern by means of personalized tactile patterns. By this step the analyzed features are classified, for instance by comparing the patterns with personalized patterns in the user-DB in order to derive the emotional state of the person, or by recognizing temporal trends of signals from IoT devices.

Step 4: determine the needs of the person by means of the situation network. By this step the needs of the situation are calculated based on information from the information pool. The situation network is designed as an artificial neural network which is based on a probability model. The situation needs represent the cumulated needs of the person of care and of the service environment over time. Therefore, the calculation of the situation needs by the artificial neural network is based not only on the actual needs, but also on the history of needs.

Step 5: determine the actions for satisfying the needs determined by the situation network. By this step the proper actions for the needs of the situation are calculated. The action network is designed as an artificial neural network which is based on a probability model.

Step 6: determine actions triggered by an input device. By this step the actions triggered by an input device are determined. An input device is for instance a button for ordering a specific care action, or a scheduler for triggering actions which have to be executed on a regular date and time basis, or an emergency controller.

Step 7: prioritize the actions by the planner. By this step actions are prioritized according to an urgency measure, for instance from highest to lowest priority: (1) emergency actions, (2) actions ordered by an input device, (3) scheduled actions, (4) actions proposed by the situation manager.

Step 8: execute the action with the highest priority. By this step the most urgent action is executed.

Step 9: repeat steps (1) to (8) until a stop condition is reached. This step has the effect that the robot keeps operating until it is stopped by an external stop command.

Fig. 2c shows the flow of operations of the robot in the social interaction mode. The method comprises the following steps:

Step 1: detect an emotional state pattern by a sensor. By this step an emotional state pattern related to the patient is captured.

Step 2: analyze the emotional state pattern by an analyzing unit. By this step the detected emotional state pattern is interpreted or aggregated in order to extract features, for instance by means of time series analysis.

Step 3: classify the emotional state pattern by means of personalized emotional state patterns. By this step the analyzed features are classified, for instance by comparing the patterns with personalized patterns in the user-DB in order to derive the emotional state of the person, or by recognizing temporal trends of signals from IoT devices.

Step 4: determine the needs of the person by means of the situation network. By this step the needs of the situation are calculated based on information from the information pool. The situation network is designed as an artificial neural network which is based on a probability model. The situation needs represent the cumulated needs of the person of care and of the service environment over time. Therefore, the calculation of the situation needs by the artificial neural network is based not only on the actual needs, but also on the history of needs.

Step 5: determine the actions for satisfying the needs determined by the situation network. By this step the proper actions for the needs of the situation are calculated. The action network is designed as an artificial neural network which is based on a probability model.

Step 6: determine actions triggered by an input device. By this step the actions triggered by an input device are determined. An input device is for instance a button for ordering a specific care action, or a scheduler for triggering actions which have to be executed on a regular date and time basis, or an emergency controller.

Step 7: prioritize the actions by the planner. By this step actions are prioritized according to an urgency measure, for instance from highest to lowest priority: (1) emergency actions, (2) actions ordered by an input device, (3) scheduled actions, (4) actions proposed by the situation manager.

Step 8: execute the action with the highest priority. By this step the most urgent action is executed.

Step 9: repeat steps (1) to (8) until a stop condition is reached. This step has the effect that the robot keeps operating until it is stopped by an external stop command.