

Title:
ROBOT TRAJECTORY OR PATH LEARNING BY DEMONSTRATION
Document Type and Number:
WIPO Patent Application WO/2017/088888
Kind Code:
A1
Abstract:
A system and method for trajectory learning of an associated robot by demonstration from a user. The method comprises operation by the user in a real-time learning session: controlling movement of position in space of the tool center point (TCP) of the robot by operating by the user's one hand a first control element, e.g. a joystick mounted near the TCP, and connected to the controller of the robot. Further, controlling orientation of the TCP by operating by the user's second hand a second control element, e.g. a second joystick, connected to the controller of the robot. Data are logged in real-time during the learning session in response to the user's operation of the control elements, so as to allow later control of the robot in response to the data logged during the learning session. The splitting of position and orientation control between the user's two hands provides an intuitive control of the robot, which allows precise and fast trajectory tracking even in complicated geometries. This is even more pronounced in embodiments where the control elements are two three-axis joysticks mounted at different positions on the robot.

Inventors:
CORTSEN JENS (DK)
Application Number:
PCT/DK2016/050391
Publication Date:
June 01, 2017
Filing Date:
November 23, 2016
Assignee:
SCIENCE VENTURES DENMARK AS (DK)
International Classes:
G05B19/423; G05B19/421; G05B19/427
Domestic Patent References:
WO2015090324A12015-06-25
Foreign References:
US6385508B12002-05-07
US6285920B12001-09-04
DE202008014481U12009-01-15
US6212443B12001-04-03
EP2342608A12011-07-13
Attorney, Agent or Firm:
PLOUGMANN VINGTOFT A/S (DK)
Claims:
CLAIMS

1. A robot learning system for trajectory learning of an associated robot (RB) comprising a robot arm between a base and a tool center point (TCP), by demonstration from a user, the system comprising

- a user interface arranged for connection to a controller of the robot (CTL), so as to allow the user to control the robot arm, wherein the user interface comprises

- a first control element (J1) arranged to control position in space of the tool center point (TCP) of the robot (RB), and

- a second control element (J2), wherein the first and second control elements (J1, J2) are arranged so as to allow the user to operate the first control element (J1) by one hand, and to simultaneously operate the second control element (J2) by the user's second hand,

- a processor (CTL) arranged to log data (DT) in response to the user's operation of the first and second control elements (J1, J2) during the learning session, so as to allow later control of the robot (RB) in response to the data (DT) logged during the learning session characterized in that the second control element (J2) is arranged to control orientation of the tool center point (TCP) of the robot (RB), and wherein the user interface allows a user to control the robot arm in order to make the tool center point (TCP) follow a desired trajectory during a real-time learning session.

2. Robot learning system according to claim 1, wherein at least one of the first and second control elements (J1, J2) is arranged for mounting on the robot arm.

3. Robot learning system according to claim 1 or 2, wherein both of the first and second control elements (J1, J2) are arranged for mounting on respective positions on the robot arm.

4. Robot learning system according to any of the preceding claims, wherein the first control element (J1) is arranged for mounting at or near the tool center point (TCP) of the robot (RB), and wherein the second control element (J2) is arranged for mounting at another position on the robot arm, at a distance of 10-150 cm away from the first control element (J1), and closer to the base of the robot (RB) than the first control element (J1).

5. Robot learning system according to any of the preceding claims, wherein the first and second control elements (J1, J2) are positioned with a mutual distance of 10 cm to 150 cm, preferably 20 cm to 100 cm, preferably 45 cm to 80 cm.

6. Robot learning system according to any of the preceding claims, wherein the first control element (J1) comprises a first three-axis joystick (J1) arranged for operation by the user's one hand, and wherein the second control element (J2) comprises a second three-axis joystick (J2) for simultaneous operation by the user's second hand.

7. Robot learning system according to claim 6, wherein the first joystick (J1) is arranged for connection to the controller of the robot (RB), so as to move the tool center point (TCP) of the robot (RB) in at least two orthogonal directions (X, Y) upon activation of the first joystick (J1) in corresponding directions.

8. Robot learning system according to claim 6 or 7, wherein the second joystick (J2) is arranged for connection to the controller of the robot (RB), so as to tilt or rotate the tool center point (TCP) upon activation of the second joystick (J2) in corresponding directions.

9. Robot learning system according to any of the preceding claims, comprising a probe sensor (PS) arranged to be mounted on the tool center point (TCP) during the real-time learning session, wherein the probe sensor (PS) is arranged to measure at least one parameter indicative of position of the tool center point (TCP) and a surface forming the trajectory to be followed, and wherein the probe sensor (PS) is arranged to generate a signal (PTL_IN) corresponding to said at least one parameter, and wherein the processor (CTL) is arranged to log said data representing said signal (PTL_IN) from the probe sensor (PS) during the learning session.

10. Robot learning system according to claim 9, wherein the probe sensor (PS) is arranged to measure at least a parameter indicative of a distance between the tool center point (TCP) and a tip (PT) of the probe sensor (PS), and wherein the probe sensor (PS) is arranged to measure at least a parameter indicative of an orientation between the tool center point (TCP) and a tip (PT) of the probe sensor (PS).

11. Robot learning system according to claims 9 or 10, wherein the processor (CTL) is programmed to calculate a transformation of the robot coordinates in response to the logged data representing said signal (PTL_IN) from the probe (PS) sensor during the learning session, such as to calculate a transformation of the robot coordinates in response to an input regarding physical properties of a tool to be mounted on the tool center point (TCP) during control of the robot (RB) in response to the data logged during the learning session.

12. Robot learning system according to any of the preceding claims, comprising a safety system arranged to cause the controller of the robot to stop movement of the robot (RB) during a learning session, unless the safety system senses that the user touches both of the first and second control elements (J1, J2).

13. Robot learning system according to any of the preceding claims, wherein the processor (CTL) is arranged to control the robot (RB) in response to the data (DT) logged during the learning session, and wherein the processor (CTL) is

programmed to calculate a transformation of the robot coordinates in response to an input regarding physical properties, such as length, of a tool to be mounted on the tool center point (TCP) during control of the robot (RB) in response to the data (DT) logged during the learning session.

14. A robot system comprising - a robot (RB) comprising a robot arm with a plurality of moveable arm elements arranged between a base and a tool center point (TCP), wherein the tool center point (TCP) is arranged for mounting of a tool, such as a welding tool,

- a controller (CTL) arranged to control movement of the robot (RB), and

- a robot learning system according to any of claims 1-13.

15. A method for trajectory learning of an associated robot (RB) by demonstration from a user to make the tool center point (TCP) follow a desired trajectory, the method comprising

- controlling movement of position in space (C_P) of the tool center point of the robot (RB) by operating by the user's one hand a first control element (J1) connected to the controller of the robot (RB), and

- logging data (LG_D) in response to the user's operation of the first and second control elements (J1, J2) during the learning session, so as to allow later control of the robot (RB) in response to the data (DT) logged during the learning session characterized in that the method comprises controlling, during a real-time learning session, orientation (C_OR) of the tool center point (TCP) of the robot (RB) by operating by the user's second hand a second control element (J2) connected to the controller of the robot (RB).

Description:
ROBOT TRAJECTORY OR PATH LEARNING BY DEMONSTRATION

FIELD OF THE INVENTION

The present invention relates to the field of robots, especially to a method and system for trajectory or path learning of a robot by human demonstration, e.g. for learning of a welding operation by human demonstration.

BACKGROUND OF THE INVENTION

Robots are suited for repeating tasks with a high precision. Thus, robots are suited for performing working processes, e.g. welding, laser cutting, spraying etc., where the same sequence of motion for following a trajectory or path must be followed precisely for each object to be processed, i.e. performing a sequence of leading a process tool through the same trajectory, both with respect to position in space of the tip of the process tool, and with respect to orientation of the process tool.

Different methods exist for trajectory learning of robots. Some methods are based on programming, whereas other methods are based on a human user guiding the robot through the trajectory. Such methods are suited for robot learning in case of robots which are supposed to carry out simple tasks without high demands on the precision of the trajectory to be followed. Thus, human demonstration methods are often complicated for high precision tasks, such as following a complicated shape of an object, e.g. for welding along a trace. For such tasks, it is often necessary to move the robot in small steps to ensure for each step that the process tool tip is in the right position, and that the orientation of the process tool is also correct for the process to be performed.

Thus, such a sequence of manually recording single steps, until the complete trajectory has been followed, is complicated and very time consuming, especially in case of complicated shapes to be followed. Further, since the trajectory has been recorded in single steps, there may be problems in the translation into a continuous sequence to be carried out by the robot, due to its limitations with respect to maximum speed and accelerations etc. This has to be done by off-line processing.

US 6,212,443 describes a robot learning system with one single combined position and orientation controller in the form of a handle which is operated by the user's one hand. This handle is fixed to a force sensor which senses the operator's movements and controls the robot accordingly, thereby allowing the user to control position and orientation of a welding torch positioned on the tool center point of the robot. The user holds in her/his other hand a teaching apparatus which has a servo power switch to be held down by the user for safety reasons.

DE 20 2008 014481 U1 describes a robot controller with a control device separate from the robot, wherein the control device has two joysticks for controlling position of a robot arm.

EP 2 342 608 describes a method for programming an industrial robot. The industrial robot is moved manually in particular along a space curve. A distance between the industrial robot and an object is determined, and the drives of the industrial robot are controlled such that the industrial robot can be moved manually only at a maximum speed, which is dependent on the distance determined from the object.

WO 2015/090324 A1 describes a system for switching between control points of a robotic system involving an industrial robot including a robot arm with a number of joints and provided with a tool interest point movable in a plurality of degrees of freedom.

SUMMARY OF THE INVENTION

Thus, according to the above description, it is an object of the present invention to provide an efficient robot learning system and method which allows fast robot learning of complicated shaped trajectories.

In a first aspect, the invention provides a robot learning system for trajectory or path learning of an associated robot comprising a robot arm, preferably a robot arm with a plurality of moveable arm elements, between a base and a tool center point (TCP), by demonstration from a user, the system comprising - a user interface arranged for connection to a controller of the robot, so as to allow the user to control the robot arm in order to make the TCP follow a desired trajectory during a real-time learning session, wherein the user interface comprises

- a first control element arranged to control position in space of the TCP of the robot, and

- a second control element arranged to control orientation of the TCP of the robot,

wherein the first and second control elements are arranged so as to allow the user to operate the first control element by one hand, and to simultaneously operate the second control element by the user's second hand, and

- a processor arranged to log data in response to the user's operation of the first and second control elements during the learning session, so as to allow later control of the robot in response to the data logged during the learning session.

Such a robot learning system is advantageous, since the splitting of control of position and orientation between the user's two hands allows the user to intuitively control movement of the robot TCP for following a desired trajectory, e.g. with a dummy tool or a probe sensor mounted on the TCP of the robot. Especially, it has been found that the system is suited for the user to demonstrate complex shaped trajectories to the robot in a continuous manner, e.g. for teaching a welding operation to the robot. The system can thus provide fast and rather easy trajectory learning by demonstration, even in case of complex trajectories or paths to be tracked.

Since the intuitive user interface allows the trajectory to be followed in real-time, the task of off-line processing the recorded sequence to take into account limitations of the robot is eliminated, or at least significantly reduced.

It is to be understood that the first and second control elements may be designed in many different ways using different technologies, e.g. using push knobs, dial wheels, or the like, giving the user a three-axis control. In a preferred implementation, three-axis joysticks are used for both position and orientation control; preferably, the position joystick is connected such that moving the joystick horizontally controls horizontal position, and rotating the joystick controls vertical position.
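Purely as an illustration of the position-joystick mapping described above (the patent specifies no software interface, so the function name, axis conventions, and velocity scale below are assumptions), such a mapping could be sketched as:

```python
# Hypothetical sketch of the position-joystick mapping: horizontal deflection
# of the stick drives the TCP in X and Y, while twisting (rotating) the stick
# drives the vertical Z position. Axis names and the velocity scale are
# illustrative assumptions, not part of the patent.

def position_command(deflect_x, deflect_y, twist, scale=0.05):
    """Map three joystick axes (each in -1..1) to a TCP velocity
    command (vx, vy, vz) in metres per second."""
    return (scale * deflect_x,   # horizontal deflection -> X velocity
            scale * deflect_y,   # horizontal deflection -> Y velocity
            scale * twist)       # twisting the stick    -> Z (vertical)

# Example: full forward deflection, no sideways motion, slight upward twist
vx, vy, vz = position_command(1.0, 0.0, 0.2)
```

The orientation joystick would be mapped analogously to tilt and rotation rates of the TCP.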

The two-hand robot control principle allows the user to demonstrate the desired trajectory in an easy way in a real-time learning session, and by logging data from the control elements during the real-time learning session at a desired sampling rate, the robot can be controlled to follow the trajectory in a later playback session with a real tool mounted on the TCP, e.g. a welding tool or the like.
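A minimal sketch of this log-then-replay idea, assuming hypothetical `read_tcp_pose()` and `send_target()` functions for the robot controller interface (the patent does not define such an API):

```python
import time

# Sketch, under assumed interfaces: during the learning session the TCP pose
# is sampled at a fixed rate; in a later playback session the same sequence
# is fed back to the controller as target poses.

def record_session(read_tcp_pose, duration_s, rate_hz=100):
    """Log (timestamp, pose) samples during the real-time learning session."""
    log, period = [], 1.0 / rate_hz
    t0 = time.monotonic()
    while (t := time.monotonic() - t0) < duration_s:
        log.append((t, read_tcp_pose()))
        time.sleep(period)
    return log

def play_back(log, send_target):
    """Replay the logged poses, e.g. with a welding tool mounted on the TCP."""
    for _, pose in log:
        send_target(pose)
```

In practice the sampling rate would be chosen to match the controller's control cycle, and playback would interpolate between samples rather than jump between them.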

An especially intuitive control can be obtained using joysticks for both position and orientation control, where the position control joystick is mounted at or near the TCP of the robot, and connected to the controller of the robot such that the TCP follows horizontal movements of the user on the joystick, and where the orientation joystick is mounted on the robot arm, at a distance from the position joystick for comfortable operation, e.g. a distance comparable with the width of the user's shoulders or within the range of the user's shoulder width +/- 30 cm. In such an embodiment, the user intuitively controls both position and orientation of the TCP, and thus the task of demonstrating a desired trajectory in real-time becomes a rather easy task for the user. Even though the user is close to the robot, safety features can easily be incorporated to stop motion of the robot when the user does not touch both control elements simultaneously, and when joysticks are used, these are preferably supplied with springs so that they return to a neutral position when not being actively operated by the user.

For control of smaller robots, the first and second control elements can advantageously be mounted on the robot so as to provide an intuitive control during the learning process. For control of large robots, a separate control console with the two control elements, e.g. joysticks, mounted thereon may be preferred.

In the following, preferred features and embodiments of the invention will be described.

In some embodiments, at least one of the first and second control elements is arranged for mounting on the robot arm, to provide an intuitive feeling upon operation by the user during the learning session. Especially, both of the first and second control elements are arranged for mounting on respective positions on the robot arm. The first control element may be arranged for mounting at or near the TCP of the robot, while the second control element is arranged for mounting at another position on the robot arm, e.g. at a distance of 10-150 cm away from the first control element, and closer to the base of the robot than the first control element. A specially intuitive control is obtained when the user can control position of the TCP by operating a control element mounted at or near the TCP, thus having the feeling that the TCP follows operation movements by the user. Specifically, the second control element may be arranged for mounting at another arm element of the robot than the first control element, e.g. at a joint (an elbow) between two arm elements of the robot. The first and second control elements may be positioned with a mutual distance of 10 cm to 150 cm, preferably 20 cm to 100 cm, more preferably 45 cm to 80 cm, so as to provide a comfortable position of the hands of the user during the learning session.

The first control element may comprise a first three-axis joystick arranged for operation by the user's one hand, and the second control element may comprise a second three-axis joystick for simultaneous operation by the user's second hand. Such joysticks can provide an intuitive feeling in the control of movements of the robot in the learning session, especially with respect to horizontal position of the TCP, but also with respect to orientation, e.g. tilting of the TCP. Especially, the first joystick may be arranged for connection to the controller of the robot, so as to move the TCP of the robot in at least two orthogonal directions upon activation of the first joystick in corresponding directions, and the second joystick may be arranged for connection to the controller of the robot, so as to tilt or rotate the TCP upon activation of the second joystick in corresponding directions. Hereby, a highly intuitive control of the robot TCP movements during the learning session can be obtained. Industrial three-axis joysticks exist which are suited for this purpose.

It is to be understood that other available and known user interface control elements may be used instead of joysticks for the first and second control elements, e.g. push knobs, dial elements, or a combination of these etc.

The system may comprise a probe sensor arranged to be mounted on the TCP during the real-time learning session, wherein the probe sensor is arranged to measure at least one parameter indicative of position of the TCP and a surface forming the trajectory to be followed. The probe sensor is preferably arranged to generate a signal corresponding to said at least one parameter, e.g. distance, and the processor is arranged to log said data representing said signal from the probe sensor during the learning session. This provides input for calculation of a transformation of robot coordinates from the learning session, together with the other logged data, so as to adapt the motion sequence of the robot in order to match its motion to a tool of a given length when mounted on the TCP during the actual operation to be performed by the robot. Especially, the probe sensor may be arranged to measure at least a parameter indicative of a distance between the TCP and a tip of the probe sensor. E.g. the probe sensor may comprise a spring-loaded rod arranged to be compressed, and it is preferably arranged to sense a parameter indicative of its length, or the distance between the tip of the rod and the TCP. This makes the task of controlling the motion sequence of the robot to follow a trajectory easier, since the position precision requirement is relaxed: the tip of the rod following the trajectory is flexibly arranged in relation to the TCP. Hereby, especially a trace, e.g. a welding trace, can very easily be followed by the tip of the rod. Especially, the probe sensor may be arranged to measure at least a parameter indicative of an orientation between the TCP and a tip of the probe sensor, e.g. in the form of a sensor mounted at the TCP and connected to a rod with a flexible length.
In embodiments with a probe sensor as described, the processor is preferably programmed to calculate a transformation of the robot coordinates in response to the logged data representing said signal from the probe sensor during the learning session, such as to calculate a transformation of the robot coordinates in response to an input regarding physical properties of a tool to be mounted on the TCP during control of the robot in response to the data logged during the learning session.
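As an illustration of such a tool-length transformation (the patent does not specify the calculation, so the data layout and function below are assumptions): if the probe tip traced the trajectory at a distance `probe_len` along the tool axis during learning, and the real tool has length `tool_len`, each logged TCP position can be shifted along the logged tool-axis direction so that the tool tip follows the same trace.

```python
# Hedged sketch of a tool-length coordinate transformation. Each logged
# sample is assumed to be (position, axis): a 3-tuple TCP position and a
# unit vector pointing from the TCP toward the tool tip. These names are
# illustrative, not taken from the patent.

def transform_for_tool(logged, probe_len, tool_len):
    """Shift logged TCP positions so a tool of length tool_len reaches
    the same tip trajectory that a probe of length probe_len traced."""
    shift = probe_len - tool_len  # negative if the tool is longer
    return [tuple(p[i] + shift * a[i] for i in range(3))
            for p, a in logged]

# Example: probe of 0.10 m replaced by a 0.15 m welding tool; the tool axis
# points straight down, so the TCP must retreat 0.05 m upward.
samples = [((0.0, 0.0, 0.5), (0.0, 0.0, -1.0))]
adjusted = transform_for_tool(samples, 0.10, 0.15)
```

A full implementation would use homogeneous transforms to handle tool offsets that are not purely along one axis.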

The processor of the system may be implemented in various ways, as known by the skilled person. E.g. it may form part of the controller of the robot, or it may form part of a computer system separate from the controller of the robot, such as a laptop computer with suitable software. In the same way, the logged data may be stored or recorded in the memory contained within the controller of the robot, and/or on a separate computer or data storage system being in a wireless or wired connection with the user interface and/or the controller of the robot so as to log and store the sampled data for later use.

The system preferably comprises a safety system in order to protect the user from injuries during the learning session, especially in cases where the user is within the reach of the robot arm during the learning session. Such safety system may be arranged to cause the controller of the robot to stop movement of the robot during a learning session, unless the safety system senses that the user touches both of the first and second control elements. Especially, the safety system may comprise a sensor positioned on each of the first and second control elements for sensing contact with the user's hand, such as a sensor comprising a push button, such as a sensor arranged to sense electrical contact to the user's hand.
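The two-hand dead-man rule described above reduces to a simple condition checked every control cycle. The sketch below is an illustrative assumption of how such logic could look; the sensor and controller interfaces are not specified by the patent:

```python
# Sketch of the two-hand safety rule: motion is enabled only while BOTH
# control elements sense contact with the user's hands. The function names
# and the callback-based stop mechanism are illustrative assumptions.

def motion_enabled(touch_first: bool, touch_second: bool) -> bool:
    """The robot may move only while both control elements report contact."""
    return touch_first and touch_second

def safety_step(touch_first, touch_second, stop_robot):
    """Call once per control cycle; stops the robot if either hand lifts.

    Returns True if motion remains enabled, False if the robot was stopped.
    """
    if not motion_enabled(touch_first, touch_second):
        stop_robot()
        return False
    return True
```

Combined with spring-return joysticks that fall back to a neutral (zero-velocity) position, releasing either hand brings the robot to a halt.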

To allow utilization of the learned trajectory in a working process, the processor is preferably arranged to control the robot in response to the data logged during the learning session. The processor may be programmed to calculate a transformation of the robot coordinates in response to an input regarding physical properties, such as length, of a tool to be mounted on the TCP during control of the robot in response to the data logged during the learning session.

The processor may be arranged to log data from the first and second control elements directly, and/or the processor may be arranged to indirectly log data in response to the user's operation of the first and second control elements by logging data from the controller of the robot, during the learning session.

It is to be understood that the robot may be a robot based on any known robot actuator technology arranged for positioning in order to follow a trajectory or path in space. The robot arm may have two, three, four, five or more joint arm elements between the base and the TCP. Some existing robots allow for logging of control data from the controller during an operation sequence of the robot, which can be used in a system according to the first aspect, e.g. for logging on a general purpose computer. The 'controller of the robot' is to be understood as the processing means which generates the (electrical) control signals to be applied to the various actuators of the robot for controlling the motion of the robot.

The skilled person will know how to program the controller of the robot to provide a desired translation of the signals from the control element (e.g. joystick), so as to have a suitable translation between movements of the control element (e.g. joystick) and robot movement. Especially, it may be preferred that the position and orientation of the TCP is controlled such that a tip of a sensor probe mounted on the TCP, which may be in contact with an object to follow, can be controlled independently with respect to position and orientation by the first and second control elements, respectively. This may facilitate the trajectory following task further for the user, during the learning session.

It is to be understood that the system according to the first aspect can be provided as a stand-alone system to be mounted on existing robots and robot controllers. However, the system may as well be integrated into robot and robot controller systems.

In a second aspect, the invention provides a robot system comprising

- a robot comprising a robot arm with a plurality of moveable arm elements arranged between a base and a TCP, wherein the TCP is arranged for mounting of a tool, such as a welding tool,

- a controller arranged to control movement of the robot, and

- a robot learning system according to the first aspect.

Especially, the robot may be arranged to be controlled in response to the data logged during the learning session, and to perform a welding process accordingly, such as the robot being a dedicated welding robot. Apart from a robot for welding, the robot may be arranged for e.g. cutting, spraying, painting, milling, drawing, grinding, and chamfering. Further, the invention may also be suitable for robot learning of pick-and-place robots, where the robot learning can be used to demonstrate the continuous trajectory motion sequence involved in a pick-and-place routine.

In a third aspect, the invention provides a method for trajectory learning of an associated robot by demonstration from a user to make the TCP follow a desired trajectory during a real-time learning session, the method comprising

- controlling movement of position in space of the TCP of the robot by operating by the user's one hand a first control element connected to the controller of the robot,

- controlling orientation of the TCP of the robot by operating by the user's second hand a second control element connected to the controller of the robot, and

- logging data in response to the user's operation of the first and second control elements during the learning session, so as to allow later control of the robot in response to the data logged during the learning session.

In a fourth aspect, the invention provides a computer program product having instructions which when executed cause a computing device comprising a processor or a computing system, such as the apparatus according to the first or second aspect, to perform the method according to the third aspect. Especially, the computer program product may be one of: a part of a robot controller software product, and a stand-alone software product for a general computer. It is to be understood that the computer program product comprises instructions in the form of program code which may be implemented on any processing platform, e.g. a robot controller, or a general processor in a computer device, e.g. in the form of a downloadable application for a programmable device.

In a fifth aspect, the invention provides a computer readable medium having stored thereon a computer program product according to the fourth aspect.

It is appreciated that the same advantages and embodiments described for the first aspect apply as well for the second, third, and fourth aspects. Further, it is appreciated that the described embodiments can be intermixed in any way between all the mentioned aspects.

BRIEF DESCRIPTION OF THE FIGURES

The invention will now be described in more detail with regard to the accompanying figures, of which

Figs. 1a and 1b illustrate an overview of a system embodiment during a learning session (1a), and in an execution session (1b) where the robot performs a prerecorded task,

Figs. 2a and 2b illustrate the concept of independently controlling orientation or rotation (2a) and position (2b) of the robot with two joysticks,

Fig. 3 illustrates movements of a three-axis joystick, as an example of a control element,

Fig. 4 illustrates an example of a 3D probe sensor for placing at the TCP during the learning session,

Fig. 5 illustrates an example of a flow chart for a robot control loop state diagram during learning (to the left), and during execution (to the right),

Figs. 6a, 6b, and 6c show photos of an implementation with two three-axis joysticks mounted on a six-axis CRS A465 robot, and Figs. 6a and 6c further show a probe sensor mounted on the TCP of the robot, and

Fig. 7 illustrates steps of a method embodiment.

The figures illustrate specific ways of implementing the present invention and are not to be construed as being limiting to other possible embodiments falling within the scope of the attached claim set.

DETAILED DESCRIPTION OF THE INVENTION

Fig. 1a shows basic parts of a robot learning system embodiment during a real-time trajectory learning session, while Fig. 1b shows the same robot RB in an execution process, e.g. a welding process, based on data DT logged during the learning session.

The robot RB has in the example an arm formed by 5 joint arm elements between a base and a tool center point TCP. A controller CTL of the robot serves to control actuators for moving the arm elements. A probe sensor PS with a probe tip PT is mounted on the TCP of the robot RB, so as to sense preferably at least a distance between the probe tip PT and the TCP during the real-time trajectory learning session. This allows contact with the probe tip during the trajectory following on an object, e.g. a welding trace. The user interface comprises two three-axis joysticks J1, J2 both mounted on the robot arm, namely on the TCP, and on an elbow between two arm elements, a suitable distance (e.g. 30-80 cm) away from each other to allow the user to comfortably operate both joysticks J1, J2 simultaneously. A specific example of a low cost joystick is Apem 3140SAL600, which is based on Hall effect sensing technology.

In Fig. 1a, a human user (not shown) operates the two joysticks J1, J2 simultaneously, with a hand on each, to cause the robot RB to move the position and orientation of its TCP so that the probe tip PT stays in contact with a desired trajectory on an object. The splitting of the position and orientation tasks between the user's two hands, especially in combination with the mounting points on the robot, provides an intuitive user interface. Preferably, the probe sensor PS is arranged such that the probe tip PT is in flexible connection with the TCP, thus allowing even easier following of a trajectory with a complicated shape. The joystick J1 at the TCP controls the position of the TCP, and thus also the position of the probe tip PT, while the other joystick J2 at the elbow controls the orientation, i.e. rotation, of the TCP, and thus also the orientation or tilting of the probe sensor PS. This provides the user with intuitive control of the robot movements while following e.g. a welding trace in real-time. The two joysticks J1, J2 are arranged to generate respective electrical input signals P_IN, R_IN to the controller CTL of the robot RB, which generates control signals C_S to control actuators in the robot RB accordingly. Further, an electrical signal PTL_IN from the probe sensor PS is also applied to the controller CTL of the robot, with the purpose of logging data indicative of at least a distance between the probe tip PT and the TCP.

In the embodiment shown in Figs. 1a and 1b, the signals P_IN, R_IN, PTL_IN from the joysticks J1, J2 and the probe sensor PS are applied to the processor in the controller CTL of the robot RB, while control data DT are stored for later use, i.e. an indirect way of storing the signals P_IN, R_IN, PTL_IN from the joysticks J1, J2 and the probe sensor PS. However, these signals P_IN, R_IN, PTL_IN may alternatively be logged directly by a separate processor, e.g. a laptop computer. In both cases, the purpose is to sample the signals P_IN, R_IN, PTL_IN with a suitable time and magnitude resolution, and preferably to store such data DT in a digital format on a storage medium STM for later use, so as to allow later control of the robot RB in response to the data DT logged during the learning session.
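The sampling and logging described above can be sketched as a simple loop. This is only an illustrative sketch: the sampling rate, the record format, and the helper names `read_adc` and `stop_requested` are assumptions for the example, not taken from the patent.

```python
import time

SAMPLE_RATE_HZ = 100  # assumed value; the text only requires a "suitable" resolution

def log_learning_session(read_adc, storage, stop_requested):
    """Sample the joystick and probe signals and append them to storage.

    read_adc() is a hypothetical function returning the three analog
    input signals (P_IN, R_IN, PTL_IN); stop_requested() signals the
    end of the learning session.
    """
    period = 1.0 / SAMPLE_RATE_HZ
    while not stop_requested():
        p_in, r_in, ptl_in = read_adc()
        # One timestamped record per sample, for later playback
        storage.append({"t": time.monotonic(), "P_IN": p_in,
                        "R_IN": r_in, "PTL_IN": ptl_in})
        time.sleep(period)
```

The stored records can then be replayed to the controller CTL in the same order, as described for Fig. 1b.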

In Fig. 1b, data DT stored on a storage medium STM during the real-time trajectory learning are played back to the controller CTL of the robot RB, which now has a process tool PTL mounted on its TCP. The joysticks are now not in use, since the robot RB is controlled by the stored data DT so as to cause the process tool PTL to follow the learned trajectory with respect to both position in space and orientation, i.e. tilting.

In the system, the key elements are the two three-axis joysticks J1, J2, mounted in this example on a six-axis industrial robot body, and a preferably three-axis probe sensor PS. In the shown example, the position controlling joystick J1 is mounted near the sixth joint of the robot, i.e. near the TCP. The orientation controlling joystick J2 is mounted near the robot elbow, or at a distance of about a human shoulder width, e.g. around 45-80 centimeters, in case of a large industrial robot. The three-axis probe sensor PS is mounted on the TCP.

Figs. 2a and 2b show the intuitive learning method by means of the two joysticks J1, J2 mounted on the robot as in Figs. 1a and 1b, i.e. the relation between the joystick J1, J2 axes and the robot Cartesian axes is shown. The probe sensor tip is shown in contact with the surface of a work piece WP.

The three sketches of Fig. 2a illustrate orientation control by means of joystick J2, where the arrows on the sketches to the right indicate the preferred effect of movement of the orientation joystick J2 on the robot: the x and y movable axes of the joystick J2 control the x and y rotation of the TCP, respectively. By rotating the joystick J2 handle, the user can control the z axis rotation.

In Fig. 2b, the arrows on the sketches to the right indicate the preferred effect of movement of the position joystick J1 on the robot TCP: the x and y movable axes of the joystick J1 control the x and y position of the TCP, respectively. By rotating the joystick J1 handle, the user can control the z axis position.

Fig. 3 illustrates operation of a preferred control element, namely a three-axis joystick, seen from above. The x and y axes control the x and y position or rotation of the robot, while the z axis position or orientation is controlled by rotating the joystick handle. In an alternative implementation, the z axis may be controlled by moving the joystick handle up and down. Preferably, the joysticks are designed to return to a neutral center position when no force is applied to them, and preferably the robot is designed to stop moving when this neutral center position is sensed.
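The axis mapping and stop-at-neutral behaviour can be sketched as follows. The deadzone threshold and gain values below are illustrative assumptions; the patent only specifies that the robot should stop when the neutral center position is sensed.

```python
DEADZONE = 0.05  # assumed: deflection below this counts as the neutral center
GAIN = 0.002     # assumed: motion increment per sample at full deflection

def joystick_to_delta(x, y, twist):
    """Map a three-axis joystick reading (each axis in -1..1) to a motion
    increment (dx, dy, dz) for the TCP: x/y deflection drives the x/y axis,
    handle twist drives the z axis. Inside the deadzone the increment is
    zero, so the robot holds still at the neutral center position."""
    def scaled(v):
        return 0.0 if abs(v) < DEADZONE else v * GAIN
    return (scaled(x), scaled(y), scaled(twist))
```

The same mapping applies to joystick J1 (position increments) and joystick J2 (rotation increments); only the interpretation of the output differs.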

Fig. 4 shows a probe sensor PS embodiment. Preferably, the length of the probe sensor PS is designed to be similar, or at least approximately similar, to the length of the process tool to be used on the robot in the working situation. The design is based on a two-axis joystick sensor 2A_S with a linearly movable probe rod P_R mounted on it, and with a sensor (not visible) capable of sensing the linear displacement of the probe rod P_R. All axes must have a spring design, so that each axis returns to its center position when no force is applied to the probe. The probe tip PT must be designed for the actual workpiece and must therefore be easily replaceable; e.g. the probe tip PT may be formed as a small ball, to be able to easily slide on a surface.

Equation (1) describes the transformation from the end of the last joint to the probe tip, defined for i ∈ {1, ..., n}, where n is the number of records. The transformation is calculated using the rotation and position parts in a homogeneous coordinate description. Equation (2) describes the transformation from the robot base to the end of the last joint for the i'th record. This equation is the input to the robot controller for moving the robot around the scene. Equation (3) describes the transformation from the robot base to the end of the probe sensor for the i'th record. During the learning process, the two transformations on the right-hand side of equation (3) are recorded, for calculating the desired path during execution as described in equation (4):

T_{6Joint,i}^{Probe}   (1)

T_{Base,i}^{6Joint}   (2)

T_{Base,i}^{Probe} = T_{Base,i}^{6Joint} · T_{6Joint,i}^{Probe}   (3)

T_{Base,i}^{6Joint} = T_{Base,i}^{Probe} · (T_{6Joint}^{Tool})^{-1}   (4)
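The transformation chain of equations (1)-(4) can be sketched with 4x4 homogeneous matrices. This is a minimal numerical sketch, assuming numpy; the function names below are illustrative, not from the patent.

```python
import numpy as np

def homogeneous(R, p):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a position p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def probe_in_base(T_base_6joint, T_6joint_probe):
    """Equation (3): base -> probe tip, by chaining base -> 6th joint
    (eq. (2)) with 6th joint -> probe tip (eq. (1))."""
    return T_base_6joint @ T_6joint_probe

def execution_target(T_base_probe, T_6joint_tool):
    """Equation (4): the 6th-joint pose that places the process tool tip
    on the recorded probe path, given the fixed 6th joint -> tool transform."""
    return T_base_probe @ np.linalg.inv(T_6joint_tool)
```

During learning, `probe_in_base` is recorded for each sample; during execution, `execution_target` compensates for the difference between the probe sensor and the mounted process tool.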

Fig. 5 shows examples of basic parts of the algorithm, which can be implemented in software. To the left, a state diagram for the learning part is shown, and to the right, a state diagram for the execution part is shown, running until (user) stop STP.

The learning part, to the left, comprises a main loop while the learning is active. #1: Loop start. #2: Read, from an analog-to-digital converter, data from the position joystick J1, the rotation joystick J2, and the probe sensor PS. #3: Calculate the transformation from the probe sensor PS according to equation (1), using a feedback-assisted algorithm that adjusts the three probe sensor PS axes so they stay close to their center positions during the learning process, thereby avoiding the outer mechanical limits of the probe sensor PS. Calculate the transformations based thereon according to equations (1) and (2), and save them. #4: Determine the probe sensor correction. #5: Calculate the new transformation from equation (2) with the adjustment calculated in state #4, send it to the robot controller CTL, and return to state #1.

The execution part, to the right, comprises a main loop until (user) stop STP. #6: Calculate equation (4), send it to the robot controller CTL, and return to state #6, as long as there are recorded transformations left.
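The two state loops of Fig. 5 can be sketched as follows. The helper callables (`read_inputs`, `transforms_from`, `probe_correction`, `send_to_controller`) are hypothetical stand-ins for the A/D reading, equation evaluation, and controller interface; they are injected here so the control flow of states #1-#6 stands on its own.

```python
import numpy as np

def learning_loop(read_inputs, transforms_from, probe_correction,
                  send_to_controller, records, learning_active):
    """States #1-#5: sample inputs, record the transformations of
    eqs. (1) and (2), apply a probe-centring correction, command the robot."""
    while learning_active():                                  # state #1
        p_in, r_in, probe = read_inputs()                     # state #2
        T_6j_probe, T_base_6j = transforms_from(p_in, r_in, probe)  # state #3
        records.append((T_base_6j, T_6j_probe))
        T_cmd = T_base_6j @ probe_correction(probe)           # states #4 and #5
        send_to_controller(T_cmd)

def execution_loop(records, T_6joint_tool, send_to_controller):
    """State #6: replay each recorded transformation via eq. (4),
    compensating for the mounted process tool."""
    for T_base_6j, T_6j_probe in records:
        T_base_probe = T_base_6j @ T_6j_probe                 # eq. (3)
        send_to_controller(T_base_probe @ np.linalg.inv(T_6joint_tool))
```

In a real controller the learning loop would run at the sampling rate of the A/D converter, and the execution loop at the playback rate of the stored data DT.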

Figs. 6a, 6b, and 6c show photos of a prototype implementation of the invention on a CRS robot with two joysticks J1, J2 and a probe sensor PS mounted thereon. An A/D board is used for logging the analog signals from the joysticks J1, J2 and the probe sensor PS into a separate computer running the regulation software. However, as mentioned, a modern robot controller and program can log the data directly, without the need for a separate computer.

Fig. 7 illustrates steps of a method embodiment. First, the robot is controlled to start S_T1 with the tip of a probe sensor mounted on its TCP placed at one end of the desired trajectory to be followed. In real-time, the robot is then controlled to follow the trajectory by simultaneously:

C_P controlling movement of position in space of the tool center point of the robot by operating by the user's one hand a first three-axis joystick mounted near the robot TCP and connected to the controller of the robot, and

C_OR controlling orientation of the tool center point of the robot by operating by the user's second hand a second three-axis joystick mounted on another part of the robot, and connected to the controller of the robot.

This is performed until the end E_T2 of the desired trajectory has been reached. During the controlling, C_P, C_OR, a logging of data LG_D from the two three-axis joysticks as well as from the probe sensor is performed with a suitable temporal and spatial precision in response to the user's operation of the joysticks during the learning session. Finally, a calculation CLC_T of a transformation is performed, so as to adapt the robot movement to a process tool with a given length, and this transformation calculation is performed in response to data logged from the probe sensor.

To sum up: the invention provides a system and method for trajectory learning of an associated robot by demonstration from a user. The method comprises operation by the user in a real-time learning session: controlling movement of position in space of the tool center point (TCP) of the robot by operating by the user's one hand a first control element, e.g. a joystick mounted near the TCP, and connected to the controller of the robot. Further, controlling orientation of the TCP by operating by the user's second hand a second control element, e.g. a second joystick, connected to the controller of the robot. Data are logged in real-time during the learning session in response to the user's operation of the control elements, so as to allow later control of the robot in response to the data logged during the learning session. The splitting of position and orientation control between the user's two hands provides an intuitive control of the robot, which allows precise and fast trajectory tracking even in complicated geometries. Preferably, this is even more pronounced in embodiments where the control elements are two three-axis joysticks mounted at different positions on the robot.

Although the present invention has been described in connection with the specified embodiments, it should not be construed as being in any way limited to the presented examples. The scope of the present invention is to be interpreted in the light of the accompanying claim set. In the context of the claims, the terms "including" or "includes" do not exclude other possible elements or steps. Also, the mentioning of references such as "a" or "an" etc. should not be construed as excluding a plurality. The use of reference signs in the claims with respect to elements indicated in the figures shall also not be construed as limiting the scope of the invention. Furthermore, individual features mentioned in different claims may possibly be advantageously combined, and the mentioning of these features in different claims does not exclude that a combination of features is possible and advantageous.