
Title:
SYSTEM AND METHOD FOR DETERMINING A TRANSFORMATION REPRESENTATION
Document Type and Number:
WIPO Patent Application WO/2019/120481
Kind Code:
A1
Abstract:
A method and a system for determining a transformation representation between a robot coordinate system of a robot and an augmented reality, AR, coordinate system of an AR system. The method comprises providing (S1) a graphical representation of a virtual frame via a display of the AR system, the virtual frame being a coordinate system known with respect to the AR coordinate system, the graphical representation of the virtual frame being provided in a work space of the robot shown via the display. The method also comprises positioning (S3) a movable element of the robot in at least three different points with respect to the virtual frame, determining (S4) the position of the movable element with respect to the robot coordinate system at each of the respective at least three points, and determining (S5) the virtual frame in the robot coordinate system based on the determined positions.

Inventors:
WILLFÖR PER (SE)
LINDQVIST DANIEL (SE)
Application Number:
PCT/EP2017/083473
Publication Date:
June 27, 2019
Filing Date:
December 19, 2017
Assignee:
ABB SCHWEIZ AG (CH)
International Classes:
B25J9/16; G05B19/408; G06T7/80
Foreign References:
EP2783812A2 (2014-10-01)
US20140222025A1 (2014-08-07)
EP1849566A2 (2007-10-31)
US20080150965A1 (2008-06-26)
US20170115656A1 (2017-04-27)
Other References:
A. AMERI E ET AL.: "Augmented Reality Meets Industry: Interactive Robot Programming", 2010, MÄLARDALEN UNIVERSITY
Attorney, Agent or Firm:
SAVELA, Reino (SE)
Claims

1. A method for determining a transformation representation between a robot coordinate system of a robot (1) and an augmented reality, AR, coordinate system of an AR system (10), the method comprising:

- providing (S1) a graphical representation of a virtual frame via a display (12) of the AR system, the virtual frame being a coordinate system known with respect to the AR coordinate system, the graphical representation being provided in a work space of the robot shown via the display;

- positioning (S3) a movable element (5) of the robot in at least three different points with respect to the virtual frame;

- determining (S4) the position of the movable element with respect to the robot coordinate system at each of the respective at least three points;

- determining (S5) the virtual frame in the robot coordinate system based on the determined positions.

2. The method according to claim 1, comprising determining (S6) the transformation representation based on the determined virtual frame in the robot coordinate system and the virtual frame in the AR coordinate system.

3. The method according to claim 1 or 2, comprising providing (S1) a movable virtual frame.

4. The method according to claim 3, comprising fixating (S2) the movable virtual frame to the AR coordinate system.

5. The method according to any of the preceding claims, comprising determining (S4) the position of the movable element as a response to user input.

6. The method according to any of the preceding claims, comprising positioning (S3) the movable element of the robot in two points along a first axis of the virtual frame, and in one point along a second axis of the virtual frame.

7. The method according to any of the preceding claims, comprising positioning (S3) the movable element of the robot in at least four points with respect to the virtual frame, of which three span a plane and the fourth is not in the plane, and determining (S4) the position of the movable element with respect to the robot coordinate system in the respective at least four positions.

8. The method according to any of the preceding claims, comprising storing (S7) the transformation representation in a memory accessible by the AR device.

9. A system (20) comprising

- a robot (1);

- an AR system (10) including a display (12) and a sensor system (13);

- a control system (4) configured to

- provide a graphical representation of a virtual frame via the display (12) of the AR system (10), the virtual frame being a coordinate system known with respect to the AR coordinate system, and the graphical representation of the virtual frame is provided in a work space of the robot (1) shown via the display (12);

- determine a position of a movable element (5) of the robot (1) with respect to the robot coordinate system, at respective at least three different points with respect to the virtual frame;

- determine the virtual frame in the robot coordinate system based on the determined positions.

10. The system according to claim 9, wherein the control system (4) is configured to determine the transformation representation based on the determined virtual frame in the robot coordinate system and the virtual frame in the AR coordinate system.

11. The system (20) according to claim 9 or 10, wherein the AR system (10) is configured to provide a movable virtual frame via the display (12) of the AR system (10).

12. The system (20) according to any of the claims 9 to 11, wherein the AR system (10) is configured to fixate the movable graphical representation of the virtual frame to the AR coordinate system.

13. The system (20) according to any of the claims 9 to 12, wherein the control system (4) is configured to determine the position of the movable element (5) as a response to user input.

14. The system (20) according to any of the claims 9 to 13, wherein the control system (4) is configured to determine a position of the movable element (5) of the robot (1) with respect to the robot coordinate system, in two points along a first axis of the virtual frame, and in one point along a second axis of the virtual frame.

15. The system (20) according to any of the claims 9 to 14, wherein the control system (4) is configured to determine the position of the movable element (5) with respect to the robot coordinate system in respective at least four positions with respect to the virtual frame, of which three span a plane and the fourth is not in the plane.

Description:
System and method for determining a transformation representation

Technical field

The present disclosure relates to technology for robots, and in particular to how to determine a transformation representation between coordinate systems of an augmented reality system and a robot.

Background

A robot is often used in operations requiring precise positioning. The coordinates of an object that the robot should perform work on must then be related to the coordinate system of the robot.

Robot programs are frequently taught by teaching methods where the robot is moved manually to target positions and the corresponding path points are stored. The robot is controlled manually, for example by a user using movement keys, a 2D joystick or a 6D mouse. Path points of a simulation program do not always correspond to the real world, because geometries of the simulation model and the real world differ as a result of manufacturing tolerances, requiring the program to be taught in situ. The robot is only allowed to move very slowly when a person is in the vicinity, making the teaching slow.

The teaching requires experience and is often performed by trained robot engineers. The engineer does not get any feedback on the teaching before the program is executed, making corrections of the program complicated.

Robots have historically been used in industry for performing the same kinds of tasks. However, with new applications, the demand for simplified user interfaces for programming and controlling robots has grown in order to increase usability.

Augmented reality (AR) provides a simplified and intuitive framework for controlling robots. AR is a term used for overlaying computer-generated graphics, text and three-dimensional (3D) models on e.g. a video stream.

Virtual information is embedded onto an image or display of the real world, thereby augmenting the image or display with additional information. By means of AR, the user can ask the robot to perform an operation, and as a response get the robot's simulated plan for performing the command, see for example "Augmented Reality Meets Industry: Interactive Robot Programming", A. Ameri E. et al., Mälardalen University, Sweden, SIGRAD 2010.

The AR system generally comprises a sensor system with a camera, a display and a processor. In order for the AR system to function properly, the coordinates of the sensor system should be related to the coordinate system of the robot.

A common approach to this problem is to use a physical object with a pattern recognizable by the AR system, e.g. a QR code that is located in the environment of the robot. Then the user has to calibrate the robot against the pattern. Another method is to have a physical object and a virtual representation of the physical object, and then move the virtual world so that the virtual object overlays the physical object. It is however hard to gain sufficient accuracy with this method.

Summary

It is thus an object of the disclosure to alleviate at least some of the drawbacks of the prior art. It is an object to provide a method for aligning a coordinate system of the AR system with a coordinate system of the robot which does not require any additional physical objects or markers for the calibration. It is a further object to provide a simple and user-friendly method for the aligning. It is a further object to provide a method for enabling the operator to visualize robot information with the AR system. It is a still further object to provide a method for enabling the operator to accurately instruct the robot via the AR system in a fast and simple way.

These objects and others are at least partly achieved by the method and the system according to the independent claims, and by the embodiments according to the dependent claims.

According to a first aspect, the disclosure relates to a method for determining a transformation representation between a robot coordinate system of a robot and an augmented reality, AR, coordinate system of an AR system. The method comprises providing a graphical representation of a virtual frame via a display of the AR system, the virtual frame being a coordinate system known with respect to the AR coordinate system, the graphical representation being provided in a work space of the robot shown via the display. The method further comprises positioning a movable element of the robot in at least three different points with respect to the virtual frame, determining the position of the movable element with respect to the robot coordinate system at each of the respective at least three points, and determining the virtual frame in the robot coordinate system based on the determined positions.

The method utilizes the robot to do high-accuracy calculations of where the coordinate system of the augmented reality system is located in relation to the robot coordinate system. The method does not require any additional physical object for the calibration and has high accuracy. The presented method is easy to perform and does not require any specialist knowledge.

According to some embodiments, the method comprises determining the transformation representation based on the determined virtual frame in the robot coordinate system and the virtual frame in the AR coordinate system. Thus, if the virtual frame does not directly correspond to the AR coordinate system, the position and rotation of the virtual frame in the AR coordinate system are considered.

According to some embodiments, the method comprises providing a movable virtual frame. Thereby, the placement of the virtual frame is facilitated as the user can look at the virtual frame while moving the virtual frame to an appropriate position in the work space of the robot.

According to some embodiments, the method comprises fixating the movable virtual frame to the coordinate system of the AR system. Thereby, there is no risk that the user accidentally moves the virtual frame during performance of the method.

According to some embodiments, the method comprises determining the position of the movable element as a response to user input. Thus, the user can indicate when the robot is accurately positioned and which position should be retrieved.

According to some embodiments, the method comprises positioning the movable element of the robot in two points along a first axis of the virtual frame, and in one point along a second axis of the virtual frame. Thus, three points may be sufficient to determine a transformation representation.

According to some embodiments, the method comprises positioning the movable element of the robot in at least four points with respect to the virtual frame, of which three span a plane and the fourth is not in the plane, and determining the position of the movable element with respect to the robot coordinate system in the respective at least four positions. In order to increase the accuracy of the transformation representation, more points for the robot to be positioned in may be used.

According to some embodiments, the method comprises storing the transformation representation in a memory accessible by the AR device. Then, the AR device may directly transform positions in the AR coordinate system into corresponding positions in the robot coordinate system using the transformation representation.

According to some embodiments, the method comprises determining a transformation representation comprising a translational representation. Thereby, translational transformation between the coordinate systems can be handled.

According to some embodiments, the method comprises determining a transformation representation comprising a rotational representation. Thereby, rotational transformation between the coordinate systems can be handled.

According to some embodiments, at least one of the at least three points in the virtual frame is located a predetermined length from a point of origin of the graphical representation of the virtual frame, and the method comprises determining a transformation representation comprising a scaling representation using the predetermined length. As a length is known, this length can be used as a reference for scaling up or scaling down distances in the robot coordinate system.

According to a second aspect, the disclosure relates to a system for determining a transformation representation between a robot coordinate system and an augmented reality, AR, coordinate system. The system comprises a robot, an AR system including a display and a sensor system, and a control system. The control system is configured to provide a graphical representation of a virtual frame via the display of the AR system. The virtual frame is a coordinate system known with respect to the AR coordinate system. The graphical representation of the virtual frame is provided in the work space of the robot shown via the display. The control system is further configured to determine a position of a movable element of the robot with respect to the robot coordinate system, at respective at least three different points with respect to the virtual frame, and to determine the virtual frame in the robot coordinate system based on the determined positions. Thus, an easy-to-use method for aligning the AR coordinate system with the robot coordinate system is provided.

According to some embodiments, the control system is configured to determine the transformation representation based on the determined virtual frame in the robot coordinate system and the virtual frame in the AR coordinate system.

According to some embodiments, the AR system is configured to provide a movable graphical representation of the virtual frame via the display of the AR system.

According to some embodiments, the AR system is configured to fixate the movable graphical representation of the virtual frame to the AR coordinate system.

According to some embodiments, the control system is configured to determine the position of the movable element as a response to user input.

According to some embodiments, the control system is configured to determine a position of the movable element of the robot with respect to the robot coordinate system, in two points along a first axis of the virtual frame, and in one point along a second axis of the virtual frame.

According to some embodiments, the control system is configured to determine the position of the movable element with respect to the robot coordinate system in respective at least four positions with respect to the virtual frame, of which three span a plane and the fourth is not in the plane.

According to some embodiments, the control system is configured to store the transformation representation in a memory accessible by the AR device.

According to some embodiments, the control system is configured to determine a transformation representation comprising a translational representation.

According to some embodiments, the control system is configured to determine a transformation representation comprising a rotational representation.

According to some embodiments, at least one of the at least three points in the virtual frame is located a predetermined length from a point of origin of the graphical representation of the virtual frame, wherein the control system is configured to determine a transformation representation comprising a scaling representation using the predetermined length.

According to a third aspect, the disclosure relates to a computer program comprising instructions which, when the program is executed by a control system, cause the control system to carry out the method as described herein.

According to a fourth aspect, the disclosure relates to a computer-readable medium comprising instructions which, when executed by a control system, cause the control system to carry out the method as described herein.

Brief description of the drawings

Fig. 1 illustrates a system for determining a transformation representation according to some embodiments.

Fig. 2 illustrates an AR system according to some embodiments.

Fig. 3 illustrates a method for determining a transformation representation according to some embodiments.

Detailed description

A robot is often related to information expressed in the coordinate system of the robot. Such information may be robot paths in a robot program, work objects, excluded work space etc. This information is for example included in a robot controller. Also, to be able to control a robot accurately, the robot has to be able to relate given commands to its own coordinate system. By using augmented reality (AR), the information can be expressed virtually in the robot coordinate system together with the real robot via the display of the AR system. Also, the robot can be programmed and controlled via the AR system. As the user relates the robot to the environment via the AR system, the robot coordinate system has to be aligned with the coordinate system of the AR system. The disclosure provides a method and system for aligning these coordinate systems in a straightforward, user friendly way.

Fig. 1 illustrates a system 20 where the proposed method for determining a transformation representation between a robot coordinate system and an augmented reality, AR, coordinate system is implemented. The system 20 comprises a robot 1, an AR system 10 and a control system 4.

The depicted robot 1 is an industrial six degrees of freedom (DOF) robot. However, other kinds of robots with fewer or more DOFs may also be used in the system. An industrial robot is defined in ISO 8373:2012 to be an automatically controlled, reprogrammable, multipurpose manipulator programmable in three or more axes, which can be either fixed in place or mobile for use in industrial automation applications.

The robot 1 includes a kinematic chain including a plurality of axes. The base of the robot 1 is the link zero and represents the base coordinate system with the axes Xi, Yi, Zi with origin Oi of the robot 1. The base coordinate system of the robot 1 is defined by the robot manufacturer. The last link or wrist of the robot 1 has a mechanical interface where a tool or end effector is attached. The mechanical interface represents a mechanical interface coordinate system with the axes Xm, Ym, Zm with origin Om of the robot 1. This coordinate system is depicted in Fig. 1. The mechanical interface coordinate system may also be referred to as the tool frame. The tool or end effector represents a tool center point (TCP). The described coordinate systems are defined with the orthogonal right hand rule. Alternatively, the described coordinate systems are defined with the orthogonal left hand rule. The rotation about the respective axis X, Y, Z is called roll (A or γ), pitch (B or β) or yaw (C or α). The position and rotation of the TCP or mechanical interface is for example defined in the base coordinate system by appropriate transformation between the coordinate systems. The mechanical interface or end effector may herein be referred to as a movable element 5 of the robot 1.
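
As an illustration only (not part of the application), the roll, pitch and yaw angles named above can be combined into a single rotation matrix. The sketch below assumes numpy and one common composition order (roll about X, then pitch about Y, then yaw about Z); both the function names and the chosen order are assumptions, not something specified by the disclosure.

```python
# Illustrative sketch only: per-axis rotations and one common way to compose them.
import numpy as np

def rot_x(roll):
    c, s = np.cos(roll), np.sin(roll)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(pitch):
    c, s = np.cos(pitch), np.sin(pitch)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rpy_to_rotation(roll, pitch, yaw):
    # Rotate about X, then Y, then Z (right-hand rule); other conventions exist.
    return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
```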

The robot 1 is able to perform work in a three dimensional space around the robot 1 referred to as the work space of the robot 1. The work space is defined to be the total volume swept out by the end effector as the robot executes all possible motions.

The real world of the robot is defined in a world coordinate system with origin Ow and axes Xw, Yw and Zw. The world coordinate system may coincide with the base coordinate system. A work object may be defined in a work object coordinate system.

The AR system 10 includes an AR device 11 with a display 12, an AR control unit 18 and a sensor system 13. The AR device 11 may be a wireless device such as a mobile phone, a headset, wearable glasses, a tablet or other kind of computer. The AR device 11 is configured for wireless communication, and includes a wireless communication unit for sending and receiving information wirelessly.

The sensor system 13 may be included in the AR device 11, or partly be arranged at the robot 1. The sensor system 13 includes at least one camera. In one example embodiment, the camera is a three-dimensional (3D) camera, and the image captured with this camera is a 3D image. The 3D images may be shown on the display 12 such that the user can see the real world through the AR device 11. The camera is for example a stereo camera or a plenoptic camera, based on light field sensors. The camera may be a video camera. The images captured by the video camera may be shown as a video stream on the display 12. In case the sensor system 13 includes several cameras, the cameras may be of the same kind or different kinds. The sensor system 13 may also include other kinds of sensors such as laser sensors, infrared sensors, accelerometers, gyroscopes, magnetometers, microphones etc. The sensor system 13 provides data such that the AR system 10, by means of the data from the sensors, can know what the real world, that is the environment, looks like and where the AR device 11 is in the real world. The sensor system 13 may also collect data from gestures and voice commands from the user, for controlling the AR system 10. The AR system 10 may also include loudspeakers for enhancing the AR effect by sounds, or simply for messages.

The display 12 is AR-compatible, and is thus designed to be used in AR applications. The display 12 may be configured to display the sensor output from the sensor system 13. For example, the display 12 may display a video stream of images captured by the sensor system 13. The display 12 may also comprise an image synthesizing part that synthesizes graphics, text or other kinds of information with images captured by the sensor system 13. The display 12 is for example a head mounted display, a display of a portable device or a display of another kind of computer. The display 12 may be a touch sensitive display, such that the user can interact with the control system 4 by touching the display 12. The display 12 may be part of a graphical user interface (GUI), controlled by the AR control unit or a separate GUI control unit. In one embodiment, the display 12 is translucent, and configured to display pictures, e.g. graphics, overlaid the real world shown through the display 12. For example, the display 12 comprises lenses for projecting pictures overlaid the real world to the user. One example of such an AR system 10 is the "HoloLens" provided by Microsoft. In such a wearable AR system, actually a headset, the user sees the real world through clear lenses and images are projected in front of the user. The lenses, one for each eye, are made of three layers of glass (blue, green and red). A light engine above the lenses projects light into the headset and TIR (Total Internal Reflection) makes the light bounce internally in the lenses. The light is coupled out and exits the lenses towards the eyes. The user will then see the projected light as pictures overlaid on the real world. The effect is often referred to as holographic, but is actually an optical combiner effect. The AR device of such a system will include a micro display, imaging optics, waveguide, combiner and gratings. The gratings cause effective beam expansion so that the image is visible over a wider area when looking at the out-coupled light than when looking at the light engine directly. Data from the sensor system is used to create a digital model of the real world and to know where the AR device is in the digital model. It can then be determined where graphics such as a virtual frame should appear, e.g. in the work space of the robot 1. In this disclosure, graphical objects also encompass so-called "holograms" or projections created with an AR system such as the "HoloLens".

The AR system 10 defines one or several internal coordinate systems. For example, the sensor system 13 may define a coordinate system of a lens of a sensor device such as a camera. The AR system 10 may also define coordinate systems of other sensor devices of the sensor system 13. The AR system 10 may additionally combine information from several sensors, and define a common coordinate system of the several sensors. However, in this disclosure, it is assumed that transformation between such sensor coordinate systems is known, and that the images and overlaid graphics etc. are known in an AR coordinate system of the AR system 10. Thus, the position and orientation of any information displayed on or via the display 12 is known in the coordinate system of the AR device 11, herein referred to as the AR coordinate system with origin Oa and axes Xa, Ya and Za. The AR system 10 defines the AR coordinate system to be in a fixed relation to the real world. For example, the AR coordinate system is a coordinate system of a digital model of the real world created by the AR system 10.

The control system 4 may comprise one or several control units, a robot controller 7 and/or one or several computers 6. The control system 4 ensures that the robot 1 and the AR system 10 can exchange information such as data and instructions, and is capable of controlling the robot 1, the AR device 11 and the sensor system 13. The robot controller 7 is configured to control the robot 1 according to a program, and comprises all necessary modules and information needed to control the robot 1. The control system 4 further comprises an AR control unit 18. The AR control unit 18 may be included in the AR device 11. The AR control unit 18 includes all necessary modules and information for controlling the AR device 11 and the sensor system 13, and to display information on or via the display 12. For example, the AR control unit 18 is configured to display the information retrieved by the sensor system 13 on the display 12. Further, the AR control unit 18 is configured to provide graphics, e.g. 2D or 3D graphics, on the display 12 that are overlaid the information retrieved by the sensor system 13, or the translucent display 12. Thereby, an augmented reality scene of the environment may be shown via the display 12. The AR control unit 18 is further configured to send information, such as positions, via the wireless communication device, to another computer 6 or control unit of the control system 4. The AR control unit 18 is further configured to receive information, such as the transformation representation, via the wireless communication device, from another computer 6 or control unit of the control system 4. The other computer 6 or control unit may be connected by wire to the robot controller 7, and configured to exchange information with the robot controller 7. In one embodiment, the other computer 6 is not present and the AR control unit 18 and the robot controller 7 are configured to communicate directly, by wire or wirelessly. In a further embodiment, at least one of the devices in the control system 4 is configured to communicate with a cloud service for exchange of data etc.

The system 20 further comprises a control device 17 arranged to receive input from a user. For example, the control device 17 is a jogging device or a computer mouse such as a 2D or 6D mouse, for controlling the robot 1 to certain points or along a certain path. The control device 17 may also be referred to as a teach pendant. In Figs. 1 and 2, the AR device 11 comprises a built-in control device 17 in the form of a jogging device for controlling the robot 1. The control device 17 is also configured to receive input such as a confirmation that a certain retrieved position and/or coordinate of the robot 1 shall be determined and optionally saved/recorded. A confirmation may include that a button is pushed down, e.g. that the jogging device is pushed down. The control device 17 may alternatively be a separate device that is connected to the control system 4 by wire or wirelessly, for example to the robot controller 7.

The AR system 10 is generally configured to provide a graphical representation of a virtual frame, e.g. on the display 12, as illustrated in Figs. 1 and 2. Fig. 2 illustrates the AR device 11 in greater detail. The virtual frame is a coordinate system known with respect to the AR coordinate system, thus a 3D virtual frame. The virtual frame here has the origin Ov and axes Xv, Yv and Zv. In some embodiments, it coincides with the AR coordinate system. The virtual frame is here illustrated with three explicit coordinate axes, but alternatively, the virtual frame may be visualized as a box or similar, where the edges of the box make up the coordinate axes. In Figs. 1 and 2, an image or snapshot of a video stream of the robot 1 and the work space of the robot 1 is presented on the display 12. The virtual frame is superimposed over the image on the display 12. The work space and the robot 1 shown on the display 12 are thus an image of the real work space and the real robot 1. The graphical representation of the virtual frame is thus provided in an image of the work space of the robot 1 displayed on the display 12. Alternatively, the display 12 is translucent and the virtual frame is overlaid the translucent display 12.

In an exemplary embodiment, the AR system 10 is configured to provide a movable graphical representation of the virtual frame. The virtual frame may thus be moved in relation to the real world by the user by, for example, dragging and dropping the virtual frame on the display 12. The control system 4 is configured to receive an input from the user that the virtual frame should be fixed. In response to such input, the AR system 10 is configured to fixate the movable graphical representation of the virtual frame to the AR coordinate system. The virtual frame is in one embodiment fixed to the AR coordinate system by the AR system 10 by default. To fixate the virtual frame means to lock the virtual frame from further movement by the user. To fixate may also be referred to as "pinning", thus to "pin" the virtual frame to the real world or an object in the real world. The real world is here the world as the AR system 10 pictures it.

The control system 4 is configured to determine a position of the movable element 5 of the robot 1 with respect to a robot coordinate system. The robot coordinate system is for example the base coordinate system, a work object coordinate system or the world coordinate system. The control system 4 may further be configured to determine a rotation of the movable element 5 of the robot 1 with respect to the robot coordinate system.

The control system 4 is configured to receive input from the user that the position of the movable element 5 should be determined, and optionally recorded or saved. The control system 4 is further configured to determine the position of the movable element 5 as a response to obtaining an input from a user. For example, the user may provide information to the control system 4 where the certain positions are in relation to the virtual frame, for example along a certain axis of the virtual frame, a point of origin etc. Especially, the control system 4 is configured to determine a position of the moveable element 5, with respect to the robot coordinate system, at at least three different points with respect to the virtual frame. In Fig. 2, an example of such three points P is illustrated in greater detail. The control system 4 is further configured to determine the virtual frame in the robot coordinate system based on the determined positions. Thus, the control system 4 will construct the virtual frame in the robot coordinate system. In the special case that the virtual frame coincides with the AR coordinate system, thus, the virtual frame has the same orientation and position as the AR coordinate system, the control system 4 may need to use only the virtual frame in the robot coordinate system for transformation purposes. The transformation representation will then be based on the position and orientation of the virtual frame in the robot coordinate system.

Alternatively, the control system 4 is configured to determine a transformation representation between the robot coordinate system and the AR coordinate system based on the determined virtual frame in the robot coordinate system and the virtual frame in the AR coordinate system. Thus, the control system 4 uses the position and rotation of the virtual frame in the AR coordinate system and the position and rotation of the virtual frame in the robot coordinate system to determine the transformation representation.
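
For illustration only, a minimal numpy sketch of how such a transformation representation could be composed, assuming that the pose of the virtual frame is available as a 4 x 4 homogeneous matrix in each coordinate system (mapping frame coordinates to robot coordinates and to AR coordinates, respectively); the function name is hypothetical and not from the application.

```python
import numpy as np

def robot_from_ar(frame_in_robot, frame_in_ar):
    """Compose a matrix that maps AR coordinates to robot coordinates, given
    the pose of the same virtual frame expressed in both systems as 4 x 4
    homogeneous matrices (frame -> robot and frame -> AR)."""
    frame_in_robot = np.asarray(frame_in_robot, dtype=float)
    frame_in_ar = np.asarray(frame_in_ar, dtype=float)
    # p_robot = frame_in_robot @ p_frame and p_ar = frame_in_ar @ p_frame,
    # so p_robot = frame_in_robot @ inv(frame_in_ar) @ p_ar.
    return frame_in_robot @ np.linalg.inv(frame_in_ar)
```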

By means of the transformation representation, virtual information can be expressed directly in the robot coordinate system together with the real robot via the AR system. Also, by means of the transformation representation, points in the AR coordinate system may be used to directly control the robot 1.

The control system 4 is configured to store the transformation representation in a memory accessible by the AR device 11. For example, the control system 4 is configured to provide the transformation representation to the AR device 11, and the AR device 11 stores the transformation representation in a memory of the AR device 11. The transformation representation may alternatively be saved to a memory of the robot controller 7, saved to a memory of an external computer 6 or saved to the cloud, i.e. a memory provided by a cloud service. The control system 4 is in one embodiment configured to determine a transformation representation comprising a translational matrix. In one embodiment, the control system 4 is configured to determine a transformation representation comprising a rotational matrix.

Generally, the control system 4 comprises one or several processors and one or several memories. A processor may be a processor module such as a CPU (Central Processing Unit) or a microcontroller. A memory may comprise a non-volatile memory and/or a removable memory such as a USB (Universal Serial Bus) memory stick. For example, each control unit, controller or computer comprises at least one memory and one processor. Each control unit, controller or computer also comprises appropriate I/O technology for handling communication.

The disclosure also relates to a method for determining a transformation representation between a robot coordinate system of a robot and an augmented reality, AR, coordinate system of an AR system. The robot and the AR system may thus be the robot 1 and the AR system depicted in Figs. 1 and 2. The steps of the method may be defined in a computer program, comprising instructions which, when the program is executed by the control system 4, cause the control system to carry out the method. The steps of the method may also be defined in a computer-readable medium, e.g. a removable memory such as a USB memory stick. The computer-readable medium then comprises instructions, which, when executed by a control system 4, cause the control system 4 to carry out the method.

The method will now be described with reference to the flow chart in Fig. 3, and to the Figs. 1 and 2. The control system 4 is configured to perform all embodiments of the method described in relation to Fig. 3. The method comprises providing S1 a graphical representation of a virtual frame via a display 12 of the AR system, for example on the display or projected via the display. The virtual frame is a coordinate system known with respect to the AR coordinate system.

The graphical representation of the virtual frame is provided in a work space of the robot 1 shown via the display 12. For example, the graphical representation of the virtual frame is provided in an image of the work space displayed on the display 12, or the graphical representation of the virtual frame is projected as a picture onto the real world seen through a translucent display.

In an example embodiment, the method comprises providing S1 a movable graphical representation of the virtual frame. The display 12 is for example a touch sensitive display, and the movable graphical representation of the virtual frame is for example movable by a drag and drop feature. Thus, the user may touch the location of the graphical representation of the virtual frame on the display 12 with a finger and move it by dragging the finger along the display 12 to a new location on the display 12. When the user removes the finger from the display 12, the graphical representation of the virtual frame will remain at the new location. This is useful if the robot 1 has a limited work space, and the user wants to move the virtual frame to a location where the robot 1 can reach the virtual frame. In one example embodiment, the method comprises fixating S2 the movable graphical representation of the virtual frame. For example, the user may fixate the graphical representation of the virtual frame by making a certain input to the display 12, e.g. by touching the display 12 twice with a short interval in between. Thereby, the user cannot move the virtual frame anymore without making another kind of certain input to the display 12. The user may alternatively move the virtual frame by making a certain gesture that is recognizable by the AR system 10. The display may then be a translucent display. As a response to a positive recognition of the gesture, the graphical representation of the virtual frame may follow the movement of the user's hand. By making another gesture recognizable by the AR system 10, the graphical representation of the virtual frame may become fixated. The gestures are recognized by the sensor system 13. The position and rotation of the virtual frame, and thus its axes, are at all times known with respect to the AR coordinate system. The virtual frame is thus fixed to the real world as pictured by the AR device 11 on the display 12, such that the AR device 11 can be moved around the virtual frame and picture the graphical representation of the virtual frame from different views.

When the user is satisfied with the location of the virtual frame, the user should position the real robot 1 in relation to the virtual frame. More in detail, the user should position a moveable element 5 of the robot 1 in relation to the virtual frame in a plurality of points. The moveable element 5 is for example the mechanical interface or an end effector of the robot 1. The method comprises positioning S3 the movable element 5 of the robot in at least three different points with respect to the virtual frame, see Fig. 2. The user should precisely position the movable element 5 of the robot 1 in the at least three points, by using the jogging device or other control feature for moving the robot 1. The method further comprises determining S4 the position of the movable element 5 with respect to the robot coordinate system at each of the respective at least three points. In one example embodiment, the determining S4 comprises determining the position of the movable element as a response to obtaining user input. In other words, when the user is satisfied with the position of the robot 1, the user makes an input to the control system 4, e.g. by pressing a button on the jogging device. The position of the robot 1 will then be determined by the robot controller 7, and optionally saved by the control system 4. Thus, the position of the moveable element 5 is determined by the robot controller 7 in the at least three positions, for example the position of the TCP with respect to the base coordinate system of the robot 1. By knowing what kind of points the at least three points are, e.g. what axis of the virtual frame they are taken along, the control system 4 can reconstruct the virtual frame in the coordinate system of the robot 1. Thus, the method further comprises determining S5 the virtual frame in the robot coordinate system based on the determined positions.

The determined virtual frame in the robot coordinate system may thus be directly used to determine the transformation representation, if the virtual frame in the AR coordinate system directly corresponds to the AR coordinate system. The virtual frame in the AR coordinate system would then be represented as a zero or null matrix. Additionally, the method may comprise determining S6 the transformation representation based on both the determined virtual frame in the robot coordinate system and the virtual frame in the AR coordinate system. Thereby, if the virtual frame in the AR coordinate system is not directly corresponding to the AR coordinate system, the position and rotation of the virtual frame in the AR coordinate system will be considered when determining the transformation representation.

In one example embodiment, two points are positioned along one axis of the virtual frame, and one point is positioned along a second axis of the virtual frame. The user should then position S3 the movable element of the robot in the two points along the first axis of the virtual frame, and in one point along a second axis of the virtual frame. The control system 4 is thus configured to determine a position of the movable element 5 of the robot 1 with respect to the robot coordinate system, in two points along a first axis of the virtual frame, and in one point along a second axis of the virtual frame. The control system 4 is thus configured to determine the position of the movable element 5 with respect to the robot coordinate system in respective at least three different points. The positions of a movable element 5 of the robot are then determined in these points. By aligning a line through the two positions, a first axis may be determined. By aligning another line through the third point and perpendicular to the first axis, a second axis can be determined. The intersection point between the two lines makes up the origin point. A third axis can be determined by arranging a third line orthogonal to both the first line and the second line, in their intersection point. Its direction can be determined by specifying beforehand that the coordinate system is defined with the orthogonal right hand rule. Thus, by determining three points, the virtual frame can be determined in the robot coordinate system.
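A minimal sketch of one way this three-point construction could be implemented, assuming numpy and that the three robot-side positions are available as 3D vectors; the helper name is illustrative, not from the application.

```python
import numpy as np

def frame_from_three_points(p1, p2, p3):
    """Reconstruct a frame (origin and orthonormal axes) from two points taken
    along its first axis (p1, p2) and one point taken along its second axis
    (p3). Returns a 4 x 4 homogeneous pose."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    x_axis = (p2 - p1) / np.linalg.norm(p2 - p1)           # line through the two first-axis points
    origin = p1 + np.dot(p3 - p1, x_axis) * x_axis          # intersection with the perpendicular through p3
    y_axis = (p3 - origin) / np.linalg.norm(p3 - origin)    # second axis, perpendicular to the first
    z_axis = np.cross(x_axis, y_axis)                       # third axis by the right-hand rule
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = x_axis, y_axis, z_axis, origin
    return pose
```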

In another embodiment, the method comprises positioning S3 the movable element of the robot in at least four points with respect to the virtual frame, of which three span a plane and the fourth is not in the plane, and determining S4 the position of the movable element with respect to the robot coordinate system in the respective at least four positions. The three points will then define a plane and thus define two axes. The third axis is defined by the fourth point, by aligning a line through the fourth point orthogonal to the plane. The direction of the third axis will then be given depending on which side of the plane the fourth point is. The intersection of the plane and the orthogonal fourth line will make up the point of origin for the virtual frame in the robot coordinate system. More points may be used to further improve the precision of the method. The control system 4 is thus configured to determine the position of the movable element 5 with respect to the robot coordinate system in respective at least four positions (or more) with respect to the virtual frame, of which three span a plane and the fourth is not in the plane.
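
A corresponding sketch for the four-point variant, under the same assumptions (numpy, hypothetical names); it follows one possible reading of the description above, with the origin taken as the projection of the fourth point onto the plane.

```python
import numpy as np

def frame_from_four_points(p1, p2, p3, p4):
    """Three points (p1..p3) span a plane and define two axes; the fourth
    point, off the plane, fixes the direction of the third axis."""
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    x_axis = (p2 - p1) / np.linalg.norm(p2 - p1)       # first axis in the plane
    normal = np.cross(p2 - p1, p3 - p1)
    normal /= np.linalg.norm(normal)
    if np.dot(p4 - p1, normal) < 0.0:                   # third axis points towards p4
        normal = -normal
    y_axis = np.cross(normal, x_axis)                   # second axis completes a right-handed set
    origin = p4 - np.dot(p4 - p1, normal) * normal      # projection of p4 onto the plane
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = x_axis, y_axis, normal, origin
    return pose
```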

In an example embodiment, the method comprises storing S7 the transformation representation in a memory accessible by the AR device 11. Thereby, the AR device 11 can be used to position and orient the robot 1 directly. Alternatively, the transformation representation is saved to the robot controller 7, an external computer 6 or the cloud. By means of the transformation representation, points in the AR coordinate system can be transformed to corresponding points in the robot coordinate system, and positions of the robot 1 in the robot coordinate system can be correctly transformed to points in the AR coordinate system.
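
As a usage sketch (hypothetical names, assuming a stored 4 x 4 transformation matrix that maps AR coordinates to robot coordinates), points could be transformed in either direction like this:

```python
import numpy as np

def ar_point_to_robot(point_ar, transform_ar_to_robot):
    """Map a 3D point given in the AR coordinate system to the robot
    coordinate system using a 4 x 4 transformation matrix."""
    M = np.asarray(transform_ar_to_robot, dtype=float)
    p = np.append(np.asarray(point_ar, dtype=float), 1.0)   # homogeneous coordinates
    return (M @ p)[:3]

def robot_point_to_ar(point_robot, transform_ar_to_robot):
    """The inverse mapping, from robot coordinates back to AR coordinates."""
    M = np.asarray(transform_ar_to_robot, dtype=float)
    p = np.append(np.asarray(point_robot, dtype=float), 1.0)
    return (np.linalg.inv(M) @ p)[:3]
```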

The method comprises, for example, determining S6 a transformation representation comprising a translational representation. In another example, the method comprises determining S6 a transformation representation comprising a rotational representation. The method may comprise determining S6 a transformation representation comprising both a translational representation and a rotational representation. The transformation representation may be a transformation matrix. The transformation matrix may thus include both translational and rotational matrices. The translational matrix is for example a 4 x 4 matrix, with the translation along X, Y and Z. The rotation may be represented by one matrix for each rotation around an axis, e.g. rotation around the X-axis, rotation around the Y-axis and rotation around the Z-axis. These matrices may of course be put together in one larger matrix.
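
A small illustrative sketch (numpy, hypothetical names) of the 4 x 4 translational matrix and of putting a rotation and a translation together into one larger matrix, as described above; the per-axis rotation matrices can be built as in the earlier roll/pitch/yaw sketch.

```python
import numpy as np

def translation_matrix(tx, ty, tz):
    """4 x 4 matrix holding only the translation along X, Y and Z."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def combined_matrix(rotation_3x3, translation_xyz):
    """Put a 3 x 3 rotation (e.g. composed from per-axis rotations) and a
    translation together in one 4 x 4 transformation matrix."""
    T = np.eye(4)
    T[:3, :3] = np.asarray(rotation_3x3, dtype=float)
    T[:3, 3] = translation_xyz
    return T
```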

In an additional example, the method comprises determining S6 a transformation representation comprising a scaling representation. The transformation matrix may thus also comprise a scaling part, e.g. a scaling matrix. For determining a scaling representation, in one embodiment, at least one point of the at least three points in the virtual frame is located a predetermined length from the point of origin of the virtual frame. The point will then define a distance from the point of origin. The distance will also be determined in the robot coordinate system, as the at least one point also will be determined there. As the distance to the point of the virtual frame is known, and the distance to the point in the robot coordinate system also is known, the transformation between the coordinate systems can be scaled. For example, each distance in the virtual frame may be scaled by a scaling factor into corresponding scaled distances in the robot coordinate system. The scaling factor may be an upscaling, thus a value greater than 1, or a downscaling, thus a value smaller than 1.
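
A minimal sketch of the scaling idea, assuming the predetermined length in the virtual frame and the corresponding robot-side positions are available (names hypothetical):

```python
import numpy as np

def scaling_factor(point_robot, origin_robot, known_virtual_length):
    """Ratio between the distance measured in the robot coordinate system and
    the predetermined length of the same distance in the virtual frame."""
    measured = np.linalg.norm(np.asarray(point_robot, dtype=float)
                              - np.asarray(origin_robot, dtype=float))
    return measured / known_virtual_length   # > 1 scales up, < 1 scales down
```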

An example of the method of transformation will now be explained with reference to the figures. An operator holds an AR compatible tablet in his hand, for example the AR device 11 illustrated in Figs. 1 and 2. The AR compatible tablet is configured for wireless communication, and includes a wireless communication unit for sending and receiving information. The AR compatible tablet initially estimates its position and orientation in an AR coordinate system defined to be in a fixed relation to the environment. This is done by using the sensor system of the AR compatible tablet to identify certain objects or features in the environment. The AR compatible tablet has an in-built control device 17 that enables control of a movable element 5 of the robot. The operator starts the program in which a transformation can be determined by touching an icon on the touch sensitive display of the tablet. In response, a graphical representation of a virtual frame is visualized on the display. The user holds the AR compatible tablet towards the working area of the robot 1. The camera of the AR compatible tablet pictures the scene of the work space and the robot 1 on the display. The graphical representation of the virtual frame is overlaid the scene. Thus, the graphical representation of the virtual frame is now shown together with the real environment on/through the AR compatible tablet. The operator moves the virtual frame until it is in an appropriate location in the work space of the robot 1, where the virtual frame is easily reached by the tool frame of the robot 1. The operator now gives input to the AR compatible tablet that the virtual frame should be fixed, e.g. by a double-click on the graphical representation of the virtual frame.

As a result, the virtual frame becomes fixed to a certain place in the work space of the robot. The operator can now move around with the AR compatible tablet, but the graphical representation of the virtual frame will stay at the same place on the display with respect to the real world. The operator now uses the built-in control device 17 to move the tool frame of the robot 1 to three points on the axes of the virtual frame, e.g. two points on the Xa-axis and one point on the Ya-axis. When the tool frame is positioned correctly at a point, the operator makes an input to the built-in control device, e.g. presses a button on it. In response to such an input, the position of the robot 1, thus the position of the tool frame with respect to the robot coordinate system, is determined by the robot controller 7 and may be saved. At each of the three points, the position of the tool frame is determined in the robot coordinate system. The positions are for example sent to, or retrieved by, a computer 6 connected to the robot controller 7. The computer 6 interpolates a straight line crossing the two positions retrieved from the corresponding two points of the Xa-axis of the virtual frame, whereby the Xa-axis is reconstructed in the robot coordinate system. The computer 6 further aligns a line perpendicular to the reconstructed Xa-axis and crossing the third point on the Ya-axis of the virtual frame, whereby the Ya-axis is reconstructed in the robot coordinate system. The computer 6 now constructs a Z-axis that is orthogonal to the reconstructed Xa-axis and the Ya-axis. Thereby, a reconstruction of the virtual frame in the robot coordinate system has been made.

The computer 6 retrieves the position and rotation of the virtual frame in the AR coordinate system, from the AR compatible tablet. The computer 6 now uses the position and rotation of the virtual frame in the AR coordinate system and the position and rotation of the virtual frame in the robot coordinate system, to calculate a transformation matrix. The transformation matrix comprises both a translational matrix and a rotational matrix. The transformation matrix is sent to the AR compatible tablet and saved in the memory of the same. The AR compatible tablet may now be able to visualize virtual information expressed in the robot coordinate system correctly in the AR device 11. Further, the AR compatible tablet may control the position of the robot 1 directly using the transformation matrix for transforming position and rotation in the AR coordinate system to position and rotation in the robot coordinate system. Correspondingly, the position and rotation of the robot 1 in the robot coordinate system may be transformed into position and rotation in the AR coordinate system.

The present invention is not limited to the above-described preferred embodiments. Various alternatives, modifications and equivalents may be used. Therefore, the above embodiments should not be taken as limiting the scope of the invention, which is defined by the appended claims.




 