Title:
ROBOT SYSTEM COMPRISING ROBOT AND METHOD OF CALIBRATING ROBOT
Document Type and Number:
WIPO Patent Application WO/2023/136743
Kind Code:
A1
Abstract:
A robot system comprising: a robot comprising a movable portion having a calibration marker and at least a first joint and a second joint; a camera to capture images of the calibration marker; and a computation unit comprising a robot control layer to control rotation of the first and the second joints based on a kinematic model of the robot to position the calibration marker in a plurality of positions; an image processing layer to process images received from the camera to determine, for each image, marker position data; an axes and reduction ratio estimation layer to establish a first error function based on the marker position data to determine a first rotational axis of the first joint; and a kinematic parameter update layer to establish a second error function based on at least the first rotational axis to determine a set of kinematic parameters for the kinematic model.

Inventors:
LOGINS ALVIS (RU)
PARAMONOV KIRILL BORISOVICH (RU)
YANG SHIHUA (CN)
Application Number:
PCT/RU2022/000015
Publication Date:
July 20, 2023
Filing Date:
January 17, 2022
Assignee:
HUAWEI TECH CO LTD (CN)
LOGINS ALVIS (RU)
International Classes:
B25J9/16
Other References:
JUSTIN W HART ET AL: "A robotic model of the Ecological Self", HUMANOID ROBOTS (HUMANOIDS), 2011 11TH IEEE-RAS INTERNATIONAL CONFERENCE ON, IEEE, 26 October 2011 (2011-10-26), pages 682 - 688, XP032070354, ISBN: 978-1-61284-866-2, DOI: 10.1109/HUMANOIDS.2011.6100897
"Fundamentals of manipulator calibration", 31 December 1991, WILEY-INTERSCIENCE JOHN WILEY & SONS, INC., ISBN: 978-0-471-50864-9, article BENJAMIN W. MOORING ET AL: "Fundamentals of manipulator calibration", XP055768850
HAIXIA WANG ET AL: "A vision-based fully-automatic calibration method for hand-eye serial robot", INDUSTRIAL ROBOT: AN INTERNATIONAL JOURNAL, vol. 42, no. 1, 19 January 2015 (2015-01-19), Bradford, pages 64 - 73, XP055711296, ISSN: 0143-991X, DOI: 10.1108/IR-06-2014-0352
Attorney, Agent or Firm:
SOJUZPATENT (RU)
Claims:
CLAIMS

1. A robot system (200) comprising: a robot (100A, 100B) comprising a movable portion (104), the movable portion (104) being provided with at least one calibration marker thereon and comprising at least a first joint and a second joint arranged at either end of at least one link, the first and second joints being configured to generate movement in the movable portion (104) controlled by a computation unit (108); a camera (110) configured to capture images of the at least one calibration marker in a plurality of positions; and the computation unit (108) comprising a robot control layer, an image processing layer, an axes and reduction ratio estimation layer, and a kinematic parameter update layer, wherein: the robot control layer is configured to control rotation of the first and the second joints based on a kinematic model of the robot (100A, 100B), wherein the robot control layer is configured to control rotation of the first joint and/or the second joint to position the at least one calibration marker in the plurality of positions; the image processing layer is configured to process images received from the camera (110) to determine, for each image, a marker position of the at least one calibration marker in three-dimensional space to output as marker position data; the axes and reduction ratio estimation layer is configured to establish a first error function based on the marker position data and determine a first rotational axis of the first joint based on the first error function; and the kinematic parameter update layer is configured to establish a second error function based on at least the first rotational axis and determine a set of kinematic parameters for the kinematic model based on the second error function.

2. The robot system (200) of claim 1, wherein the movable portion (104) further comprises a first servo motor (112A) coupled to the first joint through a first gear system and a second servo motor (112B) coupled to the second joint through a second gear system, and a first encoder (114A) provided to the first servo motor (112A) and a second encoder (114B) provided to the second servo motor (112B), the first and second encoder (114A, 114B) being configured to collect servo values of the first and second servo motors (112A, 112B) respectively, and wherein the computation unit (108) further comprises a clock synchronization layer configured to synchronise the camera (110) with the first and second encoders (114A, 114B) to timestamp images captured by the camera (110).

3. A method (300) of calibrating a robot (100A, 100B) comprising a movable portion (104), the movable portion (104) being provided with at least one calibration marker thereon and comprising at least a first joint and a second joint arranged at either end of at least one link, the first and second joints being configured to generate movement in the movable portion (104) controlled by a computation unit (108), the method (300) comprising: a data collection procedure comprising: positioning the at least one calibration marker in a plurality of positions by rotating the first joint and/or the second joint; obtaining a plurality of images of the at least one calibration marker, the plurality of images being captured by a camera (110) when the at least one calibration marker is at the plurality of positions; and a data processing procedure comprising: processing the plurality of images to determine, for each image of the plurality of images, a marker position of the at least one calibration marker in three-dimensional space to output as marker position data; establishing a first error function based on the marker position data; determining a first rotational axis of the first joint and a second rotational axis of the second joint based on the first error function; establishing a second error function based on at least the first rotational axis and the second rotational axis; and determining a set of kinematic parameters for a kinematic model of the robot (100A, 100B) based on the second error function.

4. The method (300) of claim 3, further comprising calibrating the kinematic model of the robot (100A, 100B) using the determined set of kinematic parameters.

5. The method (300) of claim 3 or 4, wherein the movable portion (104) further comprises a first servo motor (112A) configured to move the first joint and a second servo motor (112B) configured to move the second joint, and a first encoder (114A) provided to the first servo motor (112A) and a second encoder (114B) provided to the second servo motor (112B), wherein the camera (110) is synchronised with the first and second encoders (114A, 114B), and the data collection procedure further comprises timestamping each image of the plurality of images wherein a timestamp corresponds to a time step in a sequence of time steps.

6. The method (300) of claim 5, wherein the first and second encoders (114A, 114B) are configured to collect servo values of the first and second servo motors (112A, 112B) respectively, wherein the data processing procedure further comprises determining, for each of the first and second servo motors (112A, 112B), a plurality of servo motor angles for the respective servo motor based on the servo values collected by the first and second encoders (114A, 114B) respectively.

7. The method (300) of any of claims 3 to 6, wherein the marker position is determined, for each image of the plurality of images, by extracting a plurality of corner coordinates of the at least one calibration marker from the image, wherein each of the plurality of corner coordinates corresponds to a position substantially at a corner of the at least one calibration marker.

8. The method (300) of claim 7, wherein the marker position is determined, for each image of the plurality of images, by computing a marker center coordinate using the plurality of corner coordinates of the at least one calibration marker on the image and one or more intrinsic camera parameters.

9. The method (300) of claim 7 or 8, wherein processing the plurality of images further comprises, for each image of the plurality of images, determining a marker orientation of the at least one calibration marker in three-dimensional space based on the plurality of corner coordinates of the at least one calibration marker on the image and one or more intrinsic camera parameters, to output as the marker position data.

10. The method (300) of claim 9, wherein establishing a first error function comprises circular arc fitting of the marker center coordinate and the marker orientation of the at least one calibration marker that corresponds to a circular movement of the at least one marker with respect to a coordinate system of the camera (110).

11. The method (300) of claim 9 or 10, wherein the movable portion comprises a first servo motor (112A) coupled to the first joint through a first gear system having a gear reduction ratio, wherein determining a first rotational axis of the first joint comprises establishing the first error function based on the marker center coordinates and the marker orientations of the at least one calibration marker determined from the plurality of images, and applying an optimization method to the first error function to determine a joint rotation axis origin Q, a rotation axis vector V, and the gear reduction ratio r.

12. The method (300) of claim 11, wherein a first encoder (114A) is provided to the first servo motor (112A) configured to output encoder data comprising servo values of the first servo motor (112A), and wherein the gear reduction ratio is determined based on marker orientations of the at least one calibration marker determined from the plurality of images and the encoder data of the first servo motor (112A).

13. The method (300) of any of claims 3 to 12, wherein the at least one calibration marker comprises one or more marker key points, wherein a marker key point corresponds to a uniquely identifiable point on the at least one calibration marker, and the data processing procedure further comprises: extracting the one or more marker key points from the plurality of images; and determining one or more intrinsic camera parameters based on a relative position of the one or more marker key points.

14. The method (300) of any of claims 3 to 13, wherein the movable portion (104) comprises a plurality of J joints arranged to be rotated by a corresponding plurality of J servo motors, wherein positioning the at least one calibration marker (106A, 106B, 106C) comprises controlling the plurality of J servo motors to perform a set of motions to rotate the plurality of J joints.

15. The method (300) of claim 14, wherein, for 1 ≤ n ≤ J, motion of the nth joint, M_n, is determined by controlling the nth servo motor to rotate the nth joint while maintaining the remaining joints of the plurality of J joints stationary in a respective zero position.

16. The method (300) of claim 14 or 15, wherein at least one calibration marker is provided to one or more links each connecting two adjacent joints of the plurality of J joints and an encoder is provided to each servo motor of the plurality of J servo motors, each of the plurality of encoders being configured to output encoder data corresponding to a respective joint of the plurality of J joints, wherein the encoder data comprises servo values of the corresponding servo motor.

17. The method (300) of claim 16, wherein the set of motions M_1, M_2, ..., M_J is arbitrary and processing the plurality of images comprises, for each nth joint of the plurality of J joints, for each image of the plurality of images, transforming a position of the at least one calibration marker between the (n-1)th joint and the nth joint from a moving frame of the (n-1)th joint to a stationary frame of the (n-1)th joint in which the (n-1)th joint is stationary and in a zero position using the encoder data of the (n-1)th joint.

18. The method (300) of any of claims 3 to 17, wherein determining a set of kinematic parameters comprises determining a link length between the first joint and the second joint.

19. The method (300) of any of claims 3 to 18, wherein determining a set of kinematic parameters comprises determining a marker-base transformation between a frame of the at least one calibration marker in a zero position and a frame of a base (102) of the robot (100A, 100B) using the set of kinematic parameters, determining a marker-camera transformation between the frame of the at least one calibration marker in the zero position and a frame of the camera (110) by processing the plurality of images, and computing a camera-base transformation between the frame of the camera (110) and the frame of the robot base based on the determined marker-base transformation and the determined marker-camera transformation.

20. A non-transitory computer-readable storage medium comprising machine-readable code which, when executed by a processor, causes the processor to: position the at least one calibration marker in a plurality of positions by rotating the first joint and/or the second joint; obtain a plurality of images of the at least one calibration marker, the plurality of images being captured by a camera (110) when the at least one calibration marker is at the plurality of positions; process the plurality of images to determine, for each image of the plurality of images, a marker position of the at least one calibration marker in three-dimensional space to output as marker position data; establish a first error function based on the marker position data; determine a first rotational axis of the first joint and a second rotational axis of the second joint based on the first error function; establish a second error function based on at least the first rotational axis and the second rotational axis; and determine a set of kinematic parameters for a kinematic model of the robot (100A, 100B) based on the second error function.

Description:
ROBOT SYSTEM COMPRISING ROBOT AND METHOD OF CALIBRATING ROBOT

TECHNICAL FIELD

The present disclosure relates generally to the field of robotics and more specifically, to a robot system comprising a robot and a method of calibrating the robot.

BACKGROUND

In today's world, technology is growing rapidly. With this rapid growth, much of the work formerly done manually is now performed automatically, for example by a robot, and so the manufacture of robots has grown enormously. Various parameters require calibration when operating a robot, among them the kinematic parameters. The kinematic parameters of a robot describe the geometry of the motion of the robot and are used to relate the position of the end-effector of the robot to the position of the base of the robot via joint values. If the kinematic parameters are un-calibrated, they are subject to large errors due to manufacturing defects of the robot, continuous wear and tear during use, as well as accumulated zero position offset of the joints. As a result, a precise calibration method is desirable for a robot. A commonly used collection of kinematic parameters is the Denavit-Hartenberg (DH) parameters, which allow the position of an end-effector of the robot to be calculated from joint values. Another set of kinematic parameters includes the transformation between the camera and robot base coordinate systems, and the gear reduction ratios of the robot.
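
As a brief illustration of how DH parameters relate joint values to an end-effector pose, the following sketch composes the standard DH transform per joint; it is a minimal example for context, and the function names and parameter packing are illustrative rather than taken from the disclosure.

import numpy as np

def dh_transform(theta, d, a, alpha):
    # Standard Denavit-Hartenberg transform for one joint/link pair.
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_params):
    # End-effector pose in the base frame: dh_params holds (d, a, alpha)
    # per joint; theta is supplied by the joint value.
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # 4x4 homogeneous transform of the end-effector w.r.t. the base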

Conventionally, calibration was performed by a laser tracker system, where one part of the laser tracker system measures the body positions of the robot and another part measures an end-effector position of the robot. However, such calibration is complex and inefficient due to its manual nature, and the necessary equipment has a high cost. Another approach introduces a camera to the robot, where the camera is used in multiple positions of the robot. In addition, a marker is arranged sufficiently close to an optical line of the camera to calculate a correction vector for each position of the robot. However, all positions of the robot used for calibration are required to be sufficiently close to the optical line of the camera. This implies that the kinematic parameters can be calibrated only within a limited configuration space of the robot, which is a technical problem.

In light of the foregoing discussion, it is desirable to address the aforementioned drawbacks associated with calibration of robots.

SUMMARY

The present disclosure relates to a robot system comprising a robot and a method of calibrating the robot. An objective of the present disclosure is to address, at least partially, the problems encountered in the prior art by providing an improved robot system comprising a robot and an improved method for automatically calibrating the kinematic parameters of the robot, for example, to reduce position and orientation errors of the end-effector of the robot and to increase the accuracy with which the robot performs operations.

One or more objectives of the present disclosure are achieved through the enclosed independent claims. Optional or alternative implementations of the present disclosure are further defined in the dependent claims.

In one aspect, the present disclosure provides a robot system. The robot system includes a robot that further includes a movable portion, the movable portion being provided with at least one calibration marker thereon. The movable portion further includes at least a first joint and a second joint arranged at either end of at least one link, the first and second joints being configured to generate movement in the movable portion controlled by a computation unit. The robot system further includes a camera configured to capture images of the at least one calibration marker in a plurality of positions. The robot system further includes the computation unit comprising a robot control layer, an image processing layer, an axes and reduction ratio estimation layer, and a kinematic parameter update layer. The robot control layer is configured to control the rotation of the first and the second joints based on a kinematic model of the robot. Further, the robot control layer is configured to control the rotation of the first joint and/or the second joint to position the at least one calibration marker in the plurality of positions. The image processing layer is configured to process images received from the camera to determine, for each image, a marker position of the at least one calibration marker in three-dimensional space to output as marker position data. The axes and reduction ratio estimation layer is configured to establish a first error function based on the marker position data and determine a first rotational axis of the first joint based on the first error function. The kinematic parameter update layer is configured to establish a second error function based on at least the first rotational axis and determine a set of kinematic parameters for the kinematic model based on the second error function.

The robot system includes the movable portion of the robot, on which at least one calibration marker is provided. The calibration markers help in analysing the movement of the movable portion of the robot, and the camera is used to capture multiple positions of the calibration markers. Further, the movement in the movable portion is generated by the first and second joints, where one joint value differs from the zero position and the other joint values remain in the zero position throughout the data collection procedure. Moreover, the calculation of the rotation axes parameters and the calculation of the kinematic parameters involve minimization of an error function. Further, the first and second joints are controlled by the computation unit. The computation unit includes a robot control layer that controls the rotation of the first joint and/or the second joint to change the position of one of the calibration markers. The computation unit further includes the image processing layer that provides marker position data by analysing the images captured by the camera. The computation unit further includes the axes and reduction ratio estimation layer that analyses the marker position data to establish the first error function and determine a first rotational axis of the first joint based on the first error function. The computation unit further includes the kinematic parameter update layer that analyses the first rotational axis to establish the second error function and determine the set of kinematic parameters for the kinematic model based on the second error function. Beneficially, the robot system calibrates the kinematic parameters of the robot, which include robot link lengths, joint positions and orientations, zero positions of the robot joints, and reduction ratios per joint. With the calibrated parameters, the robot can achieve better accuracy of the end-effector position and orientation during motion execution.

In an implementation form, the movable portion further includes a first servo motor coupled to the first joint through a first gear system and a second servo motor coupled to the second joint through a second gear system, as well as a first encoder provided to the first servo motor and a second encoder provided to the second servo motor, the first and second encoders being configured to collect servo values of the first and second servo motors respectively. Further, the computation unit comprises a clock synchronization layer configured to synchronise the camera with the first and second encoders to timestamp images captured by the camera and, optionally, the rotation angles provided by the encoders.

The servo motors can rotate in clockwise and anticlockwise directions. Further, the gear systems transfer the motion of the servo motors to the joints.

In another aspect, the present disclosure provides a method of calibrating a robot including a movable portion with at least one calibration marker thereon. At least a first joint and a second joint are arranged at either end of at least one link, the first and second joints being configured to generate movement in the movable portion controlled by a computation unit. The method includes a data collection procedure that comprises positioning the at least one calibration marker in a plurality of positions by rotating the first joint and/or the second joint, and obtaining a plurality of images of the at least one calibration marker, the plurality of images being captured by a camera when the at least one calibration marker is at the plurality of positions. The method further includes a data processing procedure that comprises processing the plurality of images to determine, for each image of the plurality of images, a marker position (and optionally orientation) of the at least one calibration marker in three-dimensional space to output as marker position data; establishing a first error function based on the marker position data; determining a first rotational axis of the first joint and a second rotational axis of the second joint based on the first error function; establishing a second error function based on at least the first rotational axis and the second rotational axis; and determining a set of kinematic parameters for a kinematic model of the robot based on the second error function.

The method achieves at least some of the advantages and technical effects of the robot system of the present disclosure.

It should be noted that all devices, elements, circuitry, units and means described in the present application may be implemented in software or hardware elements or any kind of combination thereof. All steps which are performed by the various entities described in the present application, as well as the functionalities described to be performed by the various entities, are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following description of specific embodiments, a specific functionality or step to be performed by external entities is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it will be clear to a skilled person that these methods and functionalities can be implemented in respective software or hardware elements, or any kind of combination thereof. It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.

Additional aspects, advantages, features and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative implementations construed in conjunction with the appended claims that follow.

BRIEF DESCRIPTION OF THE DRAWINGS

The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.

Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:

FIGs. 1A and 1B are schematic illustrations of an architecture of a robot system, in accordance with different embodiments of the present disclosure;

FIG. 2 is a block diagram of a robot system, in accordance with an embodiment of the present disclosure; and

FIG. 3 is a flow chart of a method for calibrating a robot, in accordance with an embodiment of the present disclosure.

In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.

DETAILED DESCRIPTION OF EMBODIMENTS

The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible.

FIGs. 1A and 1B are schematic illustrations of an architecture of a robot for a robot system, in accordance with different embodiments of the present disclosure. FIGs. 1A and 1B are explained collectively, and FIG. 1B is an embodiment of FIG. 1A. With reference to FIGs. 1A and 1B, there is shown a robot, such as a robot 100A (of FIG. 1A) and a robot 100B (of FIG. 1B). The robot 100A includes a base 102, a movable portion 104, a calibration marker 106A, a computational unit 108, a camera 110, a first servo motor 112A, a second servo motor 112B, a first encoder 114A, and a second encoder 114B. The robot 100B includes all elements of the robot 100A (computational unit 108 not shown) and, in addition, further includes calibration markers 106B and 106C.

The robot 100A and the robot 100B correspond to a basic robot whose kinematic parameters are calibrated automatically by the robot system using an industrial camera, such as the camera 110, and at least one fiducial marker, such as the calibration marker 106A. The robot 100B further includes the calibration markers 106B and 106C.

The base 102 of the robot 100A is used to provide balance to the movable portion 104 of the robot 100A. The movable portion 104 is the arm of the robot 100A.

The calibration markers 106A, 106B, and 106C are used to provide marker position data. The calibration markers 106A, 106B, and 106C are attached to an end-effector of the robot 100A. At least one calibration marker is used for calibration purposes.

The computational unit 108 may include suitable logic, circuitry, interfaces, and/or code that are configured to control the robot 100A. Examples of implementation of the computational unit 108 may include but are not limited to a central data processing device, a microprocessor, a microcontroller, a complex instruction set computing (CISC) processor, an application-specific integrated circuit (ASIC) processor, a reduced instruction set computing (RISC) processor, a very long instruction word (VLIW) processor, a state machine, and other processors or control circuitry. The camera 110 may include suitable logic, circuitry, interfaces, and/or code that is configured to capture images of the position of the calibration marker 106A.

The first servo motor 112A and the second servo motor 112B are used to provide movement to the arm of the robot 100A. The first servo motor 112A and the second servo motor 112B are capable of moving in clockwise as well as anticlockwise directions. In addition, the first encoder 114A and the second encoder 114B are used to evaluate (e.g., by computing rotation angles) the rotation of the first servo motor 112A and the second servo motor 112B respectively.

In one aspect, there is provided a robot system. The robot system includes the robot 100A that further includes the movable portion 104. The movable portion 104 is provided with at least one calibration marker thereon. The robot system further includes at least a first joint and a second joint arranged at either end of at least one link, the first and second joints being configured to generate movement in the movable portion 104 controlled by the computation unit 108. In other words, the movable portion 104 corresponds to the arm of the robot 100A. The movable portion 104 has an arbitrary number of joints that are arranged at one of the ends of at least one link. The movement of at least one link is performed by rotation of at least one preceding joint. In addition, the computational unit 108 controls the at least first joint and the second joint to provide movement to the movable portion 104, which in turn moves the at least one calibration marker, such as the calibration marker 106A. In an implementation, the computational unit 108 includes a robot control layer that stores a pre-determined set of commands for the movements of the first joint and the second joint. It should be noted that the robot control layer and the computational unit 108 can be the same device or separate devices in a distributed system. Moreover, the robot control layer sends the pre-determined set of commands to the first servo motor 112A and the second servo motor 112B for generating movement in them.

In accordance with an embodiment, the first servo motor 112A is coupled to the first joint through a first gear system and the second servo motor 112B is coupled to the second joint through a second gear system. Moreover, the first encoder 114A is provided to the first servo motor 112A and the second encoder 114B is provided to the second servo motor 112B. Further, the first encoder 114A and the second encoder 114B are configured to collect servo values (e.g. angles) of the first servo motor 112A and the second servo motor 112B respectively. In addition, the computation unit 108 further includes a clock synchronization layer configured to synchronise the camera 110 with the first encoder 114A and the second encoder 114B to timestamp images captured by the camera 110 and the servo values. In other words, the at least first joint and the second joint are connected to the first servo motor 112A and the second servo motor 112B, respectively, through a gear system with a gear reduction ratio per joint as system parameters. The movement of the first servo motor 112A and the second servo motor 112B is transferred to the at least first joint and the second joint with some reduction ratio coefficient(s). Further, the clock synchronization layer present in the computation unit 108 performs the synchronization for the timestamps. Moreover, the clock synchronization layer synchronizes the camera 110 with the first encoder 114A and the second encoder 114B.

Further, the robot system comprises the camera 110 configured to capture images of the at least one calibration marker in a plurality of positions. In an implementation, the camera 110 includes a perception layer. Moreover, the synchronization of the camera 110 with the first encoder 114A and the second encoder 114B is performed by timestamping the images captured by the camera 110 and the encoder data. For example, during motion M_J the camera 110 captures multiple images of at least one calibration marker, such as the calibration marker 106A. For example, the camera 110 captures multiple images I_1 to I_T, where T is the total number of frames for motion M_J. Thereafter, the images are sent to the computational unit 108 for further processing. Moreover, during the motion (e.g., motions M_J and M_J+1), the first encoder 114A and the second encoder 114B collect information from the first servo motor 112A of the joint J and the second servo motor 112B of the joint J+1 and send the information to the computational unit 108. Further, the synchronization is performed between an encoder clock and a camera clock. In an example, angle values (e.g., angles α_1 to α_T) are available even without any synchronization. In an implementation, the synchronization is performed, and as a result time tags per image and per angle are obtained, for establishing a first error function.
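
One plausible realization of this synchronization, once encoder samples and camera frames carry timestamps from a common clock, is to interpolate the encoder angles to each image timestamp. The following is a minimal sketch under that assumption; the function and variable names are illustrative.

import numpy as np

def angles_at_image_times(enc_times, enc_angles, image_times):
    # Interpolate timestamped encoder angles to the camera frame timestamps,
    # yielding one servo angle alpha_t per image I_t.
    # Assumes enc_times is sorted and all times share a common clock.
    return np.interp(image_times, enc_times, enc_angles)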

The computation unit 108 of FIG. 1A further includes a robot control layer, an image processing layer, an axes and reduction ratio estimation layer, a kinematic parameter update layer, and a clock synchronization layer. The robot control layer is configured to control the rotation of the first and the second joints based on a kinematic model of the robot 100A; the robot control layer is configured to control the rotation of the first joint and/or the second joint to position the at least one calibration marker in the plurality of positions. In other words, there are different layers in the computation unit 108 that are used to perform the calibration process in the robot system. Further, the robot control layer provides movement to the arm of the robot 100A for changing the position of the at least one calibration marker, such as the calibration marker 106A. For example, several fiducial markers Δ_i, e.g., i ∈ {1, 2}, are statically attached in arbitrary locations and orientations on an end-effector of the robot arm. Further, at least one calibration marker, such as the calibration marker 106A, allows for the calculation of the position and orientation of the robot 100A in three-dimensional (3D) space, such as based on its image given the intrinsic parameters of the camera 110 and the real-world marker dimensions.

The image processing layer is configured to process images received from the camera 110 to determine, for each image, a marker position and (optionally) orientation of the at least one calibration marker in three-dimensional space to output as marker position data. In other words, the image processing obtains data from the camera 110 and outputs a sequence of position/orientation data for each observed marker with corresponding timestamps. The image processing layer of the computational unit 108 is responsible for the calculation of the position and orientation of the calibration markers 106A, 106B, 106C with respect to the camera 110, such as by using the image data I_1 to I_T. For example, without loss of generality, the calibration markers 106A, 106B, 106C, such as fiducial markers Δ_i, are visible and detectable on all images I_1 to I_T. In another case, the image processing layer chooses a subset of frames I_i1 to I_iT for which Δ_i is detected by the camera 110. Moreover, the intrinsic parameters of the camera 110 may be assumed to be correct and known. In an implementation, a camera calibration procedure is also performed for the intrinsic parameters of the camera 110. Thereafter, the fiducial marker detection on images I_1 to I_T is performed. The result of the detection procedure is a sequence C_1^i to C_T^i, where C_t^i is the collection of (x, y)-coordinates of the corners of marker Δ_i on the frame I_t. In an example, the pre-calculated intrinsic parameters of the camera 110 and the corner coordinates C_t^i are used to calculate the 3D position of the marker center P_t^i and the 3D marker orientation O_t^i with respect to the camera 110 via the perspective-n-point (PnP) method. Further, P_t^i is represented by a three-dimensional vector, and O_t^i by a 3×3 orthonormal basis matrix.
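
A minimal sketch of the per-image pose computation described above, assuming OpenCV's solvePnP and a square marker of known side length; the function and variable names are illustrative, not part of the disclosure.

import cv2
import numpy as np

def marker_pose(corners_px, marker_side, K, dist):
    # corners_px:  (4, 2) detected corner pixel coordinates C_t^i
    # marker_side: real-world edge length of the marker
    # K, dist:     intrinsic camera matrix and distortion coefficients
    h = marker_side / 2.0
    # Marker corners in the marker's own frame (z = 0 plane), in detection order.
    obj = np.array([[-h,  h, 0], [ h,  h, 0],
                    [ h, -h, 0], [-h, -h, 0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, corners_px.astype(np.float64), K, dist)
    if not ok:
        return None
    O_t, _ = cv2.Rodrigues(rvec)  # 3x3 orthonormal orientation basis O_t^i
    P_t = tvec.reshape(3)         # 3D marker center P_t^i in the camera frame
    return P_t, O_t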

Further, the axes and reduction ratio estimation layer is configured to establish a first error function based on the marker position data and determine a first rotational axis of the first joint based on the first error function. In other words, the axes and reduction ratio estimation layer is responsible for processing the marker position/orientation data and, further, the encoder data, and for outputting an estimated rotational axis for each joint. For example, from the geometrical properties of rotation, for each i, the 3D marker coordinates P_1^i, ..., P_T^i are arranged on a circle centered at Q_j, on a plane orthogonal to V_j, and according to angles r_j·α_1, ..., r_j·α_T; and the orientation bases O_1^i, ..., O_T^i revolve around V_j according to angles r_j·α_1, ..., r_j·α_T. Further, the rotation axis vector V_j, the rotation axis point Q_j, and the reduction ratio r_j are estimated. For example, an optimization method can be applied to a loss function L(V, Q, r) which reflects the measure of how much the geometrical properties above are not satisfied for a given 3D vector V, 3D point Q, and scalar r. The desired values are found by solving an optimization problem: (V_j, Q_j, r_j) = argmin L(V, Q, r).
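
The disclosure leaves the exact form of L(V, Q, r) open beyond the geometric properties above. The following is a minimal sketch of one possible loss, penalizing axial drift, radius drift, and mismatch between the swept angle and the encoder angle scaled by r; it is an illustrative assumption, not the claimed implementation.

import numpy as np
from scipy.optimize import minimize

def axis_loss(x, P, alpha):
    # x packs a candidate axis direction V, axis point Q, and reduction ratio r.
    V = x[:3] / np.linalg.norm(x[:3])
    Q, r = x[3:6], x[6]
    rel = P - Q                              # P: (T, 3) marker centers P_1..P_T
    axial = rel @ V                          # constant on a true circle
    radial = rel - np.outer(axial, V)        # in-plane component
    radius = np.linalg.norm(radial, axis=1)  # constant on a true circle
    # Orthonormal basis (e1, e2) of the plane orthogonal to V, for phase angles.
    helper = np.array([0.0, 0.0, 1.0]) if abs(V[2]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(V, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(V, e1)
    phi = np.unwrap(np.arctan2(radial @ e2, radial @ e1))
    # Residuals: axial drift + radius drift + angle mismatch against r * alpha.
    return (np.var(axial) + np.var(radius)
            + np.mean((np.diff(phi) - r * np.diff(alpha)) ** 2))

# (V_j, Q_j, r_j) = argmin L(V, Q, r), starting from a rough initial guess x0:
# result = minimize(axis_loss, x0, args=(P, alpha), method="Nelder-Mead")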

Further, the kinematic parameter update layer is configured to establish a second error function based on at least the first rotational axis and determine a set of kinematic parameters for the kinematic model based on the second error function. In other words, the kinematic parameter update layer is responsible for processing the estimated rotational axis for each joint and outputting calibrated kinematic parameters. For example, L denotes a collection of kinematic parameters subject to calibration. In an implementation, given the estimated rotational axes A_1 to A_J for each joint and the initial values L_0 of the kinematic parameters, the kinematic parameter update layer finds an estimate of the actual kinematic parameters that minimizes the difference between the rotational axes corresponding to the updated parameter values and the estimated axes A_1 to A_J. The difference between the two sets of axes is defined via a transformation operator T that applies a certain rotation and translation to the axes' position and orientation parameters. In an example, the transformation operator corresponds to the transformation between the camera 110 and the world coordinate systems. Moreover, the error function is defined in terms of d_1(A, B), the smallest distance between any point that belongs to the axis A and any point that belongs to the axis B, and d_2(A, B), a difference between the orientation vectors of the axes A and B. Further, minimization of the error functions D and D' is performed by using an optimization algorithm, for example, gradient descent. It should be noted that the present disclosure is not limited to a particular definition of the error functions D and D'. The updated kinematic parameters and the transformation operator T complete the kinematic parameter calibration procedure, which is beneficial as compared to the conventional approach. In an example, the transformation between the marker frame in zero position and the robot base frame, obtained via the calibrated kinematic parameters, together with the transformation between the marker frame in zero position and the camera frame, obtained via image processing, form the transformation between the camera 110 and the base 102, thus completing hand-eye calibration.
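
A minimal sketch of one common reading of d_1 and d_2 (closest distance between two infinite lines, and the angle between their direction vectors); since the disclosure explicitly leaves the exact definitions of D and D' open, this is an illustrative assumption.

import numpy as np

def d1(Q_a, V_a, Q_b, V_b):
    # Smallest distance between infinite lines A = (Q_a, V_a) and B = (Q_b, V_b).
    V_a = V_a / np.linalg.norm(V_a)
    V_b = V_b / np.linalg.norm(V_b)
    n = np.cross(V_a, V_b)
    d = Q_b - Q_a
    if np.linalg.norm(n) < 1e-9:                 # parallel axes
        return np.linalg.norm(d - (d @ V_a) * V_a)
    return abs(d @ n) / np.linalg.norm(n)

def d2(V_a, V_b):
    # Angle between axis orientation vectors (sign-insensitive).
    c = abs(V_a @ V_b) / (np.linalg.norm(V_a) * np.linalg.norm(V_b))
    return np.arccos(np.clip(c, -1.0, 1.0))

def axis_discrepancy(estimated_axes, model_axes):
    # Sum of positional and orientational mismatches over corresponding joints;
    # a term of this kind is minimized over the kinematic parameters and T.
    return sum(d1(Qe, Ve, Qm, Vm) + d2(Ve, Vm)
               for (Qe, Ve), (Qm, Vm) in zip(estimated_axes, model_axes))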

The robot system calibrates the kinematic parameters of the robot 100A, which include robot link lengths, joint positions and orientations, zero positions of the robot joints, and reduction ratios per joint. With the calibrated parameters, the robot 100A is able to achieve improved accuracy of the end-effector position and orientation during motion execution. According to an embodiment, the calibration procedure might be performed without encoder data. A plurality of images with markers is sufficient to provide the kinematic model parameters and hand-eye calibration.

According to an embodiment, a special fiducial marker type, for example, a 3x3 ChArUco marker, can be attached to the end-effector to automatically find the intrinsic parameters of the camera 110 using motion information from the marker's key points. In the embodiment, marker key points are extracted from the image data I_1 to I_T. Further, using the ground-truth information about the relative position of the marker key points, the intrinsic camera parameters of the camera 110 are obtained. Further, by using the obtained camera parameters and the positions of the marker key points on the image, the 3D position of the marker center P_t^i and the 3D marker orientation with respect to the camera 110 are calculated via the PnP method.
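
A minimal sketch of recovering intrinsics from marker key points whose relative layout is known, using OpenCV's generic calibration routine; a 3x3 ChArUco board would supply the detected key points and their ground-truth board coordinates. The function name is illustrative.

import cv2

def intrinsics_from_keypoints(object_pts, image_pts, image_size):
    # object_pts: per-image (N, 3) float32 arrays of ground-truth key point
    #             positions in the board frame (z = 0)
    # image_pts:  per-image (N, 2) float32 arrays of detected key points
    # image_size: (width, height) of the images
    rms, K, dist, _, _ = cv2.calibrateCamera(
        object_pts, image_pts, image_size, None, None)
    return K, dist, rms  # camera matrix, distortion coefficients, reprojection error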

According to another embodiment, at least one marker is attached to each link of the robot. This setup allows arbitrary movements of the robot during calibration and online kinematic parameter calibration. If arbitrary movements are applied, then encoder values may be required for the zero-position calibration. The present embodiment may be applied to any robot architecture with revolute joints, allowing for arbitrary link coupling. The difference between this embodiment and the main disclosure is that the robot arm movements may be arbitrary. In this embodiment, the camera 110 captures images I_1, ..., I_T, where T is the total number of frames. After that, collection of rotary encoder data for joints j = 1, ..., J is performed. Further, the positions and orientations of the markers attached to the links j = 1, ..., J are obtained. Further, rotational axis and reduction ratio estimation is performed on the data from the first joint to find the axis A_1 = (V_1, Q_1) and the reduction ratio r_1 for the first joint. For each time step, a 3D affine transformation is computed which corresponds to the rotation of 3D space around the axis A_1 = (V_1, Q_1) by the angle value given by the encoder data of the first joint, and the marker positions and orientations of the second joint are transformed accordingly. The new positions correspond to the second joint marker positions as if the first joint were stationary and in zero position. After that, rotational axis and reduction ratio estimation is performed on the transformed data for the second joint to find A_2 = (V_2, Q_2) and r_2. Next, the transformation which corresponds to the rotation of 3D space around the axis A_2 = (V_2, Q_2) by the corresponding angle value is found, and the positions and orientations for joint 3 are transformed. The same sequence of actions is repeated for the rest of the joints until the rotational axes in zero position are obtained. Furthermore, an adjustment procedure using the second error function to determine the kinematic parameters based on the rotational axes A_1, ..., A_J, as described above, is performed.
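
A minimal sketch of the per-frame transformation into a stationary zero-position frame, assuming the joint angle at each frame is the encoder angle scaled by the estimated reduction ratio; the helper names are illustrative.

import numpy as np

def rot_about_axis(V, Q, theta):
    # 4x4 affine transform: rotation of 3D space by theta about the axis (V, Q).
    V = V / np.linalg.norm(V)
    K = np.array([[0.0, -V[2], V[1]],
                  [V[2], 0.0, -V[0]],
                  [-V[1], V[0], 0.0]])
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)  # Rodrigues
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = Q - R @ Q   # rotate about the point Q on the axis, not the origin
    return T

def undo_joint_motion(P, O, alpha, V, Q, r):
    # Map a marker pose (P, O) observed at encoder angle alpha into the frame
    # where the preceding joint is stationary in its zero position.
    T = rot_about_axis(V, Q, -r * alpha)   # inverse of the joint's rotation
    return T[:3, :3] @ P + T[:3, 3], T[:3, :3] @ O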

In another embodiment, the camera 110 is attached to the end-effector, and a fiducial marker is attached to a stationary workspace visible to the camera 110, instead of to the end-effector. In this embodiment, image processing is performed to obtain a sequence of marker positions P_1, ..., P_T and a sequence of marker orientation bases O_1, ..., O_T. After that, rotational axes and reduction ratio estimation is performed by using the position-orientation data of the camera 110, such as −P_1, ..., −P_T and O_1^-1, ..., O_T^-1, and lastly kinematic parameter adjustment is performed.

In another embodiment, several cameras may be used in case spatial obstacles prevent the markers from being imaged. The camera 110 preferably has a static position. In another embodiment, the camera 110 may have a pre-defined trajectory in case of spatial obstacles, as an alternative to using several cameras.

FIG. 2 is a block diagram of a robot system, in accordance with an embodiment of the present disclosure. FIG. 2 is described in conjunction with elements from FIGs. 1A and 1B. With reference to FIG. 2, there is shown a robot system 200 for calibrating the robot. The robot system 200 includes a robot control 204, a robot servo 206, an encoder layer 208, a perception 210, an image processing 212, a clock synchronization 214, an axes and reduction ratio estimation 216, and a kinematic parameter update 218. The robot system 200 further includes the robot 100A (or 100B), the computation unit 108, and the camera 110. FIG. 2 is explained stepwise from steps 202A to 202K.

The robot control 204 of the robot system 200 is used to control the robot 100A by providing instructions for each movement. The encoder layer 208 in the robot system 200 is used to capture the angles and movement. The perception 210 may include the camera 110 for collecting visual information. The image processing 212 is a layer present in the computational unit 108 for analysing the images captured by the camera 110. The clock synchronization 214 is a layer present in the computational unit 108 for synchronizing the camera 110 with the encoder layer 208. Further, the axes and reduction ratio estimation 216 is used to provide an estimated rotational axis for each joint. Furthermore, the kinematic parameter update 218 is used to provide calibrated kinematic parameters.

At step 202A, the robot control 204 in the computational unit 108 provides commands for movement. The robot control 204 stores a pre-determined set of commands for the movements of the at least first joint and the second joint. It should be noted that the robot control 204 and the computational unit 108 might be the same device.

At step 202B, the robot control 204 sends the pre-determined set of commands to the robot servo 206 for generating movement in the robot servo 206.

At step 202C, the angles of the robot servo 206 are captured by the encoder layer 208. In other words, the at least first joint and the second joint are connected to the robot servo 206 through a gear system with a gear reduction ratio per joint as system parameters. The movement of the robot servo 206 is transferred to the at least first joint and the second joint with some reduction ratio coefficient. Further, the clock synchronization 214 performs the synchronization for the timestamps.

At step 202D, visual information is collected by the perception 210. At step 202E, the clock synchronization 214 synchronizes the camera 110 with the encoder layer 208. In other words, the perception 210 is part of the camera 110. The synchronization of the camera 110 with the encoder layer 208 is performed by timestamping the images captured by the camera 110. For example, during motion M_J the camera captures multiple images of the at least one calibration marker, I_1 to I_T, where T is the total number of frames for motion M_J. The images are sent to the computational unit 108 for processing. During the motion M_J, the encoder layer 208 collects information from the robot servo 206 of the joint J and sends the information to the computational unit 108. The clock synchronization 214 synchronizes the encoder clock and the camera clock to obtain servo motor angles α_1 to α_T.

At step 202F, the image processing 212 is performed, where the image processing 212 is responsible for obtaining data from the camera 110 and outputting a sequence of position/orientation data for each observed marker with corresponding timestamps. The image processing 212 of the computational unit 108 is responsible for the calculation of the position and orientation of the marker with respect to the camera 110 using the image data I_1 to I_T. Further, without loss of generality, the fiducial markers Δ_i are visible and detectable on all images I_1 to I_T. In another case, the image processing layer chooses a subset of frames I_i1 to I_iT for which Δ_i is detected. The intrinsic parameters of the camera 110 are assumed to be correct and known. Otherwise, a camera calibration procedure is performed. Further, the fiducial marker detection on images I_1 to I_T is performed. The result of the detection procedure is a sequence C_1^i to C_T^i, where C_t^i is the collection of (x, y)-coordinates of the corners of marker Δ_i on the frame I_t. Further, the pre-calculated intrinsic parameters of the camera 110 and the corner coordinates C_t^i are used to calculate the 3D position of a marker center P_t^i and the 3D marker orientation O_t^i with respect to the camera 110 via the perspective-n-point (PnP) method. In an example, the representation of the marker center P_t^i is a three-dimensional vector, and the representation of the 3D marker orientation O_t^i is a 3×3 orthonormal basis matrix.

At step 202G, the visual information collected by the perception 210 is timestamped by the clock synchronization 214. Collectively, at steps 202H and 202I, the estimation of axes and reduction ratios is performed after calculating the position and orientation of the calibration markers 106A, 106B, 106C with respect to the camera 110. The axes and reduction ratio estimation 216 is responsible for processing the calibration markers' position/orientation data and the encoder data, and for outputting an estimated rotational axis for each joint.

At step 202J, the estimated rotational axis for each joint is processed by the kinematic parameter update 218. The kinematic parameter update 218 is responsible for processing the estimated rotational axis for each joint and outputting calibrated kinematic parameters.

At step 202K, the procedure of calibrating the kinematic parameters is completed. The updated kinematic parameters and the transformation operator complete the kinematic parameter calibration procedure. The transformation between the marker frame in zero position and the robot base frame, obtained via the calibrated kinematic parameters, together with the transformation between the marker frame in zero position and the camera frame, obtained via image processing, form the transformation between the camera 110 and the base 102, thus completing hand-eye calibration.
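
A minimal sketch of this final composition, assuming both transforms are expressed as 4x4 homogeneous matrices with the naming convention T_a_b mapping coordinates from frame b to frame a; the function name is illustrative.

import numpy as np

def camera_to_base(T_base_marker, T_cam_marker):
    # T_base_marker: marker (zero position) -> robot base, from calibrated kinematics
    # T_cam_marker:  marker (zero position) -> camera, from image processing
    # Composing yields camera -> robot base, completing hand-eye calibration.
    return T_base_marker @ np.linalg.inv(T_cam_marker)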

The robot system 200 calibrates the kinematic parameters of a robot, which include robot link lengths, joint positions and orientations, zero positions of the robot joints, and reduction ratios per joint. With the calibrated parameters, the robot system 200 is able to achieve better accuracy of the end-effector position and orientation during motion execution.

FIG. 3 is a flow chart of a method for calibrating a robot, in accordance with an embodiment of the present disclosure. FIG. 3 is described in conjunction with elements from FIGs. 1A, 1B, and 2. FIG. 3 is explained stepwise from steps 302 to 304.

The method 300 is used to calibrate the robot 100A that comprises the movable portion 104, the movable portion 104 being provided with at least one calibration marker thereon and comprising at least a first joint and a second joint arranged at either end of at least one link, the first and second joints being configured to generate movement in the movable portion 104 controlled by a computation unit 108. In other words, the movable portion 104 forms the arm of the robot 100A. The movable portion 104 has an arbitrary number of joints. The joints are arranged at either end of at least one link. The movement of the at least one link is performed by rotation of adjacent joints. The computational unit 108 controls a first joint and a second joint for providing movement to the movable portion 104, and data is collected by evaluating the movement of the at least one calibration marker. For example, in FIG. 3 at step 302, the method 300 begins with executing a data collection procedure. The data collection procedure 302 includes, at step 302A, positioning the at least one calibration marker in a plurality of positions by rotating the first joint and/or the second joint. In other words, the data is collected by changing the position of the at least one calibration marker, such as the calibration marker 106A. Further, the first joint and/or the second joint may be connected to the first servo motor 112A and the second servo motor 112B through a gear system with system parameters including a gear reduction ratio per joint. The movement of the first servo motor 112A and the second servo motor 112B is transferred to the first joint and/or the second joint, and in turn, the movement is transferred to the at least one calibration marker.

The data collection procedure 302 further includes, at step 302B, obtaining a plurality of images of the at least one calibration marker. In an implementation, the plurality of images is captured by a camera 110 when the at least one calibration marker is at the plurality of positions. In other words, the synchronization of the camera 110 with the first encoder 114A and the second encoder 114B is performed by timestamping the images captured by the camera 110. For example, during motion M_J the camera 110 captures multiple images of the at least one calibration marker, I_1 to I_T, where T is the total number of frames for motion M_J. The images are sent to the computational unit 108 for processing. During the motion M_J, the first encoder 114A and the second encoder 114B collect information from the first servo motor 112A and the second servo motor 112B of the joint J and send the information to the computational unit 108. The method 300 further comprises, at step 304, executing a data processing procedure. In other words, the data collected by evaluating movements of the at least one calibration marker is further processed. The procedure 304 is further divided into steps 304A, 304B, 304C, 304D, and 304E.

The data processing procedure 304 comprises, at step 304A, processing the plurality of images to determine, for each image of the plurality of images, a marker position of the at least one calibration marker in three-dimensional space to output as marker position data. In other words, the image processing obtains data from the camera 110 and outputs a sequence of position/orientation data for each observed marker with corresponding timestamps. In an implementation, an image processing layer of the computational unit 108 is responsible for the calculation of the position and orientation of the marker with respect to the camera 110, such as by using the image data I_1 to I_T. In an implementation, without loss of generality, the fiducial markers Δ_i are visible and detectable on all images I_1 to I_T. In another implementation, the image processing layer chooses a subset of frames I_i1 to I_iT for which Δ_i is detected. The intrinsic parameters of the camera 110 are assumed to be correct and known. Otherwise, a camera calibration procedure is performed. Further, the fiducial marker detection on images I_1 to I_T is performed. The result of the detection procedure is a sequence C_1^i to C_T^i, where C_t^i is the collection of (x, y)-coordinates of the corners of marker Δ_i on the frame I_t. Further, the pre-calculated intrinsic parameters of the camera and the corner coordinates C_t^i are used to calculate the 3D position of the marker center P_t^i and the 3D marker orientation O_t^i with respect to the camera via the perspective-n-point (PnP) method. In an example, the representation of the marker center P_t^i is a three-dimensional vector, and the representation of the 3D marker orientation O_t^i is a 3×3 orthonormal basis matrix.

The data processing procedure 304 further comprises, at step 304B, establishing a first error function based on the marker position data. In other words, the marker position data is the data obtained by image processing. During the image processing, the position of the at least one calibration marker in the images captured by the camera 110 is marked. These marked positions of the at least one calibration marker are called marker position data. Further, the axes and reduction ratio estimation layer is responsible for processing the marker position/orientation data and the encoder data, and for outputting an estimated rotational axis for each joint. For example, from the geometrical properties of rotation, for each i, the 3D marker coordinates P_1^i to P_T^i are arranged on a circle centered at Q_j, on a plane orthogonal to V_j, and according to angles r_j·α_1, ..., r_j·α_T; and the orientation bases O_1^i, ..., O_T^i revolve around V_j according to angles r_j·α_1, ..., r_j·α_T. Further, the rotation axis vector V_j, the rotation axis point Q_j, and the reduction ratio r_j are estimated. For example, an optimization method can be applied to a loss function L(V, Q, r) which reflects the measure of how much the geometrical properties above are not satisfied for a given 3D vector V, 3D point Q, and scalar r. The desired values are found by solving an optimization problem: (V_j, Q_j, r_j) = argmin L(V, Q, r).

The data processing procedure 304 further comprises, at step 304C, determining a first rotational axis of the first joint and a second rotational axis of the second joint based on the first error function. In other words, the computed marker position/orientation data and the servo motor rotational data collected by the first encoder 114A and the second encoder 114B are processed to determine the rotational axis and the reduction ratio for each joint.

The data processing procedure 304 further comprises, at step 304D, establishing a second error function based on at least the first rotational axis and the second rotational axis. After the first rotational axis of the first joint and the second rotational axis of the second joint are determined based on the first error function, the second error function is established using the first rotational axis and the second rotational axis. In particular, one or several error functions may be established based on the calculated plurality of 3D positions of the markers. Further, parameters defining a set of rotational axes of the robot system 200 are calculated based on the error functions.

The data processing procedure 304 further comprises, at step 304E, determining a set of kinematic parameters for a kinematic model of the robot 100A based on the second error function. In other words, after the second error function is established, the set of kinematic parameters is determined using the second error function. In particular, the kinematic parameter update layer is responsible for processing the estimated rotational axis for each joint and outputting calibrated kinematic parameters.

In accordance with an embodiment, the method 300 comprises calibrating the kinematic model of the robot 100A using the determined set of kinematic parameters. In other words, the set of kinematic parameters is used to calibrate the kinematic model of the robot 100A. The calibration covers a robot link length, joint positions and orientations, the zero position of the robot joints, and the reduction ratios per joint. In accordance with an embodiment, the movable portion 104 further comprises a first servo motor 112A configured to move the first joint and a second servo motor 112B configured to move the second joint, a first encoder 114A provided to the first servo motor 112A, and a second encoder 114B provided to the second servo motor 112B. Further, the camera 110 is synchronized with the first encoder 114A and the second encoder 114B, and the data collection procedure further comprises time-stamping each image of the plurality of images, where a timestamp corresponds to a time step in a sequence of time steps. In other words, a consecutive circular movement of each joint is performed. During the movements of the joints, motor rotary encoder data is collected, and images of the marker, which is statically attached to the end-effector, are captured. Further, the image processing and the marker position and orientation calculation are performed. Furthermore, based on the computed marker position/orientation data and the motor rotational data collected by the first encoder 114A and the second encoder 114B, the information is processed to determine the rotational axis and the reduction ratio for each joint.

In accordance with an embodiment, the first encoder 114A and the second encoder 114B are configured to collect servo values of the first servo motor 112A and the second servo motor 112B respectively. The data processing procedure further comprises determining, for each of the first servo motor 112A and the second servo motor 112B, a plurality of servo motor angles for the respective servo motor based on the servo values collected by the first encoder 114A and the second encoder 114B respectively. In other words, at least the first joint and the second joint are connected to the first servo motor 112A and the second servo motor 112B through a gear system, with a gear reduction ratio per joint as a system parameter. The movement of the first servo motor 112A and the second servo motor 112B is transferred to at least the first joint and the second joint with some reduction ratio coefficient.
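For illustration, a hypothetical helper for this conversion is sketched below; the encoder resolution counts_per_rev and the reduction ratio r_j are assumptions made for the example only:

    import math

    def servo_angle(counts, counts_per_rev):
        # Angle at the motor shaft, in radians, from raw encoder counts.
        return 2.0 * math.pi * counts / counts_per_rev

    def joint_angle(counts, counts_per_rev, r_j):
        # The joint turns r_j times slower than the motor shaft.
        return servo_angle(counts, counts_per_rev) / r_j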

In accordance with an embodiment, the marker position is determined, for each image of the plurality of images, by extracting a plurality of corner coordinates of the at least one calibration marker from the image. Further, each of the plurality of corner coordinates corresponds to a position substantially at a corner of the at least one calibration marker. In other words, an image processing layer of the computational unit 108 is responsible for the calculation of the position and orientation of the calibration marker with respect to the camera 110 using the image data I_1 to I_T. Further, without loss of generality, fiducial markers Δ_i are visible and detectable on all images I_1 to I_T. In accordance with an embodiment, the marker position is determined, for each image of the plurality of images, by computing a marker center coordinate using the plurality of corner coordinates of the at least one calibration marker on the image and one or more intrinsic camera parameters. In other words, the at least one calibration marker performs a circular movement while the joint rotates. Further, the angular trajectory of the joints needs to be large enough for the camera 110 to capture a plurality of distinguishable positions of the markers Δ_i. Moreover, for each j = 1, ..., J, a sequence of data collection, image processing, rotational axis estimation, and reduction ratio estimation is repeated to obtain a rotational axis position and orientation estimation for each joint A_j, defined by a tuple (V_j, Q_j), where V_j is a 3D rotation vector that defines the orientation of the j-th rotation axis and Q_j is a 3D point that defines the position of the rotation axis, together with a reduction ratio r_j.

In accordance with an embodiment, the method 300 comprises processing the plurality of images, which further comprises, for each image of the plurality of images, determining a marker orientation of the at least one calibration marker in three-dimensional space based on the plurality of corner coordinates of the at least one calibration marker on the image and one or more intrinsic camera parameters, to output as the marker position data. For example, without loss of generality, fiducial markers Δ_i are visible and detectable on all images I_1 to I_T. In an implementation, the image processing layer chooses a subset of frames I_{i1} to I_{iT} for which Δ_i is detected. Intrinsic parameters of the camera 110 are assumed to be correct and known; otherwise, a camera calibration procedure is performed. Further, fiducial marker detection is performed on the images I_1 to I_T. The result of the detection procedure is a sequence C_1^i to C_T^i, where C_t^i is the collection of (x, y)-coordinates of the corners of marker Δ_i on the frame I_t. Further, the pre-calculated intrinsic parameters of the camera 110 and the corner coordinates C_t^i are used to calculate the 3D position of the marker center P_t^i and the 3D marker orientation O_t^i with respect to the camera via the perspective-n-point (PnP) method. In an example, the marker center P_t^i is represented as a three-dimensional vector, and the 3D marker orientation O_t^i is represented as a 3x3 orthonormal basis matrix.

In accordance with an embodiment, the method 300 comprises establishing a first error function that includes circular arc fitting of the marker center coordinate and the marker orientation of the at least one calibration marker, corresponding to a circular movement of the at least one calibration marker with respect to a coordinate system of the camera 110. In other words, the error function is based on circular arc fitting of the 3D positions of the markers and on 3D marker orientation fitting that corresponds to a circular movement of the at least one calibration marker with respect to another marker or the camera coordinate system, and optionally on the encoder data, fitting the encoder data to the 3D orientation of the at least one calibration marker.

In accordance with an embodiment, the movable portion comprises a first servo motor 112A coupled to the first joint through a first gear system having a gear reduction ratio. Further, the method 300 comprises determining a first rotational axis of the first joint, which comprises establishing the first error function based on the marker center coordinates and the marker orientations of the at least one calibration marker determined from the plurality of images, and applying an optimization method to the first error function to determine a joint rotation axis origin Q, a rotation axis vector V, and the gear reduction ratio r. In other words, the movable portion 104 has at least two joints. The joints are arranged at one of the ends of at least one link, and the movement of the at least one link is performed by rotation of at least one preceding joint. Further, at least the first joint and the second joint are connected to the first servo motor 112A and the second servo motor 112B through a gear system, with a gear reduction ratio per joint as a system parameter; the movement of the first servo motor 112A and the second servo motor 112B is transferred to at least the first joint and the second joint with some reduction ratio coefficient. Further, the axis and reduction ratio estimation layer is responsible for processing the marker position/orientation data and the encoder data and outputting an estimated rotational axis for each joint, and the kinematic parameter update layer is responsible for processing the estimated rotational axis for each joint and outputting calibrated kinematic parameters.

In accordance with an embodiment, the first encoder 114A is provided to the first servo motor 112A and configured to output encoder data comprising servo values of the first servo motor 112A, and the gear reduction ratio is determined based on the marker orientations of the at least one calibration marker determined from the plurality of images and the encoder data of the first servo motor 112A. In other words, the gear system, with a gear reduction ratio per joint as a system parameter, connects at least the first joint and the second joint to the first servo motor 112A and the second servo motor 112B. At least the first joint and the second joint are moved by the movement of the first servo motor 112A and the second servo motor 112B, the movement being transferred with some reduction ratio coefficient.

In accordance with an embodiment, the at least one calibration marker comprises one or more marker key points. Further, a marker key point corresponds to a uniquely identifiable point on the at least one calibration marker, and the data processing procedure further includes extracting the one or more marker key points from the plurality of images. For example, a special fiducial marker type, for example a 3x3 ChArUco marker, can be attached to the end-effector to automatically find the intrinsic parameters of the camera 110 using motion information from the marker's key points.

In accordance with an embodiment, the method 300 comprises determining one or more intrinsic camera parameters based on a relative position of the one or more marker key points. For example, the marker key points are extracted from the image data. Further, using the ground-truth information about the relative positions of the marker key points, the intrinsic camera parameters of the camera 110 are obtained. Further, using the obtained camera parameters and the positions of the marker key points on the image, the 3D position of the marker center P_t^i and the 3D marker orientation O_t^i with respect to the camera 110 are calculated via the PnP method.
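As a purely illustrative sketch of such an intrinsic calibration from ChArUco key points, the classic OpenCV contrib ArUco API could be used as below; note that the exact function names vary between OpenCV versions, and the board geometry chosen here is hypothetical:

    import cv2
    import numpy as np

    aruco = cv2.aruco
    dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)
    # 3x3 squares; square and marker side lengths in metres (hypothetical)
    board = aruco.CharucoBoard_create(3, 3, 0.04, 0.03, dictionary)

    def calibrate_from_frames(frames):
        all_corners, all_ids = [], []
        for img in frames:
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            corners, ids, _ = aruco.detectMarkers(gray, dictionary)
            if ids is None:
                continue
            n, ch_corners, ch_ids = aruco.interpolateCornersCharuco(
                corners, ids, gray, board)
            if n is not None and n >= 4:
                all_corners.append(ch_corners)
                all_ids.append(ch_ids)
        h, w = frames[0].shape[:2]
        rms, K, dist, _, _ = aruco.calibrateCameraCharuco(
            all_corners, all_ids, board, (w, h), None, None)
        return K, dist   # camera matrix and distortion coefficients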

In accordance with an embodiment, the movable portion 104 includes a plurality of J joints arranged to be rotated by a corresponding plurality of J servo motors. Further, positioning the at least one calibration marker includes controlling the plurality of J servo motors to perform a set of motions M_1, M_2, ..., M_J to rotate the plurality of J joints. In other words, a pre-defined set of commands stored at the computational unit 108 is sent to the first servo motor 112A and the second servo motor 112B. The servo motors perform a set of motions M_1, ..., M_J, where J is the number of joints in the movable portion 104 and M_j is determined by moving the j-th robot joint while the other joints are static in their zero position. The joint moves, and the at least one calibration marker, such as the calibration marker 106A, is statically attached at an arbitrary position and orientation with respect to the moving part of the robot. As a result, the at least one calibration marker performs a circular movement while the joint rotates. The angular trajectory of the joints needs to be large enough for the camera 110 to capture a plurality of distinguishable positions of the markers Δ_i. Further, for each j = 1, ..., J, a sequence of data collection, image processing, rotational axis estimation, and reduction ratio estimation is repeated to obtain a rotational axis position and orientation estimation for each joint A_j, defined by a tuple (V_j, Q_j), where V_j is a 3D rotation vector that defines the orientation of the j-th rotation axis and Q_j is a 3D point that defines the position of the rotation axis, together with the reduction ratio r_j.
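A minimal orchestration sketch of this per-joint procedure is given below for illustration only; the robot and camera interfaces (move_all_to_zero, sweep_targets, rotate_joint, capture, encoder_angle) and the corner detector detect_corners are hypothetical, while marker_pose() and estimate_axis() are the sketches introduced earlier:

    import numpy as np

    def calibrate_axes(robot, camera, J, marker_side, K, dist):
        axes = []
        for j in range(J):
            robot.move_all_to_zero()            # all joints to zero position
            frames, angles = [], []
            for target in robot.sweep_targets(j):   # large angular trajectory
                robot.rotate_joint(j, target)
                frames.append(camera.capture())
                angles.append(robot.encoder_angle(j))
            # marker centres P_1 .. P_T for this motion M_j
            P = np.array([marker_pose(detect_corners(f), marker_side,
                                      K, dist)[0] for f in frames])
            x0 = np.array([0, 0, 1, 0, 0, 0, 100.0])  # crude initial guess
            axes.append(estimate_axis(P, np.array(angles), x0))
        return axes   # one (V_j, Q_j, r_j) tuple per joint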

In accordance with an embodiment, for 1 ≤ n ≤ J, the motion of the n-th joint, M_n, is determined by controlling the n-th servo motor to rotate the n-th joint while maintaining the remaining joints of the plurality of J joints stationary in a respective zero position. In other words, the number of joints is equal to the number of servo motors, and as a servo motor moves, the movement is transferred to the corresponding joint. This movement is controlled by the computational unit 108. If only one servo motor is rotating, then only one joint is in motion, and in this case the other joints are considered to be in the zero position.

In accordance with an embodiment, at least one calibration marker is provided to one or more links each connecting two adjacent joints of the plurality of J joints, and an encoder is provided to each servo motor of the plurality of J servo motors, each of the plurality of encoders being configured to output encoder data corresponding to a respective joint of the plurality of J joints. Further, the encoder data includes servo values (e.g., angles) of the corresponding servo motor. Movements of the servo motor translate to rotation of the corresponding joint. Upon evaluating the movement, the encoder generates the encoder data; herein the encoder evaluates the angles of rotation of the joint movements, which correspond to the servo values.

In accordance with an embodiment, the set of motions M_1, M_2, ..., M_J is arbitrary, and processing the plurality of images comprises, for each n-th joint of the plurality of J joints and for each image of the plurality of images, transforming a position of the at least one calibration marker between the (n-1)-th joint and the n-th joint from a moving frame of the (n-1)-th joint to a stationary frame of the (n-1)-th joint, in which the (n-1)-th joint is considered stationary and in a zero position, using the encoder data of the (n-1)-th joint.
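For illustration, such a transformation could be realized as follows, assuming the axis (V, Q) and the reduction ratio r of the (n-1)-th joint have already been estimated; cv2.Rodrigues converts an axis-angle vector into a rotation matrix:

    import cv2
    import numpy as np

    def to_stationary_frame(p, V, Q, r, alpha):
        # Undo the rotation of the (n-1)-th joint: rotate the observed marker
        # position p by -r*alpha about the axis through Q with direction V.
        V = np.asarray(V, dtype=np.float64)
        V = V / np.linalg.norm(V)
        R, _ = cv2.Rodrigues(V * (-r * alpha))
        return np.asarray(Q) + R @ (np.asarray(p) - np.asarray(Q))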

In accordance with an embodiment, the method 300 comprises determining a set of kinematic parameters, which comprises determining a link length between the first joint and the second joint. In other words, the link length between the first joint and the second joint is determined by processing performed in the computational unit 108. Moreover, the kinematic parameters that relate to every link and joint located between the robot base and the location of the at least one calibration marker are subject to calibration, except the lengths of the first and the last link.

In accordance with an embodiment, determining a set of kinematic parameters includes determining a marker-base transformation between a frame of the at least one calibration marker in a zero position and a frame of the base 102 of the robot 100A using the set of kinematic parameters, determining a marker-camera transformation between the frame of the at least one calibration marker in the zero position and a frame of the camera 110 by processing the plurality of images, and computing a camera-base transformation between the frame of the camera 110 and the frame of the robot base based on the determined marker-base transformation and the determined marker-camera transformation. In other words, the kinematic parameter update layer is responsible for processing the estimated rotational axis for each joint and outputting calibrated kinematic parameters. For example, denote the collection of kinematic parameters subject to calibration as L. Given the estimated rotational axes A_1 to A_J and the initial values of the kinematic parameters L_0, the kinematic parameter update layer finds an estimation of the true kinematic parameters that minimizes the difference between the actual rotational axes corresponding to the updated parameter values and the estimated axes A_1 to A_J. The difference between the two sets of axes, the estimated rotational axes A_1 to A_J and the actual rotational axes A_1(L) to A_J(L), may, for example, be defined as

    D(L, T) = Σ_{j=1..J} ( ||T(V_j) - V_j(L)|| + ||T(Q_j) - Q_j(L)|| ),

where T is a transformation operator that applies a certain rotation and translation to the axis position and orientation parameters.
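For illustration, the minimization over the kinematic parameters could be sketched as follows; forward_axes(), which computes each joint's model axis (V_j(L), Q_j(L)) from a parameter vector, and the initial values L0 are hypothetical stand-ins for the robot-specific kinematic model, and the alignment operator T is folded into forward_axes for brevity:

    import numpy as np
    from scipy.optimize import least_squares

    def axis_difference(L_params, V_hat, Q_hat, forward_axes):
        # Residuals between the model axes for parameters L and the
        # estimated axes (V_hat[j], Q_hat[j]) from the previous layer.
        res = []
        for j, (V, Q) in enumerate(forward_axes(L_params)):
            res.extend(V - V_hat[j])
            res.extend(Q - Q_hat[j])
        return np.array(res)

    # Example usage (V_hat, Q_hat, forward_axes, L0 assumed available):
    # L_star = least_squares(axis_difference, L0,
    #                        args=(V_hat, Q_hat, forward_axes)).x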

The method 300 calibrates the kinematic parameters of the robot, which include the robot link lengths, the joint positions and orientations, the zero position of the robot joints, and the reduction ratios per joint. With the calibrated parameters, the method 300 is able to achieve better accuracy of the end-effector position and orientation during motion execution.

The steps 302 and 304 are only illustrative, and other alternatives can also be provided where one or more steps are added, one or more steps are removed, or one or more steps are provided in a different sequence without departing from the scope of the claims herein.

There is further provided a non-transitory computer-readable storage medium comprising machine-readable code which, when executed by a processor, causes the processor to position the at least one calibration marker in a plurality of positions by rotating the first joint and/or the second joint. The processor further obtains a plurality of images of the at least one calibration marker, the plurality of images being captured by a camera 110 when the at least one calibration marker is at the plurality of positions. The processor further processes the plurality of images to determine, for each image of the plurality of images, a marker position (and optionally orientation) of the at least one calibration marker in three-dimensional space to output as marker position data. The processor further establishes a first error function based on the marker position data. The processor further determines a first rotational axis of the first joint and a second rotational axis of the second joint based on the first error function. The processor further establishes a second error function based on at least the first rotational axis and the second rotational axis. The processor further determines a set of kinematic parameters for a kinematic model of the robot 100A based on the second error function. In an example, the instructions are implemented on computer-readable media which include, but are not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Random Access Memory (RAM), Read Only Memory (ROM), Hard Disk Drive (HDD), Flash memory, a Secure Digital (SD) card, Solid-State Drive (SSD), a computer-readable storage medium, and/or CPU cache memory. In an example, the instructions are generated by a computer program, which is implemented in view of the method 300, and for use in implementing the method 300 on the processor.

Modifications to embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as "including", "comprising", "incorporating", "have", "is" used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural. The word "exemplary" is used herein to mean "serving as an example, instance or illustration". Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or to exclude the incorporation of features from other embodiments. The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments". It is appreciated that certain features of the present disclosure, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable combination or as suitable in any other described embodiment of the disclosure.