

Title:
METHOD FOR AUTOMATED 3D PART LOCALIZATION AND ADJUSTMENT OF ROBOT END-EFFECTORS
Document Type and Number:
WIPO Patent Application WO/2023/102647
Kind Code:
A1
Abstract:
A 3D vision system in viewing relationship to an industrial robot's workspace obtains a 3D point cloud to identify and locate an arbitrarily placed target part, calculates required adjustment of the target part's actual position against an initially "taught" reference position, and automatically adjusts the robot's programmed movements to accurately match the target part's position. Teaching of desired robot pose(s) relative to the reference position is performed with a physical reference part without need for CAD models, offline programming or operator expertise. The robot can execute simple pick and place operations involving a singular taught robot pose, but also more complex operations with toolpaths whose waypoints are composed of a plurality of taught robot poses. Rather than transforming taught robot poses, another implementation involves deriving a part frame from taught reference points, transforming the part frame, and updating the active working frame of the robot to the transformed part frame.

Inventors:
KHOSHDARREGI MATT (CA)
ALBORZI YOUSEF (CA)
MAGHAMI SEYEDALI (CA)
DHARIA BHAVIN NARENDRAKUMAR (CA)
NEWMAN MICHAEL TERRY (CA)
Application Number:
PCT/CA2022/051764
Publication Date:
June 15, 2023
Filing Date:
December 02, 2022
Assignee:
UNIV MANITOBA (CA)
International Classes:
B25J9/18; B25J19/04; G06V20/50
Foreign References:
CN111791239A  2020-10-20
JP2021070149A  2021-05-06
CN112109075A  2020-12-22
US20180222049A1  2018-08-09
US20180194007A1  2018-07-12
US20140229005A1  2014-08-14
US20120197464A1  2012-08-02
US20100161125A1  2010-06-24
Attorney, Agent or Firm:
ADE & COMPANY INC. (CA)
Claims:
CLAIMS:

1. A method of controlling movement of a robot relative to a target part in a robot workspace that is situated within a workable field of view of a 3D vision system, said method comprising:

(a) storing in non-transitory computer readable memory:

(i) a camera-robot transformation for transforming 3D vision system co-ordinates of said 3D vision system to a robot reference frame of said robot; and

(ii) a reference 3D point cloud, in the robot reference frame, representative of a reference part, of matching relation to said target part, when residing in a reference position within said robot workspace;

(b) using said 3D vision system, capturing a target part 3D point cloud of the target part while occupying a target position within the robot workspace;

(c) by one or more processors:

(i) using the stored camera-robot transformation, transforming the target part 3D point cloud to a transformed target part 3D point cloud in the robot reference frame;

(ii) performing point cloud registration between the transformed target part 3D point cloud and the reference 3D point cloud, and thereby deriving a reference-to-target position transformation;

(iii) adjusting programmed movements of the robot using said reference-to-target position transformation; and

(iv) commanding automated movement of the robot according to the adjusted programmed movements.

2. The method of claim 1 wherein step (a) further comprises storing, in said non-transitory computer readable memory, (iii) a robot reference pose corresponding to placement of an end-effector of the robot at a reference point relative to the reference part when residing in said reference position within said robot workspace.

3. The method of claim 2 wherein step (c)(iii) comprises transforming the robot reference pose to a targeted working pose for the robot using the reference-to-target position transformation.

4. The method of claim 3 wherein step (c)(iv) comprises commanding automated movement of the robot into said targeted working pose, and thereby placing the end-effector of the robot at a targeted point whose relation to the target position of the target part corresponds to a relation of said reference point to the reference position of the reference part.

5. The method of claim 1 wherein: step (a) further comprises storing, in said non-transitory computer readable memory, (iii) a reference part frame denoting a local coordinate reference frame of the reference part in the reference position; step (c)(iii) comprises transforming the reference part frame to a target part frame using the reference-to-target position transformation; and step (c)(iv) comprises implementing said target part frame as an active working frame of the robot in which the programmed movements thereof are executed.

6. The method of any one of claims 1 to 5 further comprising, before step (a), deriving the reference 3D point cloud through performance of a teaching procedure that includes: with the reference part placed in the reference position within the robot workspace, capturing an initial 3D point cloud of said reference part using said 3D vision system; and transforming said initial 3D point cloud to the robot reference frame using said camera-robot transformation, thereby deriving the reference 3D point cloud.

7. The method of any one of claims 2 to 4 further comprising, before step (a), deriving the reference 3D point cloud through performance of a teaching procedure that includes: with the reference part placed in the reference position within the robot workspace, capturing an initial 3D point cloud of said reference part using said 3D vision system; and transforming said initial 3D point cloud to the robot reference frame using said camera-robot transformation, thereby deriving the reference 3D point cloud; wherein said robot reference pose is a recorded robot pose recorded during said teaching procedure.

8. The method of claim 7 wherein the recorded robot pose is a human-guided pose into which the robot is guided by a human operator during the teaching procedure.

9. The method of claim 5 further comprising, before step (a), deriving the reference 3D point cloud through performance of a teaching procedure that includes: with the reference part placed in the reference position within the robot workspace, capturing an initial 3D point cloud of said reference part using said 3D vision system; transforming said initial 3D point cloud to the robot reference frame using said camera-robot transformation, thereby deriving the reference 3D point cloud; deriving said reference part frame by recording of a set of reference points, relative to the reference position of the reference part, to which the end-effector of the robot is posed during said teaching procedure.

10. The method of any one of claims 6 to 9 wherein said reference position is an arbitrarily placed position.

11. The method of any one of claims 2 to 4, 7 and 8 wherein: said reference pose is one of a plurality of reference poses stored in said computer readable memory, each corresponding to a different respective reference point relative to the reference position of the reference part; step (c)(iii) comprises transforming said plurality of reference poses to a plurality of target poses using the reference-to-target position transformation; and step (c)(iv) comprises commanding automated movement of the robot on a movement path that includes said plurality of target poses, each of which places the end-effector of the robot at a different respective target point whose relation to the target position of the target part corresponds to a relation of a respective one of the reference points to the reference position of the reference part.

12. The method of claim 7 or 8 wherein: said reference pose is one of a plurality of reference poses stored in said computer readable memory, each corresponding to a different respective reference point relative to the reference position of the reference part, and each being a respective one of a plurality of recorded robot poses recorded during said teaching procedure; step (c)(iii) comprises transforming said plurality of reference poses to a plurality of target poses using the reference-to-target position transformation; and step (c)(iv) comprises commanding automated movement of the robot on a movement path that includes said plurality of target poses, each of which places the end-effector of the robot at a different respective target point whose relation to the target position of the target part corresponds to a relation of a respective one of the reference points to the reference position of the reference part.

13. A robotics control system for controlling movement of a robot relative to a target part in a robot workspace, said robotics control system comprising:

(a) a 3D vision system having a field of view aimed on said robot workspace;

(b) non-transitory computer readable memory having stored therein:

(i) a camera-robot transformation for transforming 3D vision system co-ordinates of said 3D vision system to a robot reference frame of said robot; and

(ii) a reference 3D point cloud, in the robot reference frame, representative of a reference part, of matching relation to said target part, when residing in a reference position within said robot workspace;

(c) one or more processors configured to: after capture, using the 3D vision system, of a target part 3D point cloud of the target part while occupying a target position within the robot workspace, perform the following steps:

(i) using the stored camera-robot transformation, transform the target part 3D point cloud to a transformed target part 3D point cloud in the robot reference frame;

(ii) perform point cloud registration between the transformed target part 3D point cloud and the reference 3D point cloud, and thereby derive a reference-to-target position transformation;

(iii) adjust programmed movements of the robot using said reference-to-target position transformation; and

(iv) command automated movement of the robot according to the adjusted programmed movements.

14. The system of claim 13 wherein the non-transitory computer readable memory also has stored therein (iii) a robot reference pose corresponding to placement of an end-effector of the robot at a reference point relative to the reference part when residing in said reference position within said robot workspace.

15. The system of claim 14 wherein the one or more processors are configured to, in step (c)(iii), use the reference-to-target position transformation to transform the robot reference pose to a targeted working pose for the robot.

16. The system of claim 15 wherein the one or more processors are configured to, in step (c)(iv), command movement of the robot into said targeted working pose, and thereby place the end-effector of the robot at a targeted point whose relation to the target position of the target part corresponds to a relation of said reference point to the reference position of the reference part.

17. The system of claim 13 wherein: the non-transitory computer readable memory also has stored therein (iii) a reference part frame denoting a local coordinate reference frame of the reference part in the reference position; and the one or more processors are configured to: in step (c)(iii), transform the reference part frame to a target part frame using the reference-to-target position transformation; and in step (c)(iv), implement said target part frame as an active working frame of the robot in which the programmed movements thereof are executed.

18. The system of any one of claims 13 to 17 wherein the one or more processors are also configured to execute a teaching procedure that comprises: receiving confirmation of a presence of the reference part in the workspace of the robot in the reference position, and in response to said confirmation, capture an initial 3D point cloud of said reference part using said 3D vision system; and perform transformation of said initial 3D point cloud to the robot reference frame using said camera-robot transformation, thereby deriving the reference 3D point cloud, and storing said reference 3D point cloud in said non-transitory computer-readable memory for later use in step (c)(i).

19. The system of any one of claims 14 to 16 wherein the one or more processors are also configured to execute a teaching procedure that comprises: receiving confirmation of a presence of the reference part in the workspace of the robot in the reference position, and in response to said confirmation, capture an initial 3D point cloud of said reference part using said 3D vision system; perform transformation of said initial 3D point cloud to the robot reference frame using said camera-robot transformation, thereby deriving the reference 3D point cloud, and storing said reference 3D point cloud in said non-transitory computer-readable memory for later use in step (c)(i); and capture the reference pose, and store said reference pose in said non-transitory computer-readable memory for later use in step (c)(iii).

20. The system of claim 17 wherein the one or more processors are also configured to execute a teaching procedure that comprises: receiving confirmation of a presence of the reference part in the workspace of the robot in the reference position, and in response to said confirmation, capture an initial 3D point cloud of said reference part using said 3D vision system; perform transformation of said initial 3D point cloud to the robot reference frame using said camera-robot transformation, thereby deriving the reference 3D point cloud, and storing said reference 3D point cloud in said non-transitory computer-readable memory for later use in step (c)(i); and recording of a set of reference points, relative to the reference position of the reference part, to which the end effector of the robot is posed during said teaching procedure, and deriving the reference part frame from said reference points.

21. A method of preparing the system of any one of claims 13 to 20 for use, said method comprising: with the reference part placed in the reference position within the robot workspace, capturing an initial 3D point cloud of said reference part using said 3D vision system; and transforming said initial 3D point cloud to the robot reference frame using said camera-robot transformation, thereby deriving the reference 3D point cloud.

22. The method of claim 21 wherein said reference position is an arbitrarily placed position.

23. The method of claim 21 or 22 also comprising, with the reference part in the reference position, human guided movement of the end-effector of the robot toward a desired reference point, and having achieved placement of the end-effector at said desired reference point, recordal, in non-transitory computer readable memory, of a possessed pose of the robot as a reference pose.

24. The method of claim 21 or 22 also comprising, with the reference part in the reference position, moving the end-effector of the robot, under human guidance thereof, to a plurality of reference points, individually one after another, and upon achieved placement of the end-effector at each of said reference points, recording, in non-transitory computer readable memory, of at least one of either:

(i) a respectively possessed pose of the robot, for use as a respective reference pose; and/or

(ii) said reference points.

25. The method of claim 24 comprising storing, in non-transitory computer readable memory, a reference path that includes said at least one of either said reference poses and/or said reference points.

26. The method of claim 24 comprising recording said reference points, and deriving and storing, in non-transitory computer readable memory, a part frame derived from said reference points and denoting a local coordinate reference frame of the reference part in the reference position.

Description:
METHOD FOR AUTOMATED 3D PART LOCALIZATION AND ADJUSTMENT OF ROBOT END-EFFECTORS

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims benefit of U.S. Provisional Application No. 63/286,510, filed December 6, 2021, the entirety of which is incorporated herein by reference.

FIELD OF THE INVENTION

This application relates generally to the field of robotics, and more particularly to vision guided robotics.

BACKGROUND

Industrial robots used in manufacturing are valued for their productivity and repeatability. To ensure the accuracy of the robot operation, the parts of interest must be both dimensionally repeatable and located precisely as per the programmed robot instructions. If the part is not exactly where it is specified to be, the robot cannot complete its operation accurately, leading to manufacturing errors and even collisions within the robot cell. Typically, parts that cannot be located repeatedly within the cell are not suitable for robotic operations. Due to robots’ inability to detect changes in part location and orientation, a large amount of manual adjustment is required to handle such variations.

Accordingly, there is a need for automated solutions by which industrial robots can overcome variation in part placement without need for corrective human intervention.

SUMMARY OF THE INVENTION

According to a first aspect of the invention, there is provided a method of controlling movement of a robot relative to a target part in a robot workspace that is situated within a workable field of view of a 3D vision system, said method comprising:

(a) storing in non-transitory computer readable memory:

(i) a camera-robot transformation for transforming 3D vision system co-ordinates of said 3D vision system to a robot reference frame of said robot; and (ii) a reference 3D point cloud, in the robot reference frame, representative of a reference part, of matching relation to said target part, when residing in a reference position within said robot workspace;

(b) using said 3D vision system, capturing a target part 3D point cloud of the target part while occupying a target position within the robot workspace;

(c) by one or more processors:

(i) using the stored camera-robot transformation, transforming the target part 3D point cloud to a transformed target part 3D point cloud in the robot reference frame;

(ii) performing point cloud registration between the transformed target part 3D point cloud and the reference 3D point cloud, and thereby deriving a reference-to-target position transformation;

(iii) adjusting programmed movements of the robot using said reference-to-target position transformation; and

(iv) commanding automated movement of the robot according to the adjusted programmed movements.

According to a second aspect of the invention, there is provided a robotics control system for controlling movement of a robot relative to a target part in a robot workspace, said robotics control system comprising:

(a) a 3D vision system having a field of view aimed on said robot workspace;

(b) non-transitory computer readable memory having stored therein:

(i) a camera-robot transformation for transforming 3D vision system co-ordinates of said 3D vision system to a robot reference frame of said robot; and

(ii) a reference 3D point cloud, in the robot reference frame, representative of a reference part, of matching relation to said target part, when residing in a reference position within said robot workspace;

(c) one or more processors configured to: after capture, using the 3D vision system, of a target part 3D point cloud of the target part while occupying a target position within the robot workspace, perform the following steps:

(i) using the stored camera-robot transformation, transform the target part 3D point cloud to a transformed target part 3D point cloud in the robot reference frame;

(ii) perform point cloud registration between the transformed target part 3D point cloud and the reference 3D point cloud, and thereby derive a reference-to-target position transformation;

(iii) adjust programmed movements of the robot using said reference-to-target position transformation; and

(iv) command automated movement of the robot according to the adjusted programmed movements.

According to a third aspect of the invention, there is provided a method of preparing the system of the second aspect of the invention for use, said method comprising: with the reference part placed in the reference position within the robot workspace, capturing an initial 3D point cloud of said reference part using said 3D vision system; and transforming said initial 3D point cloud to the robot reference frame using said camera-robot transformation, thereby deriving the reference 3D point cloud.

BRIEF DESCRIPTION OF THE DRAWINGS

Preferred but non-limiting embodiments of the invention will now be described in conjunction with the accompanying drawings in which:

Figure 1 is a schematic block diagram illustrating one possible system architecture for a vision-based robotics control system of the present invention.

Figure 2A schematically illustrates cooperative interaction between an industrial robotic arm and accompanying 3D vision componentry of the Figure 1 system during an initial calibration procedure.

Figure 2B is a flowchart of the calibration procedure schematically shown in Figure 2A.

Figure 3 is a flowchart of a post-calibration teaching procedure performed subsequent to the calibration procedure of Figure 2B.

Figure 3A schematically illustrates cooperative interaction between the industrial robotic arm and the 3D vision componentry during a “single-pose teaching” instance of the teaching procedure of Figure 3.

Figure 3B schematically illustrates cooperative interaction between the industrial robotic arm and the 3D vision componentry during a “path teaching” instance of the teaching procedure of Figure 3.

Figure 3C schematically illustrates cooperative interaction between the industrial robotic arm and the 3D vision componentry during a “part-frame teaching” instance of the teaching procedure of Figure 3.

Figure 4A is a flowchart illustrating executional working use of the robotic arm after the calibration and teaching procedures to guide accurate placement or movement of the end-effector thereof relative to an arbitrarily placed target part.

Figure 4B schematically illustrates cooperative interaction between the industrial robotic arm and the 3D vision componentry during the flowcharted steps of Figure 4A.

Figure 5 is a flowchart elaborating on optional details of select steps of the Figure 4A flowchart.

DETAILED DESCRIPTION

Disclosed herein is a novel method and system that enables industrial robots to autonomously pick up, or otherwise interact with, complex parts (e.g. aerospace panels, vehicle body parts) by adjusting the programmed robotic movement path on-the-fly in order to compensate for errors in part location and orientation. By applying advanced point registration algorithms to data received from three-dimensional (3D) vision sensors, the robot can detect the part’s position and orientation. The robot can then adjust its programmed movements to accurately pick up the part, or perform other working operations thereon, without any manual intervention.

Unlike existing methods, which rely on 3D computer models of the part to achieve this goal, the solution disclosed herein instead employs a very intuitive ‘Teaching’ procedure which is performed only once on an actual physical sample of the real part that the industrial robot (e.g. robotic arm) will be working with. This simple teaching procedure can be completed by an average worker in a matter of a few minutes, without need for any programming or other particular expertise. A software package has been developed with a graphical interface to substantially automate this process. The disclosed method is agnostic to both the particular robot used, and the particular part being manipulated thereby. Deploying this new technology will allow industrial robots to automatically perform, for example, 3D pick and place operations, 3D tool path operations, and more, on parts that are arbitrarily placed within a robot cell, thus reducing downtime and increasing productivity and adaptability.

In brief, the working principle of the invention is to obtain a 3D point cloud to identify and locate a part, calculate the required adjustment of the located part’s actual location against an initially ‘taught’ location, and automatically adjust the robot’s programmed path to accurately match the actual location. While the detailed description and drawings include repeated use of the term “part” to describe the object with which the robot is interacting within the robot workspace, it will be appreciated that this object need not necessarily be a subcomponent “part” of a larger entity that is to be subsequently assembled from a collection of such smaller individual “parts”. Thus, unless stated otherwise, the term part is used in a broad sense to refer to any such object or workpiece with which the robot interacts, regardless of whether that part is something intended for later assembly or combination with one or more other “parts” to form an assembled “whole” composed of such assembled/combined parts.

As shown in simplified form in Figures 2A, 3A-3C and 4B, the primary equipment setup in the present invention includes a robot, more particularly a six degree-of-freedom (6DOF) robotic arm in the illustrated example, and an accompanying 3D vision system, which may be a camera-based 3D vision system employing one or more cameras, though laser-based 3D vision systems may alternatively be employed within the scope of the present invention. Since the illustrated embodiment employs a camera-based 3D vision system, the term camera is used to refer to any sensor component of that system, which in other embodiments may be of another sensing type suitable for visioning applications, for example a laser scanner. The 3D vision system’s operational field of view is aimed to encompass the workspace of the robot. The robot and each camera of the 3D vision system have their own respective 3D coordinate reference frames, also referred to herein as the robot reference frame and the camera reference frame, or simply the robot frame and camera frame, for brevity. The robot and camera are both connected to a computerized controller embodied by one or more computer devices with one or more processors, and one or more non-transitory computer readable media for storing executable statements and instructions for execution by said processor(s) to perform the various steps and processes described herein, as well as the various stored data (point clouds, transformations, robot poses, etc.) referenced herein to enable those described processes. The computerized controller preferably includes a display screen through which a graphical user interface is presentable to guide a human operator through the various human-aided steps described herein.

Figure 1 is a non-limiting example of an overall system architecture in which the simplified equipment setup of Figures 2A, 3A-3C and 4B may optionally be embodied. Here, the computerized controller is embodied by the combination of a robot controller 10 that is communicably connected to the robot 12, and a separate computer 14 that is communicably connected to both the robot controller 10 and the one or more cameras of the 3D vision system 16, of which there are two such cameras 16A, 16B in the illustrated example, though one camera may suffice in other instances. The computer 14 embodies, or is operably connected to, an electronic display screen 18 on which the graphical user interface (GUI) 20 is displayed to guide the human operator through the various human-aided steps of the processes described herein below, and to collect instructional and confirmatory user input from the human operator as those steps are completed. The display screen 18 may, for example, be a touchscreen display, in which case the display alone is operable to both display prompts or other guidance, and also collect user input via finger or stylus touch of actuable onscreen indicia 20A-20D of the user interface 20. In other instances, the display 18 may instead be of a display-only type, and accompanied by one or more other input devices (mouse, keyboard, trackpad) for receiving the user input, whether these are built-in devices of the computer 14, or separate peripheral devices connected thereto. The computer 14 also embodies the aforementioned processor(s), and embodies, or is connected to, the aforementioned computer readable media 14A.

Referring initially to Figures 2A & 2B, before the robot 12 can meaningfully interact with any part, an initial calibration procedure between the robot 12 and 3D vision system 16 must first be performed. First, at initial step 200 of Figure 2B, the 3D vision system 16 is installed in a suitably positioned manner to encompass the robot’s workspace within the 3D vision system’s operable field of view (FOV). Next, at step 202, a calibration artifact 22 visually recognizable to the 3D vision system from different viewing angles is attached by a human operator to the end-effector 12A of the robot 12 in a manner visually detectable by the 3D vision system 16 in various orientations within the robot workspace. Next, at step 204, through controlled maneuvering of the robot 12 by the human operator, via the robot controller 10, the robot is maneuvered into various poses among which the artifact is repositioned in different orientations, in each of which the artifact is captured by the 3D vision system 16. The captured camera data from each camera 16A, 16B, and the robot’s pose coordinates from the robot controller 10, in each such captured robot pose and artifact orientation, are received by the computer 14, and are used thereby at step 206 to determine, via automated calculation, a respective transformation matrix for each camera 16A, 16B by which coordinate points from that camera’s reference frame are transformable to the robot reference frame. Each such transformation matrix is referred to herein as a camera-robot transformation (whether or not the visioning system is actually camera-based), and each such camera-robot transformation is stored in computer-readable memory 14A at step 208 for subsequent retrieval and use in other procedures described below.
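By way of non-limiting illustration only, the calculation at step 206 might be realized along the following lines using OpenCV's hand-eye calibration routine; the pose lists, helper name and eye-to-hand convention shown here are illustrative assumptions rather than a prescribed implementation of the disclosed calibration.

```python
# Illustrative sketch only: one possible derivation of a camera-robot
# transformation (step 206) using OpenCV's hand-eye calibration.
# robot_poses and artifact_in_camera are assumed to be lists of 4x4
# homogeneous matrices gathered at step 204.
import numpy as np
import cv2

def camera_robot_transformation(robot_poses, artifact_in_camera):
    # For a fixed (eye-to-hand) camera, the base->gripper inverses are fed to
    # the solver so that its output is the camera pose in the robot base frame.
    R_b2g, t_b2g, R_a2c, t_a2c = [], [], [], []
    for T_robot, T_artifact in zip(robot_poses, artifact_in_camera):
        T_base_to_gripper = np.linalg.inv(T_robot)
        R_b2g.append(T_base_to_gripper[:3, :3])
        t_b2g.append(T_base_to_gripper[:3, 3])
        R_a2c.append(T_artifact[:3, :3])
        t_a2c.append(T_artifact[:3, 3])
    R, t = cv2.calibrateHandEye(R_b2g, t_b2g, R_a2c, t_a2c)
    T_cam_to_robot = np.eye(4)
    T_cam_to_robot[:3, :3] = R
    T_cam_to_robot[:3, 3] = t.ravel()
    return T_cam_to_robot   # stored per camera at step 208
```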

Having completed the necessary one-time calibration, the teaching procedure of Figure 3 is then carried out. Firstly, at step 300, an initial “teaching” or “reference” part 24 is placed in front of the robot 12 at an arbitrary location within the robot’s workspace, for example by a human operator, who may or may not be the same person who performed the one-time calibration procedure above, optionally in response to prompting of such human action by an on-screen prompt displayed in a workflow of the graphical user interface 20. Such arbitrary placement of the reference part 24 means that expensive fixturing for repeatable part positioning may be omitted. Next, at step 302, capture of a 3D point cloud of the reference part 24 by each camera 16A, 16B of the 3D vision system 16 is then triggered, for example in response to an input signal from the human operator via the graphical user interface 20 to signify that the reference part 24 has been placed in the workspace and is ready for digital capture by the 3D vision system 16. In the illustrated example of the graphical user interface 20, this input from the human operator is derived by their selection of an on-screen “scan reference part” button 20A. Next, at step 304, the computer 14 applies the respective camera-robot transformation for each camera 16A, 16B to the captured 3D point cloud from that camera, and thereby transforms each camera’s captured 3D point cloud of the reference part 24 into a respective reference 3D point cloud in the robot frame. In instances where the 3D vision system 16 has multiple cameras 16A, 16B, the transformed respective reference 3D point clouds from the different cameras 16A, 16B are assembled (combined) in the robot frame at step 306. Next, the singular reference point cloud, whether that be the only reference point cloud captured by a single-camera vision system, or the assembled point cloud combined from the multiple cameras of a multi-camera vision system, is subjected to an isolation process at step 308 in order to remove any background or other extraneous content other than the point cloud representation of the reference part, and the isolated reference point cloud is stored in computer readable memory 14A at step 310. At this point, the position occupied by the placed reference part 24, and the transformed reference 3D point cloud representative thereof, both denote a “teaching position” or “reference position” of the reference part 24.
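By way of non-limiting illustration only, steps 302-310 might be realized as follows using the Open3D library (0.13+ API assumed); the capture callables, crop box and output path are placeholders for the actual vision-system driver and workspace limits, not elements of the disclosed system.

```python
# Illustrative sketch only: capture (302), transform to the robot frame (304),
# assemble multi-camera clouds (306), isolate the part (308) and store (310).
import open3d as o3d

def teach_reference_cloud(capture_functions, camera_robot_T, crop_box, out_path):
    # capture_functions: {camera_id: callable returning an o3d.geometry.PointCloud}
    # camera_robot_T:    {camera_id: 4x4 camera-robot transformation from step 208}
    # crop_box:          o3d.geometry.AxisAlignedBoundingBox bounding the part region
    merged = o3d.geometry.PointCloud()
    for cam_id, capture in capture_functions.items():
        cloud = capture()                          # step 302: camera-frame point cloud
        cloud.transform(camera_robot_T[cam_id])    # step 304: camera frame -> robot frame
        merged += cloud                            # step 306: assemble in the robot frame
    reference = merged.crop(crop_box)              # step 308: isolate the reference part
    o3d.io.write_point_cloud(out_path, reference)  # step 310: store for later retrieval
    return reference
```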

Next, at step 312, with the teaching/reference part 24 remaining in this reference position, the human operator will jog the robot’s end-effector 12A (e.g. gripper) toward the reference part 24, which, depending on the robot type and its operational capabilities, may be performed manually via direct physical user-manipulation of the robot itself in hands-on fashion, or electronically via user-directed operation of the robot controller 10 through a user-input device thereof. Once the operator is satisfied with the position of the end-effector 12A at a selected reference point on or near the reference part 24, for example as signaled to the controller via a confirmatory operator input from the user interface 20, for example by selection of an onscreen “record reference pose” button 20B, the computer 14 will record the robot’s current pose coordinates in computer readable memory 14A as a “taught” reference pose at step 314. At this point, in a “single-pose teaching” instance denoted by the Figure 3A scenario where the human operator is intending to teach only a singular robot pose (e.g. for the “pick” aspect of a pick and place operation), this recordal of a singular reference pose denotes the end of the teaching procedure, and the singular reference pose denotes a state of the robot 12 in which the end-effector 12A thereof would occupy the same reference point on or near another matching part in the future if likewise placed in the robot’s workspace in the same reference position. The reference part’s 3D point cloud in the reference position and the robot’s singular reference pose, and though not typically needed, optionally the reference point occupied by the robot’s end-effector 12A in that taught reference pose, thus serve as pre-established reference parameters derived from the one-time teaching procedure for the purpose of adjusting the robot’s computer-controlled (i.e. automated) movement during subsequent executional working use of the robot, where the placement of a future non-teaching or target part 24’ (of matching relation to the reference part 24) may be in disagreement with the reference position, thus necessitating adjustment of the robot’s computer-commanded pose in order to successfully place the end-effector 12A at the same reference point on or near that future target part 24’.

On the other hand, the teaching procedure of Figure 3 is not limited to teaching of a singular reference pose, and can alternatively be used to teach multiple reference poses of the robot 12, or multiple reference points pointed to by the robot’s end effector 12A, for various purposes during later executional working use of the robot 12 to perform more complex operations on a target part than a single-pose pick-and-place operation. For example, the robot 12 may be taught to guide its end-effector 12A along a desired path on the part (e.g. for welding or other tool operations), of which the reference points denote waypoints of this path. Such a scenario is schematically illustrated in Figure 3B, where a generally circular broken line on a surface of the reference part 24 denotes a reference path to be taught to the robot 12, and a set of solid dots at discretely spaced positions along the broken line path denote reference waypoints of the reference path, for each of which a respective robot reference pose is to be taught. Figure 3C denotes another scenario, where teaching of multiple reference points alone (without corresponding poses) is useful, specifically to teach a local 3D coordinate reference frame of the part itself (hereinafter simply a “part frame”). Such teaching of a part frame may be accompanied by subsequent teaching and storing of one or more target poses, defined locally in the derived part frame, that are later to be achieved by the robot 12 during executional working use thereof on a target part to perform any variety of operation thereon (e.g. drilling of a hole at a particular location on the part, defined in the local part frame thereof). Additionally, or alternatively, one or more predefined poses likewise defined in a local part frame of the part may be stored in memory for later use once that part frame has been taught in the reference position of the reference part. Since the part frame is taught using the reference part in the reference position, the taught part frame is subsequently referred to herein as the reference part frame.

Those skilled in the art will be familiar with teaching an industrial 6DOF robot a user-specified 3D coordinate system by pointing the end-effector 12A of the robot 12 to three distinct points in the workspace, whereafter any co-ordinate point in the robot’s workspace can be defined in that user-defined coordinate system. The three points may be treated by the robot controller 10 as an origin point, an x-axis point, and an x-y plane point of the user-specified X, Y, Z coordinate system. Alternatively, the three points may be treated by the robot controller 10 as two x-axis points, and one y-axis point. This existing functionality of the robot controller 10 is exploited in the part-frame teaching scenario of the present invention in order to derive the reference part frame.
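By way of non-limiting illustration only, a part frame may be assembled from three taught points under the first convention mentioned above (origin point, x-axis point, x-y plane point); the sketch below is one straightforward realization under that assumption, and the function name is hypothetical.

```python
# Illustrative sketch only: build a 4x4 part frame (in the robot frame) from
# three taught reference points treated as origin, x-axis and x-y plane points.
import numpy as np

def part_frame_from_points(p_origin, p_on_x_axis, p_in_xy_plane):
    x = p_on_x_axis - p_origin
    x = x / np.linalg.norm(x)
    z = np.cross(x, p_in_xy_plane - p_origin)      # normal to the taught x-y plane
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)                             # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x, y, z         # frame axes as columns
    T[:3, 3] = p_origin                            # frame origin
    return T
```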

Turning back to Figure 3, for multi-pose or multi-point teaching instances such as the path teaching scenario of Figure 3B or part-frame teaching scenario of Figure 3C, first-time completion of step 314 does not denote the end of the teaching procedure, which instead includes subsequent repetition of steps 312 and 314, until all of the desired reference poses/points have been recorded, for example as signaled by receipt of a confirmatory signal from the human operator, through the user interface 20, that the last recorded reference pose was the intended final reference pose. In the final step 316 of such multi-pose/multi-point teaching instances, the computer 14 constructs a reference path or reference part frame from the recorded reference poses/points, and stores same in computer readable memory 14A. It is anticipated that, typically, the teaching procedure of Figure 3 need be performed only once for each particular type of part with which the robot 12 is intended to work.

Having described the teaching procedure with reference to Figures 3 to 3C, working use of the now-taught robot in a subsequent executional working procedure interacting with a targeted part 24’ of matching relation to the reference part 24 is illustrated in Figures 4A & 4B, of which the process outlined in Figure 4A is repeated anew every time such a target part 24’ is arbitrarily placed in front of the robot 12 within the workspace thereof, as denoted at step 400.

Once presence and readiness of this targeted part 24’ in the robot workspace are detected or otherwise confirmed, the computer 14 initiates a point cloud collection routine 402, which consists of the same steps 302-308 described above for the teaching procedure of Figure 3, except executed on the target part 24’ instead of the reference part 24. In the illustrated example of the graphical user interface 20, user-based confirmation of the presence of the target part 24’ may be inputted by a human operator (whether the same or a different operator than from the calibration and teaching procedures) via their selection of an on-screen “scan target part” button 20C, for example, during initial post-teaching testing of the robot’s execution under human supervision before transitioning to a fully automated working context. In other cases, automated detection of the presence of the target part may trigger the point cloud collection routine 402. The point cloud collection routine 402 includes:

- capture of a new 3D point cloud from each camera 16A, 16B of the 3D vision system (akin to step 302), which in routine 402 thus denotes a current actual position of the target part 24’, as opposed to the reference position of the earlier reference part 24, and so this newly captured 3D point cloud and the target part position represented thereby are henceforth referred to as the target part 3D point cloud and the target position, respectively;

- application, by the computer 14, of the respective camera-robot transformation for each camera to the respective target part 3D point cloud captured thereby (akin to step 304), thus transforming each target part 3D point cloud from the respective camera frame to a transformed target part 3D point cloud in the robot frame;

- in instances where the 3D vision system 16 has multiple cameras 16A, 16B, assembly (combination), by the computer 14, of the transformed target part 3D point clouds from the different cameras 16A, 16B (akin to step 306) into a singular transformed target part 3D point cloud; and

- isolation, by the computer 14, of the singular transformed target part 3D point cloud, whether that be the sole transformed target part 3D point cloud from a single-camera 3D vision system, or the assembled transformed target part 3D point cloud combined from multiple cameras of a multi-camera vision system (akin to step 308).

Having completed the point cloud collection routine 402, the computer 14 then retrieves one or more reference 3D point clouds from the computer readable memory 14A at step 404. At step 408, the computer 14 uses point cloud registration between the singular transformed target part 3D point cloud and the reference 3D point cloud for the matching reference part, and thereby derives a reference-to-target position transformation matrix. This denotes a transformation that would need to be applied to the reference position of the matching reference part 24 in order to achieve the target position of the target part 24’. In the instance where there are multiple reference point clouds stored in computer-readable memory 14A for different respective reference parts 24, step 408 may be preceded by step 406, where the computer performs a preliminary coarse registration between the singular transformed target part 3D point cloud and the plurality of stored reference point clouds corresponding to different reference parts. In this step 406, the computer 14 may apply a scored evaluation scheme to find the best match to said singular transformed target part 3D point cloud from among the plurality of stored reference point clouds. At step 408, the computer 14 then applies a subsequent fine registration between the singular transformed target part 3D point cloud and the best matched reference point cloud to derive the reference-to-target position transformation matrix.
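By way of non-limiting illustration only, the scored evaluation of step 406 might be realized as below with Open3D's feature-based RANSAC registration, using the resulting inlier fitness as the score; the voxel size, feature radii and dictionary layout are illustrative assumptions rather than prescribed parameters.

```python
# Illustrative sketch only: coarse-register the target cloud against each
# stored reference cloud (step 406) and keep the best-scoring match.
import open3d as o3d

def best_matching_reference(target_cloud, reference_clouds, voxel=0.005):
    def downsample_and_features(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    tgt_down, tgt_fpfh = downsample_and_features(target_cloud)
    best_name, best_result = None, None
    for name, ref in reference_clouds.items():
        ref_down, ref_fpfh = downsample_and_features(ref)
        # Registering the reference (source) onto the target yields a coarse
        # reference-to-target transformation, scored by its fitness.
        result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            ref_down, tgt_down, ref_fpfh, tgt_fpfh, True, 1.5 * voxel)
        if best_result is None or result.fitness > best_result.fitness:
            best_name, best_result = name, result
    return best_name, best_result.transformation
```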

The taught reference pose, reference path or reference part frame is then retrieved from the computer readable memory 14A at step 410. Next, at step 412, the reference-to-target position transformation is applied by the computer 14 to the taught reference pose, reference path or reference part frame, thereby deriving a transformed target pose, target path or target part frame, which is then sent to the robot controller 10 at step 414 for execution of robot movements according thereto. In some instances, robot command at step 414 may be conditional on a confirmatory or instructional input to initiate such command. For example, during initial post-teaching testing of the robot’s execution under human supervision before transitioning to a fully automated context, such confirmatory or instructional input may be executed by human operator selection of an on-screen “execute process on target” button 20D in the graphical user interface 20.

Still referring to step 414, in the instance of a singular taught reference pose recorded in accordance with Fig. 3A, the robot controller 10 commands the robot 12 to the transformed reference pose, thus achieving proper placement and orientation of the robot’s end effector 12A on or near the target part 24’ at a targeted point thereon or near thereto whose location on or near the target part 24’ is equivalent to the location of the reference point on or near the reference part, as taught during the original teaching process. Alternatively, in the instance of a taught path recorded in accordance with Fig. 3B, the robot controller 10 commands the robot 12 to move through the transformed reference path at step 414, thus achieving proper movement of the robot’s end effector 12A along a target path on the target part 24’ that is equivalent to the reference path taught on the reference part 24 during the teaching process, whereby the path-governed movement of the robot includes occupation of transformed poses at target waypoints along this target path that correspond to the taught reference poses at the reference waypoints of the reference path. Since the reference path is composed at least partially of the reference poses, the execution of a taught path inherently includes transformation of poses, and command of the robot to those transformed poses, just like execution of a taught single pose.

By contrast, no pose transformation is performed in the working execution that follows a taught part-frame recorded in accordance with Fig. 3C. Instead of using the reference-to-target transformation to transform poses during executional working use of the robot 12, the computer 14 applies the reference-to-target transformation to the reference part frame, thus converting it to a target part frame of conforming relation to the target position of the target part 24’. The computer 14 sends the target part frame to the robot controller 10 for implementation of this target part frame as the active working frame of the robot controller 10, and any taught poses from the teaching procedure are already defined locally within this target frame. So, in the single-pose and path scenarios, the active working frame is the robot frame, and doesn’t change, and the on-the-fly adjustment of programmed robot movements (i.e. the taught reference pose or reference path) by the computer is the transformation of the pose(s)/path using the vision-derived reference-to-target transformation, whereas in the part frame scenario, the on-the-fly adjustment of programmed robot movements (taught pose(s), or predefined poses, defined in the part frame) is transformation of the part frame from the reference position to the target position, again using the vision-derived reference-to-target transformation.
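By way of non-limiting illustration only, the two adjustment modes contrasted above reduce to the following matrix operations when poses and frames are expressed as 4x4 homogeneous matrices in the robot frame; the function names are hypothetical and the sketch is not a prescribed implementation.

```python
# Illustrative sketch only: the two adjustment modes, with T_ref_to_target
# being the vision-derived reference-to-target transformation (step 408).
import numpy as np

def adjust_taught_poses(T_ref_to_target, taught_poses):
    # Single-pose / path mode: transform each taught end-effector pose and
    # command the robot to the transformed pose(s); the robot frame remains
    # the active working frame.
    return [T_ref_to_target @ T_pose for T_pose in taught_poses]

def adjust_part_frame(T_ref_to_target, T_reference_part_frame):
    # Part-frame mode: transform only the taught part frame; locally defined
    # poses are reused unchanged once the robot controller's active working
    # frame is switched to this transformed frame.
    return T_ref_to_target @ T_reference_part_frame
```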

Figure 5 elaborates on steps 406 and 408 of Figure 4A in more detail, where the preliminary coarse registration at step 406 may, for example, involve running a Random Sample Consensus (RANSAC) algorithm a predetermined number of times (e.g. ten times), where the solution with the highest evaluation score is that which has the largest number of correspondences. Visual markers or tagged part features may optionally be used for verification purposes, or in combination with RANSAC to add robustness, or used as the primary method for coarse registration instead of RANSAC. The fine registration at step 408 may include performance of an Iterative Closest Point (ICP) algorithm to find the best fit between the reference and target point clouds. Optionally, as shown at step 408A, local registration may follow, where the point cloud is isolated in the vicinity of the taught reference points, and ICP is performed locally to better match the reference and target point clouds at the taught reference points. All of the obtained transformations can then be combined at 408B to find a final reference-to-target transformation between the reference and target point clouds.
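By way of non-limiting illustration only, steps 408, 408A and 408B might be chained as follows with Open3D's ICP implementation; the correspondence distances, neighbourhood radius and default point-to-point estimator are illustrative assumptions rather than required parameters.

```python
# Illustrative sketch only: fine ICP (408), optional local ICP near the taught
# reference points (408A), and composition into a final transformation (408B).
import numpy as np
import open3d as o3d

def refine_registration(reference, target, T_coarse, taught_points, radius=0.03):
    icp = o3d.pipelines.registration.registration_icp(
        reference, target, 0.01, T_coarse)            # step 408: global fine fit
    T_global = np.array(icp.transformation)

    # Step 408A: isolate both clouds around the taught reference points and
    # run ICP locally for a tighter fit in the working region.
    centre = np.asarray(taught_points).mean(axis=0)
    box = o3d.geometry.AxisAlignedBoundingBox(centre - radius, centre + radius)
    ref_aligned = o3d.geometry.PointCloud(reference)  # copy; keep stored cloud intact
    ref_aligned.transform(T_global)
    local = o3d.pipelines.registration.registration_icp(
        ref_aligned.crop(box), target.crop(box), 0.005)

    # Step 408B: combine the global and local results into the final
    # reference-to-target transformation.
    return np.array(local.transformation) @ T_global
```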

In instances involving relatively large parts or other scenarios where the part cannot be fully seen from one angle, an alternative to the multi-camera example of the illustrated embodiment, with multiple cameras installed at fixed locations, is installation of a singular camera on the robot, and use of robot movement to reposition the camera around the workspace to take multiple point cloud captures from different vantage points. In this case, the calibration procedure (Figs. 2A & 2B) is modified by fixing the artifact 22 at a static location, and moving the robot-carried camera to different positions. It will also be appreciated that alternative calibration methods to those described above and illustrated in Figs. 2A & 2B may instead be employed to derive the camera-robot transformation.

It will also be appreciated that use of the term “pose” or “robot pose” herein refers to the “robot end-effector pose”, i.e. the pose of the end effector frame (see 12A in Figure 3A), and that while the teaching and execution procedures can of course be completed using the same robot as one another, this need not necessarily be the case. Instances in which the end-effector pose relative to the reference part is taught with one robot, and then the working execution is performed by a different robot commanded to put its end-effector in the same pose relative to the target part, are also within the scope of the present invention.

It will also be appreciated that while the illustrated embodiment of Figure 1 relies on the combination of a dedicated robot controller 10 and a separate general-purpose computer that communicates with the robot controller and 3D vision system and performs the workflow described herein, thus representing an “add-on” for an existing robot controller, other embodiments in which execution of the novel workflow described herein is incorporated directly into a robot controller, having a direct connection to the 3D vision system componentry, are also within the scope of the present invention. In yet another embodiment omitting the computer 14, the one or more cameras 16A, 16B of the 3D vision system 16 may have one or more local processors integrated therein, and at least some of the processing steps (such as point cloud collection, assembly, isolation, and registration) could be performed onboard the camera(s) by the local processor(s) thereof, with a remainder of the processing being done by the robot controller 10. In another embodiment employing one or more processing-capable cameras or other sensors in its 3D vision system, the computer 14 may still be included, with any variety of the different processing tasks being distributed among two or more of the 3D vision system, computer and robot controller.

In support of the utility of the invention, a prototype was developed using a KUKA Kr6 r700-2 industrial robot equipped with a custom pneumatic suction-cup end effector. A Zivid camera was used to collect a 3D point cloud of a large convex composite panel mounted to a movable cart (i.e. cart on wheels). A software user interface was developed to guide a non-expert user through a few simple and intuitive tasks to set up the system. The invention can be directly commercialized through a software platform. The software can be deployed either as a standalone package or as a plug-in to other existing programs, such as RoboDK, RobotStudio, etc. A standalone software implementation could interact either directly with an existing robot controller (via built-in modules like KUKA EKI) or an external PLC via ethernet/IP communication. This invention can be used by many manufacturing companies with a wide range of applications. This technology has particularly significant potential in applications that frequently require setting up a robot to work with new parts, e.g. in high-mix and low batch size manufacturing, or applications in which parts cannot be placed inside the robot cell in a repeatable way due to the lack of part-specific fixturing.

Numerous benefits and advantages arise from the disclosed system and methodology:

• Unlike many existing methods, the proposed method does not require a CAD model of the part.

• Manual teaching of the initial part is intuitive and easy and only needs to be done once. Also, by manually teaching the end effector orientation for the initial part, no detailed robot-cell calibration is required (as compared to offline programming). The technology utilizes the industrial robot’s repeatable nature.

• The proposed method exploits the full 3D point cloud of the part, which increases the accuracy of subsequent robotic operations.

• By using a 3D point cloud (compared to 2D vision systems), parts with complex geometries can be accurately localized and picked up by the robot.

• When the CAD file is available, the proposed method can be used for mapping manually selected features from the CAD model, e.g., machining path, pick up point, etc., to the physical parts using the same point cloud registration process.

Since various modifications can be made in my invention as herein above described, and many apparently widely different embodiments of same made, it is intended that all matter contained in the accompanying specification shall be interpreted as illustrative only and not in a limiting sense.