

Title:
MOTION CONTROL SYSTEM FOR A DIRECT DRIVE ROBOT THROUGH VISUAL SERVOING
Document Type and Number:
WIPO Patent Application WO/2016/193781
Kind Code:
A1
Abstract:
The present invention describes a system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration, comprising: a three-joint robot manipulator arm; a fixed web camera, whose panoramic view completely covers the working area of the robot manipulator arm, to locate the end-effector and the objective; and a microprocessor coupled to the three-joint robot manipulator arm and the web camera; wherein the microprocessor is configured to: perform the visual servoing based on three marked reference images, each image corresponding to the positioning of one of the three joints of the robot; transmit the visual servoing information pertaining to the positioning images of each of the three joints of the robot arm output by the fixed web camera; store the visual servoing information output by the fixed web camera; and calculate the coordinates of the joints based on the centroid of each marked reference image.

Inventors:
CID-MONJARAZ JAIME JULIÁN (MX)
REYES-CORTÉS JOSÉ FERNANDO (MX)
Application Number:
PCT/IB2015/054096
Publication Date:
December 08, 2016
Filing Date:
May 29, 2015
Assignee:
BENEMÉRITA UNIV AUTÓNOMA DE PUEBLA (MX)
International Classes:
B25J9/00; B25J13/00; G05B17/02; G05D3/00; G06V10/42
Other References:
CID, J. ET AL.: "Controlador con retroalimentación visual para brazo robot", REVISTA IBEROAMERICANA DE SISTEMAS, CIBERNÉTICA E INFORMÁTICA, vol. 8, no. 2, 2011, pages 20 - 25, XP055332179, ISSN: 1690-8627, Retrieved from the Internet [retrieved on 20160114]
CID, J.M.; ET AL.: "Visual Servoing Controller for Robot Manipulators", INTERNATIONAL CONFERENCE ON ELECTRICAL, COMMUNICATIONS, AND COMPUTERS (CONIELECOMP 2009), February 2009 (2009-02-01), pages 153 - 158, XP031489676, ISBN: 978-0-7695-3587-6
MORENO-ARMENDARIZ, M.A.; ET AL.: "A new fuzzy visual servoing with application to robot manipulator", PROCEEDINGS OF THE 2005 AMERICAN CONTROL CONFERENCE, vol. 5, June 2005 (2005-06-01), pages 3688 - 3693, XP010820370, ISBN: 978-0-7803-9098-0
NADI, F.; ET AL.: "Visual servoing control of robot manipulator with Jacobian matrix estimation", 2014 SECOND RSI/ISM INTERNATIONAL CONFERENCE ON ROBOTICS AND MECHATRONICS (ICROM), October 2014 (2014-10-01), pages 405 - 409, XP032709481
YANTAO SHEN ET AL.: "Asymptotic trajectory tracking of manipulators using uncalibrated visual feedback", IEEE /ASME TRANSACTIONS ON MECHATRONICS, vol. 8, no. 1, March 2003 (2003-03-01), pages 87 - 98, XP011076243, ISSN: 1083-4435
Attorney, Agent or Firm:
VON WOBESER HOEPFNER, Claus Werner (Colonia Santa Fe México, D.F., MX)
Claims:
CLAIMS

1. - A system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration, comprising:

a three-joint robot manipulator arm; and

a fixed web camera, whose panoramic view completely covers the working area of the robot manipulator arm to locate the end-effector and the objective; and

a microprocessor coupled to the three-joint robot manipulator arm and the web camera; wherein

the microprocessor is configured to:

perform the visual servoing based on three marked reference images, each image corresponding to the positioning of one of the three joints of the robot;

transmit the visual servoing information pertaining to the positioning images of each of the three joints of the robot arm output by the fixed web camera;

store the visual servoing information output by the fixed web camera; and

calculate the coordinates of the joints based on the centroid of each marked reference image.

2. - A system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration according to claim 1, wherein the working area of the robot is a circle with a radius of 0.7 meters.

3. - A system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration according to claim 1, wherein the web camera is a CCD (charge-coupled device) camera placed in front of the robot.

4. - A system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration according to claim 1, wherein the three marked reference images correspond to three black disks of different sizes, mounted on the shoulder, elbow and end-effector of the robot manipulator arm, respectively.

5. - A system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration according to claim 1, wherein the transmission and reception of information is wired or wireless.

Description:
MOTION CONTROL SYSTEM FOR A DIRECT DRIVE ROBOT

THROUGH VISUAL SERVOING

BACKGROUND

1. Technical Field of the Invention

The present invention relates to a system for controlling the motion of a direct drive robot via controllers with visual servoing for a fixed-camera configuration.

2. Particulars of the Invention

The problem of positioning robot manipulators using visual information has been an area of research for the last 30 years, and in recent years attention to this subject has grown drastically. Visual servoing can solve many problems that limit the application of current robots, such as long-range exploration, automatic driving, medical robotics and aerial robots. Visual servoing refers to closed-loop position control of a robot end-effector using visual feedback; the term was introduced by Hill and Park in 1979. It represents an attractive solution for positioning and moving autonomous robot manipulators evolving in unstructured environments.

Regarding visual servoing, Weiss and William have categorized two classes of vision-based robot control: position-based visual servoing and image-based visual servoing. In the former, the main features are extracted from an image and the position of the target with respect to the camera is estimated; using these values, an error signal between the current and the desired position of the robot is defined in the workspace. In the latter, the error signal is defined directly in terms of the main image features to control the robot end-effector. In both classes of methods, object feature points are mapped onto the camera image plane, and among these a particularly useful class of image features for robot control is the centroid.

Regarding the configuration between camera and robot, either a fixed camera or a camera-in-hand may be used. Fixed-camera robotic systems are characterized in that a vision system fixed in the world coordinate frame captures images of both the robot and its environment; the control objective of this approach is to move the robot end-effector in such a way that it reaches a desired target. In the camera-in-hand configuration, often called eye-in-hand, a camera is generally mounted on the robot end-effector and provides visual information of the environment; in this configuration, the control objective is to move the robot end-effector in such a way that the projection of the static target is always at a desired location in the image given by the camera.

Since the first visual servoing systems were reported in the early 1980s, the last few years have seen an increase in these reports and in published research results. An excellent overview of the main issues in visual servoing for the control of robot manipulators is given by Corke. However, few rigorous results have been obtained that incorporate the nonlinear robot dynamics. The first explicit solution of the problem formulated in this application was due to Miyazaki and Masutani in 1990, with a control scheme delivering bounded control actions through visual servoing, using the transposed-Jacobian-based philosophy introduced by Takegaki and Arimoto. Kelly addresses the visual servoing of planar robot manipulators under the fixed-camera configuration. Malis proposed a new approach to vision-based robot control, called 2-1/2-D visual servoing, in 1999. Also in 1999, Conticelli addressed the visual servoing problem by coupling the robot's nonlinear control theory with a convenient representation of the visual information used by the robot.

Park and Lee (2003) present a visual servoing control for a ball on a plate to track its desired trajectory. Kelly proposes a novel approach aimed at the application of the velocity field control philosophy using visual servoing of the robot manipulator under a fixed-camera configuration. Schramm presents a novel visual servoing approach based on a so-called extended-2D (E2D) controller for the coordinates of the points constituting a tracked target, and provides simulation results. Malis and Benhimane (2005) present a generic and flexible system for vision-based robot control, integrating visual tracking and visual servoing approaches in a unifying framework.

The present invention addresses the positioning problem in the fixed-camera configuration through position-based visual servoing of planar robot manipulators. The main contribution is the development of a new family of position-based visual controllers supported by a rigorous local asymptotic stability analysis, taking into account the full nonlinear robot dynamics and the vision model. The control objective is defined in terms of joint coordinates, which are deduced from visual information. The general control problem is called positioning control; this approach is particularly relevant when a video signal or images are included in the servoing loop, which remains an open and interesting problem in the scientific community. This application focuses on the positioning control of robot manipulators with visual servoing in the planar fixed-camera configuration. The solution to this issue is a proposed new control strategy with rigorous support according to automatic control techniques and experimental validation.

An important component of a robotic system is the acquisition, processing and interpretation of the information provided by the sensors. This information is used to derive signals for controlling the robot. Information about the system and its environment can be obtained through a variety of sensors, such as position, velocity, force, touch or vision sensors.

International patent application WO 2015/058297 A1 (VAKANSKI ET AL.), published on April 30, 2015, describes a method of programming at least one robot by demonstration, comprising: performing at least one demonstration of at least one task in the field of view of at least one fixed camera to obtain at least one observed task trajectory of a manipulated object, preferably at least one set of observed task trajectories; generating a generalized task trajectory from said at least one observed task trajectory, preferably from said at least one set of observed task trajectories; and executing said at least one task by said at least one robot in the field of view of said at least one fixed camera, preferably using image-based visual control to minimize the difference between the trajectory executed during said execution and the generalized task trajectory.

US 8,879,822 B2 (MATSUMOTO), published on November 4, 2014, describes a robot control system including a processing unit that performs visual control based on a reference image and a captured image, a robot control unit that controls a robot based on a control signal, and a storage unit that stores the reference image and a marker. The storage unit stores, as a reference image, an image with the marker located in an area of a workpiece or a robot hand. The processing unit generates, based on the captured image, an image captured with the marker located in an area of the workpiece or the robot hand, performs visual control based on the reference image with the marker and the image captured with the marker, generates the control signal, and outputs the control signal to the robot control unit.

Although in recent years the amount of research on the control of robot manipulators has increased, most studies only exhibit simulation results and very few include an experimental evaluation. Positioning controllers have been developed, but positioning controllers with visual servoing are practically nonexistent. This is a direct consequence of the lack of adequate experimental platforms, as well as of the difficulty of obtaining an accurate dynamic model. The simulation of a particular control algorithm can be very useful in the initial stages of design; however, simulation results are frequently incomplete because practical aspects are not taken into account, so simulation has limited value. For example, in most simulations of robot manipulator controllers, sensor noise, friction phenomena and the dynamics of the manipulator's actuators are neglected. Validating a control algorithm experimentally provides the means to ensure its success in real-world applications. Thus, a proper testing system is a critical step towards validating new and existing control algorithms. It should be noted that it is easier to obtain simulation results than experimental results. The first explicit solution to the issue of positioning control by visual servoing was contributed by Miyazaki and Masutani in 1990, by modeling the vision system as a rotation matrix, considering the design philosophy based on the so-called transposed Jacobian controller proposed by Takegaki and Arimoto in 1981. More recently, a more realistic model of the vision system incorporates a perspective projection based on the geometrical optics of the lenses.

These developments are incipient, though. No controllers exist in the state of the art for the motion of a direct drive robot with visual servoing in a fixed-camera configuration that are based on Lyapunov stability analysis, as described in greater detail below.

SUMMARY OF THE INVENTION

One example of an object of the present invention is to provide a system for controlling the motion of a direct drive robot via controllers with visual servoing for a fixed-camera configuration.

Another example of an object of the invention is to model the fixed-camera configuration for the direct drive planar robot manipulator and CCD camera-type vision system.

Another example of an object of the invention is to propose new control algorithms using visual servoing, which can be implemented, configured and communicatively coupled to computing subsystems.

Another embodiment of the present invention provides a position-based visual servoing control scheme that uses the robot's full dynamics and the vision model to show the asymptotic stability of the equilibrium point of the closed-loop equation. Inverse kinematics is used to obtain the desired joint angles, as well as the current joint angles, from the calculated centroid positions.

The above objects are achieved by providing a system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration, comprising: a three-joint robot manipulator arm; a fixed web camera, whose panoramic view completely covers the working area of the robot manipulator arm, to locate the end-effector and the objective; and a microprocessor coupled to the three-joint robot manipulator arm and the web camera; wherein the microprocessor is configured to: perform the visual servoing based on three marked reference images, each image corresponding to the positioning of one of the three joints of the robot; transmit the visual servoing information pertaining to the positioning images of each of the three joints of the robot arm output by the fixed web camera; store the visual servoing information output by the fixed web camera; and calculate the coordinates of the joints based on the centroid of each marked reference image. Other features and advantages will become apparent from the following detailed description, taken together with the attached drawings, which illustrate by way of example the characteristics of various embodiments.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be completely understood from the detailed description given below and the attached drawings, which are given only by way of illustration and example and therefore do not limit the aspects of the present invention. In the drawings, identical reference numbers identify similar elements or actions. The sizes and relative positions of the elements in the drawings are not necessarily drawn to scale. For example, the shapes of the various elements and angles are not drawn to scale, and some of these elements are enlarged and located arbitrarily to improve the understanding of the drawing. In addition, the particular shapes of the elements as drawn are not intended to convey any information concerning the real shape of the particular elements and have been selected only to facilitate their recognition in the drawings, wherein:

Figure 1 shows a block diagram of the experimental platform in accordance with the present invention.

Figure 2 shows a scheme of the fixed-camera configuration in accordance with the present invention.

Figure 3 shows a block diagram of the image acquisition and processing in accordance with the present invention.

Figure 4 shows a block diagram of the program in Simulink, in accordance with the present invention.

Figure 5 shows a display of the centroids in accordance with the present invention.

Figure 6 shows the robot manipulator and the vision system in accordance with the present invention.

Figure 7 graphically shows the visual joint errors with a tanh controller in accordance with the present invention.

Figure 8 graphically shows the torque applied with the tanh controller in accordance with the present invention.

Figure 9 graphically shows the visual joint errors with an arctan controller in accordance with the present invention.

Figure 10 graphically shows the torque applied with the arctan controller in accordance with the present invention.

DETAILED DESCRIPTION OF THE INVENTION

Several aspects of the present invention are described in more detail below, with reference to the attached drawings (figures, diagrams and graphs), in which variations and aspects of the present invention are shown. Aspects of the present invention may, however, be realized in many different forms and should not be construed as limited to the variations set forth herein; rather, the variations are provided so that this description is thorough in its illustrative embodiments and fully conveys their scope to those skilled in the art.

Unless otherwise defined, all the technical and scientific terms used in this document have the same meaning as generally understood by a person skilled in the art to which aspects of the present invention belong. The apparatuses, systems and examples provided in this document are for illustrative purposes only and are not intended to be limiting.

To the extent that mathematical models are capable of reproducing the magnitudes reported in experiments, they can be used for modeling various natural processes. The present invention is therefore considered a model for controlling the motion of a direct drive robot via controllers with visual servoing for a fixed-camera configuration.

Figure 1 shows the experimental platform implemented in the present invention, which comprises the following parts: a) the first part relates to the description of the experimental platform for experimental purposes in an open architecture, i.e., the hardware or physical components; and b) the second part deals with the dynamic model of the experimental manipulator and the vision model focused on positioning control in joint coordinates, i.e., the software implemented, configured and communicatively coupled to computer subsystems.

1. Robotic system model

1.1 Robot Manipulator.

The robotic system described in the present invention comprises a direct drive robot with three degrees of freedom and a CCD camera placed in front of the working area of the robot in the fixed-camera configuration.

For purposes of the present invention, robot shall mean a direct drive mechanical manipulator, which is reprogrammable and constituted by a serial sequence of rigid links connected together by rotary joints, and wherein the device used to equip the robot with input and output capabilities is the MFIO-3A motion control board by the manufacturer Precision MicroDynamics.

1.2 Robot Dynamics.

In mechanical, electrical, thermal and mechatronic systems whose dynamic behavior is described mathematically, the Euler-Lagrange equations are commonly used; such systems are referred to as Euler-Lagrange systems. They are characterized by physical quantities inherent to the system, such as kinetic energy, potential energy, friction forces and external applied forces. One widely used subclass of Euler-Lagrange systems comprises those whose kinetic energy can be expressed mathematically as a quadratic form and whose potential energy depends only on position. Robot manipulators belong to this subclass of Euler-Lagrange systems.

Since the dynamic model of a robot manipulator depends on its geometry, as well as on the type of joints used (rotary or linear), such a model may be useful in the manipulator's mechanical design stage. The model, which reveals the dynamic behavior of the manipulator, is the basis for the design of model-based controllers, since for purposes of scientific research in the development of new control algorithms for robot manipulators, the dynamic model has properties that are very helpful when analyzing stability and robustness. As in other problems of automatic control, in the case of robot manipulators the concept of stability is essential. The dynamic model of a robot manipulator, state-space theory and Lyapunov methods provide adequate means to design new model-based control laws with stability and robustness. The dynamic model of a robot manipulator plays an important role in simulating motion, analyzing the manipulator's structure and designing control algorithms. Furthermore, it can be shown that as the number of degrees of freedom of a manipulator increases, the complexity of using these equations increases. The dynamic equation of a robot with n degrees of freedom is based on the Euler-Lagrange methodology and is given by:

$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau \qquad (1)$$

where $q$ is the $n \times 1$ vector of joint positions, $\dot{q}$ is the $n \times 1$ vector of joint velocities, $\tau$ is the $n \times 1$ vector of applied torques, $M(q)$ is the $n \times n$ symmetric positive definite inertia matrix of the manipulator, $C(q,\dot{q})$ is the $n \times n$ matrix of centrifugal and Coriolis forces, and $g(q)$ is the $n \times 1$ vector of gravitational torques. It is assumed that the robot links are joined together with revolute joints. Although the equation of motion (1) is complex, it has several fundamental properties which can be exploited to facilitate the control system design. For the new control scheme, the following important property is used:

Property 1: The matrix $C(q,\dot{q})$ and the time derivative $\dot{M}(q)$ of the inertia matrix satisfy [12]:

$$\dot{q}^T \left[ \tfrac{1}{2}\dot{M}(q) - C(q,\dot{q}) \right] \dot{q} = 0 \quad \forall\, q, \dot{q} \in \mathbb{R}^n \qquad (2)$$
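
By way of illustration only, and not as part of the claimed system, the dynamics (1) and Property 1 can be exercised numerically for a two-link planar arm. A minimal sketch follows, assuming hypothetical link masses, lengths and gravity constant (placeholders, not the parameters of the experimental robot described later); the time derivative of the inertia matrix is approximated by a finite difference.

```python
import numpy as np

# Hypothetical parameters of a two-link planar arm (placeholders only).
m1, m2 = 5.0, 3.0      # link masses [kg]
l1, l2 = 0.35, 0.35    # link lengths [m]
g0 = 9.81              # gravitational acceleration [m/s^2]

def M(q):
    """Symmetric positive definite inertia matrix M(q) of equation (1)."""
    c2 = np.cos(q[1])
    m11 = (m1 + m2) * l1**2 + m2 * l2**2 + 2 * m2 * l1 * l2 * c2
    m12 = m2 * l2**2 + m2 * l1 * l2 * c2
    return np.array([[m11, m12], [m12, m2 * l2**2]])

def C(q, qd):
    """Centrifugal and Coriolis matrix C(q, q') in Christoffel form."""
    h = -m2 * l1 * l2 * np.sin(q[1])
    return np.array([[h * qd[1], h * (qd[0] + qd[1])],
                     [-h * qd[0], 0.0]])

def g(q):
    """Gravitational torque vector g(q), angles measured from the vertical."""
    return g0 * np.array([
        (m1 + m2) * l1 * np.sin(q[0]) + m2 * l2 * np.sin(q[0] + q[1]),
        m2 * l2 * np.sin(q[0] + q[1])])

def forward_dynamics(q, qd, tau):
    """Solve equation (1) for the accelerations: M(q)^-1 (tau - C qd - g)."""
    return np.linalg.solve(M(q), tau - C(q, qd) @ qd - g(q))

# Numerical check of Property 1: qd^T (1/2 Mdot - C) qd = 0.
q, qd = np.array([0.4, -0.7]), np.array([0.3, 0.9])
eps = 1e-6
Mdot = (M(q + eps * qd) - M(q)) / eps       # Mdot along the velocity qd
print(qd @ (0.5 * Mdot - C(q, qd)) @ qd)    # prints a value close to zero
```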

1.3 Direct kinematic model.

Direct kinematics is a vector function $f: \mathbb{R}^n \to \mathbb{R}^m$ that relates the joint coordinates $q$ to Cartesian coordinates, where $n$ is the number of degrees of freedom and $m$ represents the dimension of the Cartesian coordinate frame. The position $x_R \in \mathbb{R}^3$ of the robot end-effector with respect to the robot coordinate frame is given in terms of the joint positions by $x_R = f(q)$.
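
As a hedged illustration of the direct kinematic model $x_R = f(q)$, the following sketch evaluates the end-effector position of a planar two-link arm; the link lengths and the angle convention (measured from the vertical, consistent with the gravity vector reported later) are assumptions for illustration, not the calibrated values of the experimental robot.

```python
import numpy as np

def forward_kinematics(q, l1=0.35, l2=0.35):
    """Direct kinematics x_R = f(q) of a planar two-link arm.

    q: joint angles [q1, q2] in radians, measured from the vertical;
    l1, l2: link lengths in meters (placeholder values). Returns the
    Cartesian position of the end-effector in the robot frame.
    """
    x = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    y = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    return np.array([x, y])

print(forward_kinematics(np.array([0.0, np.pi / 2])))
```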

2. Vision system.

The present invention is directed to the control of planar robot manipulators using visual information provided by a webcam. The camera technology most widely used is the CCD (charge-coupled device). These cameras are based on a single silicon chip built using standard lithographic processes; sensors with 4096 x 4096 image elements (pixels) of five to six microns have been built.

The goal of a vision system is to create a model of the real world from images. A vision system recovers useful information about a scene from its two-dimensional projections. Since images are two-dimensional projections of the three-dimensional world [32], the information not directly available must be recovered via mapping. To recover such information, knowledge about the objects in the scene and their geometric projection [33] is required. The subject of sensors comprises the image capture devices, namely the camera and the lens. This recovery requires a large amount of data on a plane; see Figure 2.

2.1 Centroid-based servoing

The position of an object along an image sequence can be given by the centroid of said object, which is obtained by a cluster analysis that depends on prior segmentation and recognition operations. For centroid-based object tracking to be acceptable, other general procedures must be performed beforehand, such as thresholding, segmentation and recognition. While these contribute to improving the definition of the objects, and therefore their tracking, they also substantially increase the computational cost. Because of this, the process turns out to be too heavy for a general-purpose processor to carry out, so these tasks are often performed on components specially created for this purpose. This centroid-based tracking method requires previous stages to be carried out, commonly segmentation stages; once the segmentation is performed, the centroid of the region representing the object to be followed is calculated by determining its moments over the thresholded region of the image, given by the rows and columns of the region. With this result available, it is possible to determine the centroid coordinates.
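
The centroid computation described above can be sketched directly from the image moments of a segmented region; a minimal example follows, assuming the thresholding and segmentation stages have already produced a binary image (the synthetic test region below is purely illustrative).

```python
import numpy as np

def centroid(binary):
    """Centroid of a segmented region from its first-order moments.

    binary: 2-D array of 0/1 values after thresholding/segmentation.
    The zeroth moment m00 is the region area in pixels; the centroid
    is (m10/m00, m01/m00).
    """
    rows, cols = np.nonzero(binary)
    if rows.size == 0:
        raise ValueError("empty region")
    u = cols.mean()   # m10 / m00, horizontal image coordinate
    v = rows.mean()   # m01 / m00, vertical image coordinate
    return u, v

# Synthetic example: a 3x3 block of ones centered at column 4, row 3.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:5, 3:6] = 1
print(centroid(img))   # -> (4.0, 3.0)
```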

2.2 Vision model.

The present invention employs a Cartesian coordinate system $\Sigma_R = \{R_1, R_2, R_3\}$ placed at the base of the robot, where axes $R_1$ and $R_2$ span the working plane of the robot. Another Cartesian coordinate system, $\Sigma_E = \{E_1, E_2, E_3\}$, is placed at the robot end-effector, the origin of which is determined by the direct kinematics $x_R$. The CCD camera has an associated reference system, denoted by $\Sigma_C = \{C_1, C_2, C_3\}$, whose origin is located at the intersection of the optical axis and the center of the lens. The coordinates of a point with respect to this reference system are expressed as $x_C$. The relative location between the robot reference system $\Sigma_R$ and the camera system $\Sigma_C$ is represented by the vector $O_C = [o_{C_1}\ o_{C_2}\ o_{C_3}]^T$.

The target has a point in the Cartesian system $\Sigma_T = \{T_1, T_2, T_3\}$, whose origin is referenced to its geometric center. The position of the frame of the object with respect to $\Sigma_R$ is denoted by $o_T = [o_{T_1}\ o_{T_2}\ o_{T_3}]^T$. The scene acquired by the camera is projected onto the CCD, which has an associated reference system described by $\Sigma_I = \{I_1, I_2\}$, whose origin is found at the geometric center of the CCD. Axes $I_1$ and $I_2$ are parallel to, and point in the same direction as, axes $C_1$ and $C_2$, respectively. To obtain the coordinates of the image on the CCD plane, a perspective transformation is required. Considering that the camera has a perfectly focused optical system, and is therefore free of optical aberrations, the optical axis intersects the geometric center of the CCD plane.

Finally, the image of the scene on the CCD is digitized and transferred to the computer screen, where a new two-dimensional coordinate system $\Sigma_D = \{u, v\}$ is defined, whose origin is located at the upper left corner of the monitor. The complete vision system, for the fixed-camera configuration, therefore expresses the coordinates of the image in pixels, and the vision system model is described as follows:

$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \alpha_u & 0 \\ 0 & \alpha_v \end{bmatrix} \frac{\lambda}{\lambda - o_{C_3}}\, R(\theta) \begin{bmatrix} x_{R_1} - o_{C_1} \\ x_{R_2} - o_{C_2} \end{bmatrix} + \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} \qquad (3)$$

$$R(\theta) = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \qquad (4)$$

where $\alpha_u > 0$ and $\alpha_v > 0$ are the scale factors in pixels/m, $\lambda > 0$ is the focal length of the camera, $\theta$ is the rotation angle of the camera about its optical axis, and $[u_0\ v_0]^T$ is the image center in pixels.
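
Assuming the projection model (3)-(4) as reconstructed above, the mapping from a workspace point to display pixels can be sketched as follows; the focal length, scale factors and image center used here are illustrative assumptions, not calibration data from the patent (only the camera position vector is taken from the experimental description below).

```python
import numpy as np

def camera_model(xr, oc, theta=0.0, lam=0.008,
                 alpha_u=80000.0, alpha_v=80000.0, u0=160.0, v0=120.0):
    """Pixel coordinates of a workspace point under model (3)-(4).

    xr: point [x1, x2] in the robot frame, in meters; oc: camera position
    [oc1, oc2, oc3] relative to the robot frame; lam: focal length [m];
    alpha_u, alpha_v: scale factors [pixels/m]; (u0, v0): image center.
    All numeric defaults are illustrative assumptions.
    """
    R = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])   # rotation (4)
    scale = lam / (lam - oc[2])                       # perspective factor
    p = np.diag([alpha_u, alpha_v]) @ (scale * (R @ (xr[:2] - oc[:2])))
    return p + np.array([u0, v0])

oc = np.array([-0.5, -0.6, -1.0])   # camera location reported in the example
print(camera_model(np.array([0.2, 0.3]), oc))
```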

3. Positioning control.

The motion control objective is to determine the torques of the motors so that the error vector of joint positions tends to zero, as does the velocity error. This means that the joints of the robot manipulator asymptotically follow the desired motion trajectory.

3.1 Visual servoing scheme based on positioning and stability for the fixed-camera configuration.

In this stage, the stability analysis for positioning-based visual servoing is described. The robot task is specified in the image plane in terms of values of the main image features that correspond to the robot and to the positions of the object. It is assumed that the target resides in the plane $R_1$-$R_2$ shown in Figure 2. Its description with respect to the reference frame $\Sigma_D$ of the computer image (screen) is $[u_d\ v_d]^T$ and is denominated the desired image feature vector. The desired joint displacement $q_d$ is calculated from the inverse kinematics as a function of $[u_d\ v_d]^T$.

The problem of control by means of visual servoing for the fixed-camera configuration consists of designing a control law to calculate the applied torques $\tau$ in such a way that the image features $[u\ v]^T$, produced by the coordinates of the centroid of a black disk placed at the robot's end-effector, reach the desired point or target in the image plane $[u_d\ v_d]^T$. The image error is defined as

$$\begin{bmatrix} \tilde{u} \\ \tilde{v} \end{bmatrix} = \begin{bmatrix} u_d - u \\ v_d - v \end{bmatrix} \qquad (5)$$

therefore, the control objective is to guarantee that $[\tilde{u}(t)\ \tilde{v}(t)]^T \to 0$ as $t \to \infty$, at least for initial conditions that are sufficiently small. That is to say, to ensure that the error of the image features, the difference between the current position of the end-effector expressed in pixels $[u\ v]^T$ and the desired position, also in pixels, $[u_d\ v_d]^T$, tends to zero as time advances.

To solve the problem of control with servoing, the following control scheme with gravity compensation is presented:

$$\tau = \nabla \mathcal{U}_a(K_p, \tilde{q}) - f_v(K_v, \dot{q}) + g(q) \qquad (6)$$

where $\tilde{q} = q_d - q$ is the error vector of the joint positions, $q_d$ is the vector of the desired joint positions, $K_p \in \mathbb{R}^{n \times n}$ is a diagonal matrix known as the proportional gain, $K_v \in \mathbb{R}^{n \times n}$ is a positive definite matrix known as the derivative gain, $\mathcal{U}_a(K_p, \tilde{q})$ represents the artificial potential energy and is a positive definite function, and $f_v(K_v, \dot{q})$ denotes a dissipation function, i.e.

$$\dot{q}^T f_v(K_v, \dot{q}) > 0 \quad \forall\, \dot{q} \neq 0.$$

Proposition: The present invention considers the dynamic robot model (1) together with the control law (6); then the closed-loop system is locally asymptotically stable and the targeted visual positioning is achieved.

Proof: The closed-loop equation of the system of the present invention is obtained by combining the dynamic robot model (1) and the control scheme (6), and can be written as:

$$\frac{d}{dt}\begin{bmatrix} \tilde{q} \\ \dot{q} \end{bmatrix} = \begin{bmatrix} -\dot{q} \\ M(q)^{-1}\left[ \nabla \mathcal{U}_a(K_p, \tilde{q}) - f_v(K_v, \dot{q}) - C(q,\dot{q})\dot{q} \right] \end{bmatrix} \qquad (7)$$

which is an autonomous differential equation, and the origin of the state space is an equilibrium point. To carry out the stability analysis of equation (7), the following Lyapunov function candidate is proposed:

$$V(\tilde{q}, \dot{q}) = \tfrac{1}{2}\,\dot{q}^T M(q)\,\dot{q} + \mathcal{U}_a(K_p, \tilde{q}) \qquad (8)$$

The first term of $V(\tilde{q}, \dot{q})$ is a positive definite function of $\dot{q}$ because $M(q)$ is a positive definite matrix. The second term of the Lyapunov function candidate (8) may be interpreted as an artificial potential energy induced by the control law; it is also a positive definite function of the position error $\tilde{q}$, because $K_p$ is a positive definite matrix. Consequently, $V(\tilde{q}, \dot{q})$ is positive definite and radially unbounded. The time derivative of the Lyapunov function candidate (8) along the trajectories of the closed-loop equation (7), using Property 1, can be written as:

1

q J (q)q - -q J (q)q - V ¾ > a (k p j q) 1 q q Vi a (k p , q) - q J f v (k v , q)-C(q, q)q

1 V7„ . / I ,

which is a negative semi-defined function and consequently it is possible to conclude stability of the equilibrium point. To demonstrate the local asymptotic stability, the autonomous nature of the closed-loop equation (7) is employed to apply the invariance principle of LaSalle's [14] in the Ω region: qi

G l 2 " : F(q, q) = 0

q

[ q= 0 G " , q = 0 e n : (q, q) = 0 J (10) since F(¾, q) < 0 € Ω, V(q(i), q(t)) is a decreasing function of i. l'(q, q) is continuous in fixed compact Ω, is limited underneath in Ω, for example, it satisfies 0 < q(t)) < ¾0), q(0)). Consequently, the normal solution is the only solution of the closed-loop system (7) restricted to Ω, and therefore it is concluded that the origin of the state-space is locally stable asymptotically.

3.2 Specific case of the controller.

The purpose of this sub-phase is to take advantage of the methodology described above to derive the new regulators. The following control scheme with gravity compensation is presented:

$$\tau = K_p \tanh(\tilde{q}) - K_v \tanh(\dot{q}) + g(q) \qquad (11)$$

Proof: The closed-loop equation of the system of the present invention is obtained by combining the dynamic robot model (1) and the control scheme (11), and can be written as:

$$\frac{d}{dt}\begin{bmatrix} \tilde{q} \\ \dot{q} \end{bmatrix} = \begin{bmatrix} -\dot{q} \\ M(q)^{-1}\left[ K_p \tanh(\tilde{q}) - K_v \tanh(\dot{q}) - C(q,\dot{q})\dot{q} \right] \end{bmatrix} \qquad (12)$$

which is an autonomous differential equation, and the origin of the state space is an equilibrium point. To carry out the stability analysis of equation (12), the following Lyapunov function candidate is proposed:

$$V(\tilde{q}, \dot{q}) = \tfrac{1}{2}\,\dot{q}^T M(q)\,\dot{q} + \sum_{i=1}^{n} k_{p_i} \ln\left(\cosh \tilde{q}_i\right) \qquad (13)$$

The first term of $V(\tilde{q}, \dot{q})$ is a positive definite function of $\dot{q}$ because $M(q)$ is a positive definite matrix. The second term of the Lyapunov function candidate (13) is interpreted as an artificial potential energy induced by the control law; it is also a positive definite function of the position error $\tilde{q}$, because $K_p$ is a positive definite matrix. Consequently, $V(\tilde{q}, \dot{q})$ is positive definite and radially unbounded. The time derivative of the Lyapunov function candidate (13) along the trajectories of the closed-loop equation (12), using Property 1, can be written as:

$$\dot{V}(\tilde{q}, \dot{q}) = \dot{q}^T M(q)\ddot{q} + \tfrac{1}{2}\,\dot{q}^T \dot{M}(q)\dot{q} - \dot{q}^T K_p \tanh(\tilde{q}) = -\dot{q}^T K_v \tanh(\dot{q}) \le 0 \qquad (14)$$
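
A minimal sketch of the regulator (11) follows, assuming the gain values reported in the experimental section and a gravity function supplied externally (for instance, equation (16) for the test robot); the arctan variant presented next only swaps the bounded shaping function.

```python
import numpy as np

def tanh_regulator(q, q_des, qdot, Kp, Kv, gravity):
    """Control law (11): tau = Kp tanh(q~) - Kv tanh(q') + g(q).

    q, qdot: measured joint positions and velocities; q_des: desired
    joint positions obtained from the image centroids via inverse
    kinematics; Kp, Kv: positive definite gain matrices; gravity:
    callable returning g(q). The tanh shaping keeps torques bounded.
    """
    q_err = q_des - q                       # joint position error q~
    return Kp @ np.tanh(q_err) - Kv @ np.tanh(qdot) + gravity(q)

# Illustrative call with the gains of the experiments and a zero-gravity stub.
Kp = np.diag([26.0, 1.8])
Kv = np.diag([12.0, 1.2])
tau = tanh_regulator(np.array([0.1, -0.2]), np.array([0.5, 0.3]),
                     np.zeros(2), Kp, Kv, lambda q: np.zeros(2))
print(tau)
```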

A second example of a regulator for the purposes of this methodology according to the present invention is:

$$\tau = K_p \arctan(\tilde{q}) - K_v \arctan(\dot{q}) + g(q) \qquad (15)$$

4. Example: Implementation and experimental evaluation according to one embodiment of the present invention.

A new computer-implemented group of vision algorithms was designed and implemented, whose particular objective was to extract the visual information necessary for the vision model. This information consists of the spatial position of the centroids, that is to say, the location of the visual marks, as well as their size, in order to correlate these black circles with the robot joints (base, elbow and end-effector).

Figure 3 shows a block diagram of the steps into which the process has been divided for the vision system stage.

It is important to point out that two forms of programming were used in Matlab to perform the acquisition and processing of images: block diagrams in the working environment called Simulink, and customized blocks called S-functions. These tools use device adapters to connect different image acquisition equipment to their controllers; in addition, they include an adapter for generic Windows video devices.

Figure 4 shows a block diagram of the program sequence in Simulink, in accordance with the present invention.

The webcam is connected to the computer via the USB port, from which the images obtained from the video are acquired and characterized through the Simulink module in order to perform the detection and characterization of the images sent by the camera. Acquisition of the color images takes place in the 8-bit RGB format with a 320 x 240 pixel resolution; the color images are transformed into gray-scale tones, binarization of the gray tones of the images takes place, identification of the centroid is performed to determine the coordinates of the end-effector, and these coordinates are kept in a file that is transferred to the computer that controls the robot motors. The algorithm employed for the development of this program comprises the following steps (a sketch of these steps in code follows the list):

1. Acquisition of color images (video input).

2. Transformation of the color images to gray tones (Color Space Conversion).

3. Binarization of the gray-tone images (Vision Operating).

4. Identification of the centroid of the circles in each image (Vision Operating).

5. Calculation of joint positions (Vision Operating).

6. Sending the information to a file (Vision Operating).

7. Displaying angle information of the joints (Display).

8. Visualizing the image of the three centroids (Video Out Robot).
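
The patent implements these steps with Simulink blocks and S-functions; purely as a hedged illustration, the core image-processing steps (2-4) can be mirrored in plain code as follows, with the I/O steps (1 and 6-8) stubbed by a synthetic 320 x 240 frame. The gray-scale weights, threshold and disk position are assumptions for the example.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Step 2: convert an RGB frame (H x W x 3, uint8) to gray tones."""
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def binarize(gray, threshold=128):
    """Step 3: binarize; the marks are dark disks, so keep pixels below threshold."""
    return (gray < threshold).astype(np.uint8)

def dark_centroid(binary):
    """Step 4 (simplified): centroid of the dark pixels of the frame.

    A full implementation would label the three disks separately (e.g. by
    connected components) and sort them by area to match the shoulder,
    elbow and end-effector; that labeling step is omitted here.
    """
    rows, cols = np.nonzero(binary)
    return cols.mean(), rows.mean()

# Synthetic 320 x 240 frame with a single dark disk at (u, v) = (198, 107).
frame = np.full((240, 320, 3), 255, dtype=np.uint8)
yy, xx = np.mgrid[0:240, 0:320]
frame[(xx - 198) ** 2 + (yy - 107) ** 2 < 15 ** 2] = 20
u, v = dark_centroid(binarize(rgb_to_gray(frame)))
print(round(u), round(v))   # -> 198 107
```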

Figure 5 shows the information obtained and kept in the file that is transferred to the computer that controls the robot motors.

The transfer of information from the vision stage to the control stage takes place via a parallel port. Once the data from the robot are obtained, these numeric values are converted from a real floating-point format to a 32-bit format grouped in 8 levels; through interrupts this information is transmitted and received at the control stage, together with a process to apply it directly to the control law. This program was made with the Visual C++ 6.0 compiler. This control scheme works with sampling periods of 41 ms for the vision system and 2.5 ms for the control stage. Figure 5 shows some windows of the proposed visual control system, wherein said windows are displayed through a visual device controlled by a computer according to the present invention.
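
The exact byte grouping used by the platform is not specified; the following sketch shows one plausible reading, in which each floating-point joint value is packed into a 32-bit representation and split into byte-sized groups for a parallel-port transfer, then reassembled at the control stage. The IEEE-754 single-precision encoding is an assumption.

```python
import struct

def pack_joint_value(x):
    """Split a floating-point joint coordinate into byte-sized groups.

    Packs the value into 32 bits (IEEE-754 single precision,
    little-endian, an assumed encoding) and returns the bytes to send.
    """
    return list(struct.pack("<f", x))

def unpack_joint_value(groups):
    """Reassemble the transmitted bytes at the receiving (control) stage."""
    return struct.unpack("<f", bytes(groups))[0]

b = pack_joint_value(1.2345)
print(b, unpack_joint_value(b))   # the value round-trips through the bytes
```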

The experimental robot consists of a base and two aluminum 6061 links driven by direct drive servos by Parker Compumotor. The advantage of this type of direct drive actuator is that it reduces friction compared with other types of actuators. The motors used in the robot are listed in Table 1. The servos work in torque mode, so that the motors act as a torque source; they accept an analog voltage as a reference for the torque signal. The working area of the manipulator is a circle with a 0.7 m radius. In addition to the positioning sensors and the motor drivers, the robot also includes a motion control board manufactured by Precision MicroDynamics, which is used to obtain the positions of the joints. The control algorithms run on a central computer with a Pentium microprocessor.

Table 1 Servo actuators of the experimental robot

With reference to the direct drive robot, the gravitational torque vector required to implement the new control scheme (6) is available in [8]:

$$g(q) = \begin{bmatrix} 38.46\,\sin(q_1) + 1.82\,\sin(q_1 + q_2) \\ 1.82\,\sin(q_1 + q_2) \end{bmatrix} \ \text{[Nm]} \qquad (16)$$
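
Equation (16) translates directly into code; the coefficients are those reported for the experimental platform, and the sketch below simply evaluates the vector for given joint angles.

```python
import numpy as np

def gravity_vector(q):
    """Gravitational torque vector g(q) of equation (16), in Nm.

    q = [q1, q2] in radians; the coefficients 38.46 and 1.82 are the
    values reported for the experimental direct drive robot.
    """
    s12 = np.sin(q[0] + q[1])
    return np.array([38.46 * np.sin(q[0]) + 1.82 * s12, 1.82 * s12])

print(gravity_vector(np.array([np.pi / 2, 0.0])))   # -> [40.28, 1.82]
```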

4.1 Experimental results.

In this stage, the experimental results are presented, which were obtained from the proposed controllers on the planar robot with the fixed-camera configuration.

Three black disks are mounted on the joints: one large black disk for the shoulder, a medium-sized black disk for the elbow, and one small disk for the end-effector. The coordinates of the joints were calculated based on the centroid of each disk and by using the inverse kinematics, as shown in Figure 6, from which the following is obtained:

$$q_1 = \arctan\left(\frac{v}{u}\right) - \arctan\left(\frac{l_2 \sin q_2}{l_1 + l_2 \cos q_2}\right) \qquad (17)$$

$$q_2 = \arccos\left(\frac{u^2 + v^2 - l_1^2 - l_2^2}{2\, l_1 l_2}\right) \qquad (18)$$

where $l_1$ and $l_2$ represent the lengths of links 1 and 2, respectively, and $u$ and $v$ are the axes of the visual information. To find equations 19 and 20, the design shown in Figure 3 is used.
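
A sketch of the inverse kinematics matching the reconstructed equations (17)-(18) follows; it assumes the centroid coordinates have already been converted from pixels to meters in the robot frame via the vision model, and uses placeholder link lengths. The elbow-down branch of the solution is chosen arbitrarily.

```python
import numpy as np

def inverse_kinematics(u, v, l1=0.35, l2=0.35):
    """Joint angles (q1, q2) that place the end-effector at (u, v).

    Closed-form planar two-link solution corresponding to the
    reconstructed equations (17)-(18); u, v in meters in the robot
    frame, l1, l2 placeholder link lengths in meters.
    """
    c2 = (u**2 + v**2 - l1**2 - l2**2) / (2 * l1 * l2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))                     # eq. (18)
    q1 = np.arctan2(v, u) - np.arctan2(l2 * np.sin(q2),
                                       l1 + l2 * np.cos(q2))   # eq. (17)
    return q1, q2

print(inverse_kinematics(0.4, 0.3))
```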

The centroids of each disk were selected as the characteristic points of the object. For all the controllers, the desired position in the image plane was selected as $[u_d\ v_d]^T = [198\ 107]^T$ [pixels] and the initial position as $[u(0)\ v(0)]^T = [50\ 210]^T$ [pixels], that is, $[q_1(0)\ q_2(0)]^T = [0\ 0]^T$ [degrees] and $\dot{q}(0) = 0$ [degrees/sec]. The evaluated controllers were written in C language. The sampling time was 2.5 ms for the control loop, while the visual servoing ran at 43 ms. The CCD camera was placed in front of the robot, and its position with respect to the robot frame $\Sigma_R$ was $O_C = [o_{C_1}\ o_{C_2}\ o_{C_3}]^T = [-0.5\ -0.6\ -1]^T$ meters, with rotation angle $\theta = 0$ degrees. MATLAB ver. 2007a was used along with the SIMULINK module to process the image. The video signal of the CCD camera has a resolution of 320x240 pixels in RGB format.

Figures 7 and 8 graphically show the experimental results obtained with the controller (11); the proportional and derivative gains were selected as $K_p$ = diag{26.0, 1.8} [N] and $K_v$ = diag{12.0, 1.2} [Nm], respectively, with $u_d$ = 198 and $v_d$ = 107. The transitory response is fast, at around 3 sec. The positioning error is small and tends to zero asymptotically.


Finally, Figures 9 and 10 show the experimental results obtained with the controller (15); the proportional and derivative gains were selected as $K_p$ = diag{17.3, 1.2} [N] and $K_v$ = diag{6.6, 1.2} [Nm], respectively, with $u_d$ = 198 and $v_d$ = 107. The transitory response is fast, at around 1 sec. The positioning error is small and tends to zero asymptotically.

Although the invention has been described with reference to diverse aspects of the present invention and to examples regarding a system to control the motion of a direct drive robot through controllers with visual servoing for a fixed-camera configuration, its incorporation or use with any suitable system and/or mechanical device is within the scope and spirit of the invention. Therefore, it must be understood that numerous and varied modifications can be made without departing from the spirit of the invention.