

Title:
SYSTEM FOR CONVEYOR BELT AUTOMATION WITH VISUAL SERVOING
Document Type and Number:
WIPO Patent Application WO/2016/193785
Kind Code:
A1
Abstract:
The present invention discloses a system for analyzing, transporting, detecting and automatically monitoring the position of an object that is being transported through a conveyor belt that interacts with robot manipulators with visual servoing, wherein it comprises: a three-joint robot manipulator arm; a conveyor belt; a fixed webcam; and a microcontroller attached to the three-joint robot manipulator, to the conveyor belt and to the web camera; wherein the microcontroller is configured to: identify an object on the static conveyor belt; detect, through visual servoing based on reference images, a specific position generated by the motion of an object, which is displaced by the conveyor belt; and determine a working area of the robot manipulator arm for supervision of an object.

Inventors:
CID-MONJARAZ JAIME JULIÁN (MX)
REYES-CORTÉS JOSÉ FERNANDO (MX)
FÉLIX-BELTRÁN OLGA GUADALUPE (MX)
Application Number:
PCT/IB2015/054109
Publication Date:
December 08, 2016
Filing Date:
May 30, 2015
Assignee:
BENEMÉRITA UNIV AUTÓNOMA DE PUEBLA (MX)
International Classes:
B25J9/16; G01B11/00
Foreign References:
US 5506682 A (1996-04-09)
US 2001/0050207 A1 (2001-12-13)
US 2003/0168317 A1 (2003-09-11)
EP 1432555 A2 (2004-06-30)
Other References:
ELIZONDO AGUILERA, I.: "Automatización de una Banda Transportadora con Retroalimentación Visual" (Automation of a Conveyor Belt with Visual Feedback), PhD Thesis, December 2014 (2014-12-01), Retrieved from the Internet
ELIZONDO AGUILERA, I. ET AL.: "Controlled robotic cell using visual servoing", 2014 INTERNATIONAL CARIBBEAN CONFERENCE ON DEVICES, CIRCUITS AND SYSTEMS (ICCDCS), 2 April 2014 (2014-04-02), pages 1-6, XP055332196, DOI: 10.1109/ICCDCS.2014.7016175
Attorney, Agent or Firm:
VON WOBESER HOEPFNER, Claus Werner (Colonia Santa Fe, México D.F., MX)
Claims:
CLAIMS

1. - A system to analyze, transport, detect and automatically supervise the position of an object that is transported through a conveyor belt that interacts with robot manipulators with visual servoing, which comprises:

a three-joint robot manipulator arm;

a conveyor belt;

a fixed WEB camera; and

a microcontroller coupled to the three-joint robot manipulator, to the conveyor belt and to the WEB camera;

wherein the microcontroller is configured to:

identify an object when the conveyor belt is static;

detect, through visual servoing based on reference images, a specific position generated by the motion of an object, which is displaced by the conveyor belt; and

determine a work area of the robot manipulator arm for the supervision of an object.

2. - The system to analyze, transport, detect and automatically supervise the position of an object that is transported through a conveyor belt that interacts with robot manipulators with visual servoing according to claim 1, wherein the three-joint robot manipulator is configured to interact directly with the identified object.

3. - The system to analyze, transport, detect and automatically supervise the position of an object that is transported through a conveyor belt that interacts with robot manipulators with visual servoing according to claim 1, wherein when the WEB camera detects motion, it displays an image by changing the color to identify the motion at the corresponding area.

4. - The system to analyze, transport, detect and automatically supervise the position of an object that is transported through a conveyor belt that interacts with robot manipulators with visual servoing according to claim 1, wherein detection of the performance of a specific activity with the identified object is by means of morphological dilation and erosion on the pixels of the image of such object.

5. - The system to analyze, transport, detect and automatically supervise the position of an object that is transported through a conveyor belt that interacts with robot manipulators with visual servoing according to claim 1, wherein the conveyor belt is configured to stop at a specified distance, which comprises the work area of the robot manipulator arm.

6.- A conveyor belt comprising cogwheels adapted to a motor to generate the movement of the chain of the conveyor belt, which comprises: a plurality of cogwheels designed with an inner diameter, a pitch diameter, an outer diameter, a tooth vernier and a tooth; a support box adapted to the conveyor belt to hold and attach a motor at an appropriate distance, so that the cogwheel meshes with the chain; and wherein the current draw corresponds to the weight transported by the chain of the conveyor belt.

7.- A conveyor belt comprising cogwheels adapted to a motor to generate the movement of the chain of the conveyor belt according to claim 6, wherein the inner diameter of the cogwheel corresponds to the shortest radius and the lower limit determines the vernier, while the other two limits are the tooth of the cogwheel.

8.- A conveyor belt comprising cogwheels adapted to a motor to generate the movement of the chain of the conveyor belt according to claim 6, wherein, for a weight range of 1.01 kg to 10.02 kg, a current draw of 1.40 A to 2.33 A corresponds respectively.

Description:
SYSTEM FOR CONVEYOR BELT AUTOMATION WITH VISUAL SERVOING

BACKGROUND

1. Technical Field of the invention.

The present invention relates to a system for automation of a chain conveyor (conveyor belt) whose interaction with anthropomorphic robot manipulators can be controlled through visual servoing provided by a conventional web camera.

2. Particulars of the Invention

Today, science and technology represent strategic areas in the progress of humankind. The competitive levels necessary to have an impact on international markets arise from each society's ability to produce technology, thereby improving the living conditions of its inhabitants in many cases.

With the evolution of technology and computing, control systems have also evolved. Today we can find automation in most of the mechatronic systems around us. Through the interaction of sensors and actuators with some control device, signals are evaluated and decisions are made by an algorithm to activate or deactivate the various actuators.

Automation is defined as the study of methods and procedures aimed at replacing the human operator with an artificial operator for performing a preset physical or mental task. Therefore, automation is the study and application of industrial process and production management control. Robotics is an area of research and development of various applications that is experiencing explosive growth driven by advances in computing, sensors, electronics and software. Robotics is divided into two main groups: robot manipulators and mobile robots. The first group has a significant presence in industry and its study is closely linked to the type of work it performs; the second group, mobile robots, is of greater interest in research.

For the proper use of robots, it is necessary to know the differences between them, which may depend on several factors such as their architecture, application, generation and degrees of freedom, among others. Robots have different shapes and sizes, and their classification may be based on their geometric configuration and movements; some robots can fall into more than one classification. Thus, the different classifications depend on the joints of the geometric configurations and on the workspace of the robot. Among the basic designs of industrial robots, the following configurations can be identified: Cartesian or rectangular, Cartesian gantry robot, SCARA robot, cylindrical configuration, polar or spherical configuration, articulated or revolute configuration, spiral configuration and pendulum configuration.

A general outline of the robot manipulator system comprises two basic parts: the mechanical structure and electronic instrumentation.

On the mechanical structure: Robot manipulators are essentially hinged arms; more precisely, an open kinematic chain formed by a set of links or chain elements interconnected by joints. The joints allow the relative motion between successive links. The movement is related to the number of degrees of freedom, which is determined by the number of joints of a manipulator. The types of motion a rigid body can undergo are pure translation, pure rotation and complex movement. The end-effector is a device that attaches to the wrist of the robot arm and enables the robot to perform a specific task. Most production machines require special devices and tools designed for a particular operation, and a robot is no exception. The end-effector is part of the special-purpose tooling of a robot. There is a wide range of end-effectors required to perform a variety of different job functions. These can be divided into two main categories: clamps and tools, the electromagnet being one of these. The workspace consists of all positions of space potentially accessible by the robot's end-effector; it is determined by the configuration type of each robot.

On the electronic instrumentation: This section describes the command and control devices that make up the electronic position and velocity control system of a robot arm with three degrees of freedom. The elements are: 1) a PC, 2) a multifunctional instrumentation board, 3) amplifiers, and 4) motors. A position encoder is a device that converts motion into digital pulses. It can give the relative position by means of a sequence of bits, or the absolute position through coded bits, for example Gray code. It may have a linear or rotary configuration, but the rotary one is the most common.
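As an aside (illustrative only, not part of the claimed system), absolute encoders often deliver their reading in Gray code so that only one bit changes between adjacent positions; a minimal MATLAB sketch of converting such a reading back to an ordinary binary count is:

% Minimal sketch: converting an absolute-encoder Gray-code word into a
% binary position count (example 4-bit word; values are illustrative).
gray = uint16(bin2dec('1101'));     % Gray-code word read from the encoder
pos  = gray;
shift = bitshift(pos, -1);
while shift > 0                     % successive XORs recover the binary value
    pos   = bitxor(pos, shift);
    shift = bitshift(shift, -1);
end
fprintf('Gray %s -> binary position %d\n', dec2bin(gray, 4), pos);  % 1101 -> 9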

Visual servoing refers to closed-loop positioning control of a robot end-effector using visual feedback. It represents an attractive solution for positioning and moving autonomous robot manipulators evolving in unstructured environments. In the taxonomy of Weiss and Williams, two classes of vision-based robot control have been categorized: position-based visual servoing and image-based visual servoing. In the former, the main features are extracted from an image and the position of the target with respect to the camera is estimated; using these values, an error signal between the current and the desired position of the robot is defined in the workspace. In the latter, the error signal is defined directly in terms of image features used to control the robot end-effector. In both classes of methods, object feature points are mapped onto the camera image plane, and among these, a particularly useful image feature for robot control is the centroid. Regarding the configuration between camera and robot, either a fixed-camera or a camera-in-hand arrangement can be used. Fixed-camera robotic systems are characterized in that a vision system fixed in the world coordinate frame captures images of both the robot and its environment; the control objective of this approach is to move the robot end-effector in such a way that it reaches a desired target. In the camera-in-hand configuration, often called eye-in-hand, a camera is generally mounted on the robot end-effector and provides visual information of the environment; in this configuration, the control objective is to move the robot end-effector in such a way that the projection of the static target is always at a desired location in the image given by the camera. An important component of a robotic system is the acquisition, processing and interpretation of the information provided by the sensors. This information is used to derive signals for controlling the robot. Information about the system and its environment can be obtained through a variety of sensors such as position, speed, force, touch or vision sensors.
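By way of illustration only (this sketch is not taken from the patent), an image-based error signal built from the centroid feature can be computed in MATLAB along the following lines; the binary mask, the desired pixel location and the gain are assumed names and values:

% Illustrative sketch of an image-based visual servoing error using the
% centroid of a segmented target (all names and values are assumptions).
bw = false(480, 640);
bw(200:260, 300:360) = true;        % binary mask of the target in the current frame
pDes   = [320, 240];                % desired image location of the centroid [u, v] in pixels
lambda = 0.5;                       % proportional gain on the image-space error

stats = regionprops(bw, 'Centroid');
pCur  = stats(1).Centroid;          % current centroid of the target in the image
e     = pDes - pCur;                % image-space error
vImg  = lambda * e;                 % commanded image-plane correction (proportional law)
fprintf('error = [%.1f %.1f] px, command = [%.2f %.2f]\n', e, vImg);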

A conveyor belt is a continuous conveying system essentially consisting of a continuous belt that moves between two drums; it is driven by friction by one of the drums, which in turn is driven by a motor. The other drum usually rotates freely, without any drive, and its function is to return the belt. Sometimes the belt is supported by rollers between the two drums. The conveyor belt and rollers are auxiliary elements whose mission is to receive a product relatively continuously and take it to another point. They are stand-alone apparatuses, installed in process lines, and generally do not require an operator to manipulate them continuously. Many ways have been devised to transport materials, raw materials, minerals and various products, but among the most efficient of these is transportation through belts and rollers, since these elements are of great simplicity of operation and, once installed in normal conditions, usually give few mechanical and maintenance problems. Conveyor belts are devices for horizontal or sloped transportation of solid objects or bulk material, whose two main advantages are high speed and long distances (10 km). Various types of conveyor exist depending on mobility and/or position, for example rubber conveyors, roller conveyors, slipping strap roller conveyors, sprocket-and-roller conveyors, motorized roller conveyors, live roller conveyors, thermoplastic conveyors, modular conveyors, metal mesh conveyors, metal conveyors with roller drive, metal conveyors with sprocket drive, Teflon conveyors, tubular conveyors and so on. Conveyor belts have several important features that support their application in industry, described below. They work independently from the workers, i.e., they can be placed between machines or buildings and material placed at one end reaches the other end without human intervention. They provide an effective method for handling materials by means of which the materials are not easily misplaced. They can be used to set the pace of work along fixed routes, which makes them suitable for mass production or continuous-flow processes. Their main industrial applications are as follows. Mining: the conveyor works with its own roller bed, and these rollers require a minimum of care and maintenance. Conveyor belts can adapt to the ordinary nature of the terrain because they possess the ability to cross relatively inclined stretches (slopes and gradients of up to 18°, depending on the transported material). As better tensile strengths, synthetic materials and/or reinforced steel members are developed, the conveyor system may extend along kilometers of land with horizontal and vertical curves with no problem. They display negligible wear and tear under rough and harsh mining work. Conveyor belts are important in mining or excavations, where two or more digging operations may be directed to a central loading point; at the unloading end, the material may be sent in different directions from the main line and, at the same time, it can be unloaded at any point along the conveyor by additional machinery provided for this purpose. Construction: they provide easy and fast assembly, as the belt can be easily assembled and disassembled; a great ability to carry material over long distances; and fast driving of the material to the workplace safely and efficiently. Food industry: they streamline production because they run at a constant speed without interruption; they are hygienic, which means the product is not contaminated with bacteria, dirt or other factors that could alter it; and they can be installed indoors for greater product protection. Automotive or motor industry: the modular lines of the conveyor belts can be extended, shortened or relocated with minimum labor and time. They are unmatched as regards transport capacity: at a speed of 5 m/s a conveyor can discharge over 100 metric tons of raw material per minute, and its high efficiency reduces production costs.

The international patent application WO 2011/058529 A2 (AGUILERA ET AL.), published on May 19th, 2011, describes a method and system based on computer vision and pattern recognition techniques for detecting, measuring and analyzing processed fish meats automatically, especially salmon circulating through a conveyor belt, specifically real-time measuring of geometry and color of meat, as well as counting the number of processed pieces, for detecting surface defects such as bruises and classifying meats according to patterns of quality.

US 6,236,735 B2 (BJORNER ET AL.), issued on May 22nd, 2001, discloses a two-camera over-the-belt optical character recognition (OCR) system for reading the destination addresses on parcels carried on a conveyor belt. The parcels bear an orientation-defining fluorescent-ink fiduciary mark that is typically stamped on the parcel in the same area as the destination address. The fiduciary mark is located approximately in the center of the destination address block and oriented in the same direction as the underlying destination address. The first camera, a low-resolution CCD camera, captures an image of the fiduciary mark. A host computer stores an image of the fiduciary mark, determines the position and orientation of the fiduciary mark, and defines a region of interest about the fiduciary mark. The second camera, a high-resolution CCD camera, captures a gray-scale image of the conveyor and the parcels carried on the conveyor. The host computer extracts and stores the portion of the high-resolution image that is within the region of interest, including the text defining the destination address of the parcel. The host computer then rotates and displays the text image and/or transmits the text image to a text reader.

European patent EP 0 820 617 B1 (MORTON ET AL.), issued and granted on January 5th, 2000, discloses a fiduciary mark detection system usable with an over-the-belt optical character recognition (OCR) reader for ascertaining the position and orientation of text within the destination address on parcels moving along a conveyor, comprising a charge-coupled device (CCD) array, an analog-to-digital (A/D) converter, a general-purpose computer, and a software program. The image processing software program comprises projection histograms, convolution filtering, correlation, radial variance, first moments of inertia, edge image analysis, the Hough method, and detection confidence testing. The orientation-defining mark, comprising fluorescent ink placed approximately in the center of the destination address block on a parcel, is non-obstructive of the underlying text and may be affixed to the parcel any time prior to scanning. The fiduciary mark comprises two circles of different diameter oriented such that a vector from the center of the large circle to the center of the small circle points in the same direction as the underlying text.

These developments are incipient though. No chain conveyor (conveyor belt) automation systems exist in the art whose interaction with anthropomorphic robot manipulators can be controlled from visual servoing provided by a conventional web camera, as further disclosed in detail below.

SUMMARY OF THE INVENTION

One object example of the present invention is to provide a system for automation of a chain conveyor (conveyor belt) whose interaction with anthropomorphic robot manipulators can be controlled through visual servoing provided by a conventional web camera.

Another object example of the present invention is to design a conveyor belt to interact with a series of robotic manipulators.

Another object example of the present invention is to implement the conveyor belt. Yet another example of an objective of the present invention is to propose new control algorithms using visual servoing implemented on a conveyor belt.

Furthermore, another example of an objective of the present invention is to automate the conveyor belt through the configuration of a microcontroller implemented on a computer, with a development environment capable of performing data acquisition and control by means of input and output interfaces, thereby achieving the desired control.

The above objects are achieved by means of a system for analyzing, transporting, detecting and automatically monitoring the position of an object that is being transported through a conveyor belt that interacts with robot manipulators with visual servoing, wherein it comprises: a three-joint robot manipulator arm; a conveyor belt; a fixed webcam; and a microcontroller attached to the three-joint robot manipulator, to the conveyor belt and to the web camera; wherein the microcontroller is configured to: identify an object on the static conveyor belt; detect, through visual servoing based on reference images, a specific position generated by the motion of an object, which is displaced by the conveyor belt; and determine a working area of the robot manipulator arm for supervision of an object.

Other features and advantages will become apparent from the following detailed description, taken together with the attached drawings, which illustrate by way of example the characteristics of various embodiments.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be completely understood from the detailed description given below and the attached drawings, which are given only by way of illustration and example and therefore do not limit the aspects of the present invention. In the drawings, identical reference numbers identify similar elements or actions. The sizes and relative positions of the elements in the drawings are not necessarily drawn to scale. For example, the shapes of the various elements and the angles are not drawn to scale, and some of these elements are enlarged and located arbitrarily to improve the understanding of the drawing. In addition, the particular shapes of the elements as drawn are not intended to convey any information concerning the real shape of the particular elements and have been selected only to facilitate their recognition in the drawings, wherein:

Figure 1 shows a diagram of the robot to determine the direct kinematics in accordance with the present invention.

Figure 2 shows one of the stages of the code in Simulink for motion detection in accordance with the present invention.

Figure 3 shows an image of the motion detection program in accordance with the present invention.

Figure 4A shows an image that represents the original output, in accordance with the present invention.

Figure 4B shows an image that represents the Edge output, in accordance with the present invention.

Figure 4C shows an image that represents the Overlay output, in accordance with the present invention.

Figure 5 shows an image with the detection of a color in accordance with the present invention.

Figure 6 shows a block diagram of the complete system in accordance with the present invention.

Figure 7 shows the assembly in Solidworks 2013 of the mesh with the conveyor chain in accordance with the present invention.

Figure 8 shows a window that displays the manufacturing of the cogwheel in Solidworks/CAMWorks in accordance with the present invention.

Figure 9 shows the motor casing design in accordance with the present invention.

Figure 10 shows a block diagram of the program in Simulink for motion detection with data sending to Arduino in accordance with the present invention.

Figure 11 shows the control algorithm in accordance with the present invention.

Figure 12 shows ROTRADI robot painting a circle in accordance with the present invention.

Figure 13 shows the box that is approaching to the desired position in accordance with the present invention.

Figure 14 shows the robot starting its task after detecting the position of the box in accordance with the present invention.

Figure 15 shows that at the end of the task, the robot returns to the home position and the box continues its travel, in accordance with the present invention.

Figure 16 shows the beginning of the motion detection tests with painting and color detection in accordance with the present invention.

Figure 17 shows the painting stage in accordance with the present invention.

Figure 18 shows the beginning of the inspection stage of the second robot in accordance with the present invention.

Figure 19 shows a perspective image of the inspection stage in accordance with the present invention.

Figure 20 shows the inspection stage from a computer screen in accordance with the present invention.

DETAILED DESCRIPTION OF THE INVENTION

Several aspects of the present invention are described in more detail below, with reference to the attached drawings (figures, diagrams and graphs), in which the variations and the aspects of the present invention are shown. Several examples of aspects of the present invention may, however, be realized in many different forms and should not be construed as limiting the variations of the present invention; rather, the variations are provided so that this description is thorough in its illustrative embodiments and the scope thereof is fully conveyed to those skilled in the art.

Unless otherwise defined, all the technical and scientific terms used in this document have the same meaning as generally understood by a person skilled in the art to which aspects of the present invention belong. The apparatuses, systems and examples provided in this document are for illustrative purposes only and are not intended to be limiting.

To the extent that mathematical models are capable of reproducing the magnitudes reported in experiments, they can still be considered for modeling various natural processes. Therefore, the present invention comprises a system for automation of a chain conveyor (conveyor belt) whose interaction with anthropomorphic robot manipulators can be controlled through visual servoing provided by a conventional web camera. The present invention discloses the design, development and implementation of a system for automation of a conveyor belt with visual servoing, which includes: a dynamic model of an anthropomorphic robot; image processing; and the mechanical design of the conveyor belt.

a) Dynamic model of an anthropomorphic robot.

The anthropomorphic robot with 3 degrees of freedom is formed by three links with rotational movement, i.e. three joints. The first element is the base, which has a height l_1, and the rotation due to this element is defined by the generalized coordinate q_1. At the upper end of the base the second link is located, which receives the name of shoulder; in this link, the distance to its center of mass is identified as l_{c2}, the distance between the turn axis of the shoulder and the turn axis of the following link is identified as l_2, the mass of this link is m_2, and the joint position due to the rotary movement is denoted by q_2, which has as a frame of reference the one established at the upper end of the base.

The third link has a length measured from its turn axis to its center of mass denoted by l_{c3}; l_3 is the length from the turn axis to the holding center of the end-effector; m_3 is its mass, and the joint position is denoted by q_3, likewise referred to the reference frame established at the upper end of the base.

In this way, the total potential energy of the robot is given as:

U(q) = m_1 g l_{c1} + m_2 g [l_1 - l_{c2} \cos(q_2)] + m_3 g [l_1 - l_2 \cos(q_2) - l_{c3} \cos(q_2 + q_3)]     (1)
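For illustration only (the link parameters below are assumed, not taken from the patent), expression (1) and the end-effector position under the same geometric convention, in which the arm hangs straight down at q_2 = q_3 = 0, can be evaluated numerically in MATLAB:

% Minimal sketch (illustrative values, not taken from the patent) of the
% potential energy (1) and the end-effector position under the same
% geometric convention: the arm hangs down at q2 = q3 = 0.
m   = [0.9, 0.6, 0.3];          % link masses m1..m3 [kg]      (assumed)
l   = [0.30, 0.25, 0.20];       % link lengths l1..l3 [m]      (assumed)
lc  = [0.15, 0.12, 0.10];       % centers of mass lc1..lc3 [m] (assumed)
g   = 9.81;                     % gravity [m/s^2]
q   = [pi/6; pi/4; -pi/3];      % joint positions q1..q3 [rad] (assumed)

U = m(1)*g*lc(1) ...
  + m(2)*g*(l(1) - lc(2)*cos(q(2))) ...
  + m(3)*g*(l(1) - l(2)*cos(q(2)) - lc(3)*cos(q(2) + q(3)));   % equation (1)

r = l(2)*sin(q(2)) + l(3)*sin(q(2) + q(3));                    % horizontal reach
p = [r*cos(q(1));                                              % end-effector x
     r*sin(q(1));                                              % end-effector y
     l(1) - l(2)*cos(q(2)) - l(3)*cos(q(2) + q(3))];           % end-effector z
fprintf('U = %.3f J, end-effector = [%.3f %.3f %.3f] m\n', U, p);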

On the other hand, the vector of gravitational forces is defined by the derivative of the potential energy, which for the system is given by expression (1), such that:

g(q) = \partial U(q) / \partial q = [ 0 ,\; (m_2 l_{c2} + m_3 l_2) g \sin(q_2) + m_3 g l_{c3} \sin(q_2 + q_3) ,\; m_3 g l_{c3} \sin(q_2 + q_3) ]^T     (2)

Finally, the friction vector, considering the classical viscous and Coulomb friction models, is defined by:

f(\dot{q}) = B \dot{q} + F_c \,\mathrm{sign}(\dot{q})     (3)

where B and F_c are diagonal matrices of viscous and Coulomb friction coefficients, respectively.

The general problem of control in joint coordinates of robot manipulators is motion control or trajectory control, which consists of determining the torques applied to the servo actuators forming the joints such that the positions associated with the joint coordinates of the robot, q(t), accurately follow the time-varying desired position q_d(t). Position control, or regulation, of robot manipulators is a particular case of motion control in which there is no time-varying reference for the robot to follow, as in the trajectory control case, but instead a point constant in time, which is called the desired position or set point. The objective of control is to position the end-effector of the robot at that point and to keep it there indefinitely. The present invention implements the application of the robot with the position control (regulation) concept and point-to-point control in two robots. For that, a proportional-derivative control algorithm was used.
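A non-authoritative sketch of such a regulator follows (the patent does not disclose the specific gains or the exact control law; all numerical values and variable names here are assumptions):

% Generic sketch of a proportional-derivative (PD) set-point regulator for
% the three joints. Gains and readings are illustrative assumptions.
Kp = diag([18, 18, 12]);        % proportional gains
Kv = diag([2.5, 2.5, 1.8]);     % derivative gains
qd = [pi/4; pi/6; -pi/6];       % desired joint positions (set point) [rad]
q  = [0.10; -0.05; 0.20];       % measured joint positions [rad]
qp = [0.00;  0.02; -0.01];      % measured joint velocities [rad/s]

qtilde = qd - q;                % joint position error
tau = Kp*qtilde - Kv*qp;        % torque command sent to the servo actuators
disp(tau.');

A common variant adds the gravitational torques of expression (2) as a compensation term, tau = Kp*qtilde - Kv*qp + g(q), so that the set point is reached without steady-state error.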

b) Image processing.

The implementation of computer-vision algorithms turns out to be very time intensive, since handling pointers, memory management, etc. is required. All these problems are avoided by implementing a test in MATLAB using its image processing toolbox, with which the implementation time comes down to a minimum. The image processing toolbox contains a set of functions of the best-known algorithms to work with binary images, geometric transforms, morphology and color handling that, along with the functions already built into MATLAB, allow analysis and transformation of images in the frequency domain (Fourier and wavelet transforms). In MATLAB, a gray-scale image is represented by a two-dimensional matrix of m x n elements, where n represents the number of pixels in width and m the number of pixels in height. The element v11 corresponds to the element in the upper left corner (see Figure 1), and each element of the image matrix has a value from 0 (black) to 255 (white). On the other hand, an RGB color image (the most widely used for computer vision, and MATLAB's default option) is represented by a three-dimensional matrix of m x n x p elements, where m and n have the same meaning as in the gray-scale case, whereas p represents the plane, which for RGB can be 1 for red, 2 for green and 3 for blue. From time to time, it is necessary to make calculations that require the image to be processed completely; in these cases, depending on the original resolution of the image, this would be very costly. A more efficient alternative is sub-sampling of the image. Sub-sampling means producing an image by taking periodic samples of the original image, in such a way that the resulting image is smaller. A binary image is an image in which each pixel can have only one of two possible values, 1 or 0. As is logical to suppose, in an image in those conditions it is easier to find and distinguish structural characteristics. In computational vision, work with binary images is very important, whether for carrying out segmentation by image intensity, for implementing reconstruction algorithms or for recognizing structures. The most common way to obtain binary images is through the use of a threshold value on a gray-scale image; that is to say, a limit value (or an interval) is chosen above which all intensity values are coded as 1, while those below are coded as 0. In MATLAB, this type of operation is carried out quite simply using the overloading of the relational operators.
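As a minimal illustration (not the patent's code), thresholding a gray-scale image into a binary image and sub-sampling it can be written in MATLAB as:

% Illustrative sketch (not the patent's code): thresholding a gray-scale
% image into a binary image and sub-sampling it to a smaller size.
I  = uint8(randi([0 255], 240, 320));   % stand-in gray-scale image (random values)
bw = I > 128;                           % relational operator yields a binary image
Is = I(1:2:end, 1:2:end);               % sub-sampling: keep every second row/column
fprintf('original %dx%d, sub-sampled %dx%d, white pixels: %d\n', ...
        size(I,1), size(I,2), size(Is,1), size(Is,2), nnz(bw));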

For motion detection, we first show the block code in MATLAB for a program in charge of detecting movement; in Figure 2, the stages integrating said program can be seen. In Figure 3, we can see what has been captured by the camera, which is segmented into 16 blocks (the number of blocks can be modified by changing the resolution to be used); when motion is detected in any of these, the color is changed to identify the movement in the corresponding zone. The contour detection block outputs a binary image with edges shown in white. This output is shown in the Edges window (see Figure 4A). The Compositing block accepts the original video frames, which are shown in the Original window (see Figure 4B), and the output of the Edge Detection block serves as the Mask input of the Compositing block. The input to the Mask port indicates to the Compositing block which pixels to highlight. As a consequence, the model shows a composite image in the Overlay window (see Figure 4C), wherein the values of the original pixels are substituted by the edge values in white.
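The following MATLAB sketch (illustrative only; the patent implements this with Simulink blocks) shows the idea of dividing the image into 16 blocks and flagging the blocks where the difference between consecutive frames exceeds a threshold; all threshold values are assumptions:

% Illustrative sketch: motion detection by frame differencing, with the
% image divided into a 4x4 grid of 16 blocks.
prev = uint8(randi([0 255], 240, 320));   % previous frame (stand-in data)
curr = prev;
curr(60:120, 80:160) = curr(60:120, 80:160) + 60;   % simulate motion in one region

d      = imabsdiff(curr, prev) > 25;      % per-pixel motion mask (threshold assumed)
rows   = round(linspace(0, 240, 5));      % 4x4 grid boundaries
cols   = round(linspace(0, 320, 5));
moved  = false(4, 4);
for i = 1:4
    for j = 1:4
        blk = d(rows(i)+1:rows(i+1), cols(j)+1:cols(j+1));
        moved(i, j) = mean(blk(:)) > 0.05;   % flag the block if enough pixels changed
    end
end
disp(moved);                               % true entries mark zones with motion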

The first block is the camera; after conversion to RGB format, the signal is split to filter the values pertaining to the detection of the desired color; the results then pass through an AND gate before two morphological operations, Erode and Dilate, are carried out. The most basic morphological operations are dilation and erosion. Dilation adds pixels to the boundaries of the objects in an image, while erosion eliminates pixels on the edges of the objects. The number of pixels added to or eliminated from the objects in an image depends on the size and the shape of the structuring element employed to process the image. The Blob Analysis block calculates statistics of regions in a binary image; the block returns quantities such as the centroid and a bounding box, among others. With the help of this, we can highlight the color of our choice with a frame in the image captured by the camera. Figure 5 shows the final image obtained in Simulink, wherein it is observed that a frame is output which encloses all the pixels containing the selected color range; for this test, a red circle is being detected, which was painted by ROTRADI.
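As an illustrative sketch only (not the patent's Simulink model), the same pipeline of color thresholding, erosion and dilation, and blob analysis can be expressed in MATLAB as follows; the channel thresholds and the synthetic frame are assumptions:

% Illustrative sketch: detecting a red region with channel thresholds,
% cleaning the mask by erosion and dilation, and computing the blob
% centroid and bounding box.
rgb = zeros(240, 320, 3, 'uint8');
rgb(:, :, 2:3) = 40;                          % dull background (stand-in frame)
rgb(100:160, 140:220, 1) = 220;               % bright red rectangle to detect

mask = rgb(:,:,1) > 150 & rgb(:,:,2) < 100 & rgb(:,:,3) < 100;   % red threshold (assumed)
se   = strel('square', 3);
mask = imdilate(imerode(mask, se), se);       % erode then dilate to remove speckles

stats = regionprops(mask, 'Centroid', 'BoundingBox');
if ~isempty(stats)
    fprintf('centroid = [%.1f %.1f], bounding box = [%s]\n', ...
            stats(1).Centroid, num2str(stats(1).BoundingBox));
end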

Arduino is based on the ATMEGA168, ATMEGA328 and ATMEGA1280 microcontrollers. The plans of the modules are published under a Creative Commons license; therefore, experienced circuit designers can make their own version of the module, thereby expanding or optimizing it. Arduino designs are free; this means that any person can manufacture his/her own printed circuit board and mount an Arduino thereon from the schematics published on Arduino's website. The core of an Arduino board is a microcontroller and, as is known, this integrated circuit has three functional units similar to those of a computer, which are the following: the CPU (Central Processing Unit), the memory, and the input/output (I/O) peripherals.

c) Mechanical design of the conveyor belt.

Figure 6 shows the block diagram of the complete system. The first thing to address in the mechanical design is generating the displacement of the chain of the conveyor belt. By analyzing the assembly of the pieces, and with the help of Solidworks, a cogwheel was designed with the dimensions necessary to be adapted to the conveyor belt, and a motor was adapted to this cogwheel to generate the movement of the chain. A fundamental part of the motion of the conveyor belt is a correct design of the cogwheel, for which the tooth clearance should match each link of the conveyor chain; besides, the proportion of the tooth should be adequate for the size of each link. Figures 7 and 8 show the design of the cogwheel and, as can be seen, said design and its operation are correct. Meshing of the cogwheel with the conveyor belt chain was carried out in different stages.

A small 12 V DC motor with a very good gearbox was installed; this motor was characterized with a dynamometer and it was verified that it is capable of outputting a force on its axis of around 30 kg. As for the motor in use, the gearbox was removed from it and a new motor of similar characteristics, but with a reduced-performance set of gears, was attached. This motor offers 80 RPM, has a consumption of 300 mA on a free run, and can have a peak consumption of up to 5 A. A rough test was carried out by adapting the cogwheel to the motor and holding it in place to see if it could move the chain; upon verifying its efficacy, a structure capable of holding the motor together with the cogwheel was designed so that it could be adapted to the conveyor belt without affecting or modifying the structure of the conveyor itself, since the material it is made of is high-cost. With all the elements above working, the following step is to design and assemble a structure capable of holding the motor. An important consideration for this design is that the existing Rexroth material should not be modified. To allow this, the joining elements that the conveyor belt offers are used; with these constraints, the piece shown in Figure 9 was designed, which allows its attachment to the conveyor belt and holds the motor at the appropriate distance so that the cogwheel meshes with the chain. With the cogwheel assembled, in tandem with its supporting box and coupled to the conveyor belt, a characterization was made of the weights to be carried by the chain and of the current drawn according to the weight carried; this characterization is shown in the following table:

Weight (kg)    Current drawn (A)

1.01           1.40
1.995          1.42
3.03           1.58
4.03           1.62
5.02           1.73
6.00           1.82
6.98           1.91
7.02           2.01
8.01           2.13
9.03           2.24
10.02          2.33
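For illustration only (this fit is not part of the patent), the roughly linear relation between transported weight and current draw in the table above can be summarized with a least-squares line in MATLAB:

% Illustrative sketch: least-squares line relating current draw to the
% transported weight, using the characterization table above.
w  = [1.01 1.995 3.03 4.03 5.02 6.00 6.98 7.02 8.01 9.03 10.02];  % weight [kg]
iA = [1.40 1.42  1.58 1.62 1.73 1.82 1.91 2.01 2.13 2.24 2.33 ];  % current [A]
p  = polyfit(w, iA, 1);                    % slope [A/kg] and intercept [A]
fprintf('I(w) ~ %.3f*w + %.3f A\n', p(1), p(2));
fprintf('predicted draw at 5 kg: %.2f A\n', polyval(p, 5));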

Pursuant to the present invention, a motion detection program is included, and control data are sent to the Arduino Mega2560 board in order to control the motion of the conveyor chain; Figure 10 shows the block diagram of motion detection and data sending. This program uses a Perfect Choice brand camera, which has a resolution of 800 x 600 pixels, and splits the image into 16 frames, wherein each frame produces signals. A specific quadrant is selected and, when motion takes place in said quadrant, Simulink sends that signal to the Arduino board, which is useful since we can use this program to carry out the positioning control of the conveyor belt. At this stage, the Arduino programming environment is used, along with image processing in Simulink; Figure 11 shows the control algorithm for the motion of the conveyor chain and for the painting routine with the robot.
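A minimal sketch (illustrative only; the patent uses Simulink blocks rather than script code, and the port name, baud rate and byte values are assumptions) of sending a stop/start command to the Arduino board over a serial link from MATLAB:

% Illustrative sketch: sending a single control byte to the Arduino board
% over a serial link when motion is detected in the selected quadrant.
s = serialport("COM3", 9600);      % open the link to the Arduino Mega2560 (port assumed)
motionDetected = true;             % result of the quadrant motion test (example value)
if motionDetected
    write(s, uint8(1), "uint8");   % 1 = stop the conveyor and start the robot task
else
    write(s, uint8(0), "uint8");   % 0 = keep the conveyor running
end
clear s                            % closes the serial connection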

d) Example.

The robot will perform a particular task to interact with the conveyor belt. To carry out the painting task, an airbrush is required, which is a pneumatic device that generates a fine mist of paint, dye or protective coating of various diameters, useful for covering generally small surfaces for artistic or industrial purposes. It may have a pencil-like sprayer to apply the spray in great detail, as required by the shading of drawings and the retouching of photographs, as well as a container with the spraying material. Several airbrushes exist with different characteristics, including the actioning type, such as single action or double action. A single-action airbrush is one in which the lever only controls the air supply. The simplest models work in this way, and the paint is controlled at a different point. Most airbrushes of this type are suction fed. In a standalone double-action airbrush, the lever controls the air and paint flow separately. Airbrushes of this type can be fed through suction or gravity; this system allows the user to have maximum control of the apparatus, and the most versatile and advanced models work this way. Because the painting task to be carried out is very simple, a single-action airbrush was chosen. Since the airbrush is a pneumatic device, a solenoid valve is employed, which allows us to control the propellant gas flow toward the airbrush. A solenoid valve is an electro-mechanical valve designed to control the flow of a fluid through a conduit such as a tube. The valve is controlled by an electric current through a solenoid coil.
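Purely as an illustrative sketch (the patent drives the valve from the Arduino program through a power stage; the pin number here is an assumption and the MATLAB Support Package for Arduino Hardware is required), the solenoid valve could be exercised from MATLAB as follows:

% Illustrative sketch: opening the solenoid valve that feeds the airbrush
% for a short burst, then closing it (the pin drives the valve's power stage).
a = arduino();                   % connect to the attached Arduino board
valvePin = 'D9';                 % digital pin driving the solenoid valve (assumed)
writeDigitalPin(a, valvePin, 1); % energize the coil: propellant gas flows to the airbrush
pause(2);                        % paint for two seconds
writeDigitalPin(a, valvePin, 0); % de-energize: close the valve
clear a                          % release the connection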

d.1) Circle painting tests.

The first painting task was carried out with the solenoid valve controlling the airbrush, painting with the robot while the conveyor belt was static, with the purpose of checking whether the trajectory followed by the robot was the correct one; the proposed trajectory results from parameterizing the equation of a circle in plane coordinates, and this is carried out using the direct and inverse kinematics previously programmed in the robot; in this manner, the joint coordinates generally used by the robot can be converted into coordinates in an XY plane (see Figure 12).

d.2) Motion detection tests.

It is verified that the system can detect a specific position generated by the movement of the box, which is displaced by the conveyor belt; once the box is detected at the selected position, the robot has to perform the trajectory with which the painting is carried out, and in addition the system should only recognize the motion generated by the box and not by the robot. Figure 13 shows the displacement of the box along the conveyor belt; Figure 14 shows that, upon reaching a selected arbitrary position, the conveyor stops and the robot begins to execute the selected task; and Figure 15 shows that, once the process is finished, the box continues its trajectory along the conveyor belt.

d.3) Motion detection tests with painting and color detection.

Finally, the test is carried out where all the elements come together, wherein two different tasks are carried out with two ROTRADIs: painting a circle on a box with the robot and an airbrush, and detecting the painted zone with a web camera attached to the robot's end-effector, in a camera-in-hand visual servoing configuration. In this first test, motion detection and painting are carried out; afterwards the box continues to be displaced by the conveyor belt, approaching the second robot that, through the web camera, inspects the first robot's painting task, with the purpose of evaluating that painting task and thereby being able to know the painted area. Figure 16 shows that the test begins with the movement of the box on the conveyor belt. Figure 17 shows that the WEB camera has already detected movement in the selected zone, thereby sending a serial communication from Simulink/Matlab to the Arduino Mega board (which controls the movement of the conveyor), stopping the power stage and sending a signal to the robot to execute the point-to-point control task by tracing the trajectory of a circle, simultaneously carrying out the painting task with the help of an airbrush controlled by a solenoid valve. After going through the painting stage, the box continues moving with the conveyor belt to enter the inspection zone of the second robot; this process can be seen in Figure 18. In this phase the program inspects the painting area: the second robot, which has a web camera mounted on its end-effector and is located next to the conveyor, inspects, with the help of Simulink, the zone that the first robot has painted, which allows us to evaluate the painting performance of the first ROTRADI, thereby obtaining the painted zones from this analysis. Figure 19 shows a perspective view of the inspection stage according to the present invention, as seen by a person supervising such stage. Figure 20 shows the inspection stage from a computer screen in accordance with the present invention.

Although the invention has been described with reference to diverse aspects of the present invention and examples with regard to a system for automation of a chain conveyor (conveyor belt) whose interaction with anthropomorphic robot manipulators can be controlled through visual servoing provided by a conventional web camera, the incorporation into or use with any suitable system and/or mechanical device is within the reach and spirit of the invention. Therefore, it must be understood that numerous and varied modifications can be made without departing from the spirit of the invention.