Title:
A METHOD AND A SYSTEM FOR SUPERVISING A WORK AREA INCLUDING AN INDUSTRIAL ROBOT
Document Type and Number:
WIPO Patent Application WO/2007/085330
Kind Code:
A1
Abstract:
The invention relates to a system and method for supervising a work area including an industrial robot having at least two defined security levels. The system comprises at least one visible marking (11, 12) indicating a potentially dangerous region in the vicinity of the robot, the marking having at least one unique feature relative to other objects in the dangerous region, at least one camera (9a-d) adapted to repeatedly during operation of the robot capture images of the potentially dangerous region including said marking, and a computer unit (22) adapted to receive said images, to detect changes in the marking based on said images, and to decide whether the security level of the robot should be changed based on the detected changes in the marking.

Inventors:
BROGAARDH TORGNY (SE)
Application Number:
PCT/EP2006/069682
Publication Date:
August 02, 2007
Filing Date:
December 13, 2006
Assignee:
ABB AB (SE)
BROGAARDH TORGNY (SE)
International Classes:
G05B19/4061; B25J9/16; F16P3/14
Domestic Patent References:
WO2002073086A12002-09-19
WO2002041272A22002-05-23
Foreign References:
DE10000287A12001-07-19
DE19938639A12001-02-22
DE10320343A12004-12-02
EP1457730A22004-09-15
JPH05297944A1993-11-12
Attorney, Agent or Firm:
BJERKÉNS PATENTBYRÅ KB (Box 128, S- Västerås, SE)
Claims:
CLAIMS

1. A method for supervising a work area including an industrial robot (1) having at least two defined security levels, wherein at least one visible marking (11, 12, 19) indicating a potentially dangerous region is provided in the vicinity of the robot, the marking having at least one unique feature relative to other objects in the dangerous region, and the method comprises repeatedly during operation of the robot: capturing images of the dangerous region including said marking, detecting changes in the marking based on the images, and deciding whether the security level of the robot should be changed based on the detected changes in the marking.

2. The method according to claim 1, wherein detecting changes in the marking includes detecting changes of position and length of missing parts of the marking.

3. The method according to claim 1 or 2, wherein detecting changes in the marking includes: determining at least one parameter for the marking based on said images, and comparing the determined parameter with at least one previously determined parameter and based thereon detecting changes in the marking.

4. The method according to claim 3, wherein said parameter for the marking is based on one or more of the following properties: colour, area, length, width, center of gravity position, pattern complexity measures, roundness, spatial frequency components, pattern domain descriptors and contrast relative to the surrounding.

5. The method according to any of the previous claims, wherein it includes determining, based on the detected changes in the marking, whether it is likely that the changes in the marking originate from a human crossing the marking and based thereon deciding whether the security level of the robot should be changed or not.

6. The method according to any of the previous claims, wherein two images of the dangerous region seen from two different angles are captured and changes in the marking are detected based on both images.

7. The method according to claim 6, wherein the method comprises: estimating the height of an object by registration of changes in the marking based on said two images, and determining based on the estimated height of the object whether or not it is possible that the changes in the marking originate from a human crossing the marking, and based thereon deciding whether the security level of the robot should be changed or not.

8. The method according to claim 6 or 7, wherein the method further comprises: detecting slow 2-dimensional changes in the marking based on previous and current images, and generating a warning when the changes exceed a threshold value.

9. The method according to any of the previous claims, wherein the marking or markings are colour-coded in at least two different colours.

10. The method according to any of the previous claims, wherein at least two markings in the form of lines or a chain of patterns are provided at different distances from the robot.

11. The method according to claim 10, wherein a higher security level is determined if changes in a marking with a shorter distance to the robot are detected, and a lower security level is determined if changes in a marking line with a longer distance to the robot are detected.

12. The method according to any of the previous claims, wherein the method further comprises: estimating the position of the robot based on said changes in the marking, comparing the estimated robot position with the robot position from the control system of the robot, and indicating a supervision error if the difference between the positions is larger than a threshold value.

13. The method according to any of the previous claims, wherein it further comprises: estimating the position of a human in the work area based on said changes in the marking, receiving information on the next planned movement of the robot, and determining whether or not the robot will move in a direction towards the human during the next movement, and based thereon deciding whether the next movement of the robot will be allowed or blocked.

14. A system for supervising a work area including an industrial robot (1) having at least two defined security levels, wherein the system comprises: at least one visible marking (11, 12, 19) indicating a potentially dangerous region in the vicinity of the robot, the marking having at least one unique feature relative to other objects in the dangerous region, at least one camera (9;9a-d;30a-s) adapted to repeatedly during operation of the robot capture images of the potentially dangerous region including said marking, and a computer unit (32) adapted to receive said images and to detect changes in the marking based on said images, and to decide whether the security level of the robot should be changed based on the detected changes in the marking.

15. The system according to claim 14, wherein said computer unit (30) is adapted to detect changes in position and length of missing parts of the marking.

16. The system according to claim 14 or 15, wherein said computer unit (30) is adapted to determine at least one parameter for the marking based on said images, and to compare the determined parameter with at least one previously determined parameter and based thereon detect changes in the marking.

17. The system according to claim 16, wherein said parameter for the marking is based on one or more of the following properties: colour, area, length, width, center of gravity position, complexity measure, and contrast relative to the surrounding.

18. The system according to any of claims 14 - 17, wherein the computer unit (30) is adapted to determine, based on the detected changes in the marking, whether it is likely that the changes in the marking originate from a human crossing the marking and based thereon to decide whether the security level of the robot should be changed or not.

19. The system according to any of claims 14 - 18, wherein the system comprises a second camera (30b) adapted to repeatedly during operation of the robot capture images of the potentially dangerous region including said marking, seen from a different angle than the first camera, and said computer unit is adapted to detect changes in the marking based on images from both cameras.

20. The system according to claim 19, wherein the computer unit (22) is adapted to estimate the height of an object by registration of changes in the marking based on said two images, and to determine based on the estimated height of the object whether or not it is possible that the changes in the marking originate from a human crossing the marking, and based thereon decide whether the security level of the robot should be changed or not.

21. The system according to claim 19 or 20, wherein the computer unit (22) is adapted to detect slow 2-dimensional changes in the marking based on previous and current images, and to generate a warning when the changes exceed a threshold value.

22. The system according to any of claims 14 - 21, wherein the marking or markings are colour-coded in at least two different colours.

23. The system according to any of claims 14 - 22, wherein at least two markings in the form of lines or a chain of patterns are provided at different distances from the robot.

24. The system according to claim 23, wherein the computer unit (22) is adapted to decide a higher security level if changes in a marking with a shorter distance to the robot are detected, and to decide a lower security level if changes in a marking line with a longer distance to the robot are detected.

25. The system according to any of the claims 14 - 24, wherein the computer unit (22) is adapted to estimate the position of the robot based on said changes in the marking, to compare the estimated robot position with the robot position from the control system of the robot, and to indicate a supervision error if the difference between the positions is larger than a threshold value.

26. The system according to any of the claims 14 - 25, wherein the computer unit (22) is adapted to estimate the position of a human in the work area based on said changes in the marking, to receive information on the next planned movement of the robot, and to determine whether or not the robot will move in a direction towards the human during the next movement, and based thereon decide whether the next movement of the robot will be allowed or blocked.

27. A computer program product directly loadable into the internal memory of a computer, comprising software for performing the steps of any of the claims 1-13.

28. A computer-readable medium, having a program recorded thereon, where the program is to make a computer perform the steps of any of the claims 1-13, when said program is run on the computer.

Description:

Reference: 400580PCT-9972 Applicant: ABB AB

A METHOD AND A SYSTEM FOR SUPERVISING A WORK AREA INCLUDING AN INDUSTRIAL ROBOT

FIELD OF THE INVENTION AND PRIOR ART

The present invention relates to a method and a system for supervising a work area including an industrial robot having at least two defined security levels.

For safety reasons the robot is often placed in a robot cell. The robot cell encloses a dangerous area in which there is a risk of collisions with the robot. The robot cell is often enclosed by a fence having a gate, or by light barriers. Different security levels are applied depending on whether a human is outside or inside the robot cell. If a human enters the robot cell, the security level is increased and certain safety rules are applied. One safety rule is that the safety equipment of a portable operator control unit, generally denoted a Teach Pendant Unit, must function within the robot cell and the controller must be in a safety state in which the robot is not allowed to run faster than 250 mm/sec. At the same time, the operator must use a three-position safety switch on the Teach Pendant in order to be able to run or make programs. When there is no operator inside the fence the robot can run programs at full speed and the safety level is thus low in the robot controller. The robot cell is provided with detecting means detecting when a person is entering or leaving the cell. If a fence and gates enclose the robot cell, it is, for example, detected when the gates are opened and closed. Thereby, it is possible to change the security level of the robot depending on whether there is a human visiting the cell or not.

However, there are a number of drawbacks with enclosing the robot cell with a fence, especially for small and medium-sized enterprises. The most important drawbacks are:

- valuable floor space is lost because of the fences,

- in a dense production site there is no place for fences, which may make it impossible to use robots,

- the cost of the fences and gates and their installation is high,

- it is difficult to rearrange the cell layout in the work area,

- the fences and gates make it difficult to move things into and out from the cell,

- the fences make it difficult to move items in the work area,

- special attention has to be paid to loading and unloading stations connected to the cell, and

- when the robot stops and needs operator intervention it takes time to get into the cell.

Accordingly, there is a desire to replace the fences and gates used around robot cells today. A great deal of R&D has been invested in sensor-based safety systems to replace the fences. Examples of concepts tried in laboratories are:

- The use of electric field measurement to recognize the presence of a human close to the robot

- The use of cameras and pattern recognition to detect a human in the vicinity of a robot

- The use of different distance measurement sensors (ultrasonic, IR, magnetic, radar etc.) to measure the distance between the robot and a human.

None of these concepts has given the robustness needed to be a candidate for a safety-critical system. The most promising concept is to use cameras mounted in the roof to detect the position of a human in relation to the robot. However, no proven safe design of such a system exists because of the large variations that can be found around a robot. Examples of problems that make it very difficult to prove the robustness of the camera supervision systems proposed so far are:

- large variations in the characteristics of a human, especially with respect to the type of body pose and the type of clothing,

- large variations in the characteristics of other moving objects in the cell, for example the robot itself, objects on positioners, turntables, tracks and conveyers,

- large variations in the lighting conditions of the cell,

- large variations with respect to reflections, and

- complex background geometry, which makes pattern recognition difficult and unpredictable.

In order to make it possible to perform a safety analysis of a camera system, a much more well-defined situation is needed, with much less variability from time to time and from installation to installation.

OBJECTS AND SUMMARY OF THE INVENTION

The object of the present invention is to provide a solution to the above-mentioned problem on how to safely supervise a work area including an industrial robot without using fences and gates.

According to one aspect of the invention this object is achieved with a method as defined in claim 1.

According to the invention, at least one visible marking indicating a potentially dangerous region is provided in the vicinity of the robot, the marking having at least one unique feature relative to other objects in the dangerous region. The method comprises repeatedly during operation of the robot:

- capturing images of the dangerous region including said marking,

- detecting changes in the marking based on the images, and

- deciding whether the security level of the robot should be changed based on the detected changes in the marking.

The visible marking marks a region of a certain security level. The marked region may, for example, correspond to the reach of the robot. Thus, the security level must be changed if a human crosses the marking. If the human enters the dangerous region, the security level must be increased, and if the human leaves the dangerous region the security level must be decreased. The marking is, for example, in the form of a line or a chain of patterns on the floor of the work area. The marking can, for example, be painted on the floor, consist of tape or coloured sheets of a material, or even be part of the floor as a mosaic pattern. The colour and shape of the marking must be such that the contrast between the marking and its background is good enough for the pattern recognition software. The marking must have at least one unique feature, for example colour or shape, relative to other objects in the dangerous region in order to ensure that the pattern recognition does not mix up the marking with any other object, for example a cable, in the region.
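As an illustration of how the marking's unique feature can be exploited, the following Python sketch (not part of the application) isolates marking pixels by colour distance; the target colour and tolerance are assumed example values, not values from the patent:

```python
def is_marking_pixel(rgb, target=(255, 210, 0), tol=40):
    """Classify a pixel as belonging to the marking by colour distance.

    `target` is the nominal marking colour (an assumed yellow here);
    `tol` is the maximum accepted per-channel deviation.
    """
    return all(abs(c - t) <= tol for c, t in zip(rgb, target))

def extract_marking_mask(image):
    """Binary mask of marking pixels for a row-major RGB image
    given as nested lists of (r, g, b) tuples."""
    return [[is_marking_pixel(px) for px in row] for row in image]
```

Because only this mask is kept and everything else in the frame is discarded, later processing stages never see the complex background geometry.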

The visible marking has two functions. One function of the marking is that it outlines dangerous parts of a work area, with respect to risk of collision with a moving robot, so that people in the work area will know where the dangerous parts are located. The other function of the marking is to be used for pattern recognition in order to detect if a human is entering or leaving the dangerous region. Thus, it is possible to detect if a human gets inside a potentially dangerous region and, if so, to increase the security level of the robot. Different levels of security mean, for example, a reduction of the speed of the robot, a limitation of the operating range of the robot, or a stop of the robot upon detecting that a human is crossing the marking.

The main idea with the present invention is that when the images are processed, only properties of the marking are used and information on the surroundings is discarded, whereby simple and reliable algorithms with high redundancy and selectivity can be adopted.

The marking must have at least one unique feature relative to other objects in the dangerous region. That makes it possible to distinguish the marking from the other objects in the region with a very high reliability. If a human is crossing the marking, parts of the marking will be missing. According to the invention, changes in the marking are detected and based on the changes it is decided if the security level is to be changed or not. For example, changes of position, width, and length of the missing parts of the marking are detected.
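The detection of missing parts can be sketched as a scan along the expected marking path. In the illustrative Python below (not part of the application), `visibility` is a hypothetical per-sample flag produced by the pattern recognition step, and each gap is reported as a position and length along the path:

```python
def find_gaps(visibility):
    """Return (start_index, length) for each run of missing samples.

    `visibility` is a list of booleans sampled along the expected
    marking path: True where the marking is seen, False where it
    is occluded or missing.
    """
    gaps, start = [], None
    for i, visible in enumerate(visibility):
        if not visible and start is None:
            start = i                      # gap begins
        elif visible and start is not None:
            gaps.append((start, i - start))  # gap ends
            start = None
    if start is not None:                  # gap runs to the end of the path
        gaps.append((start, len(visibility) - start))
    return gaps
```

Comparing the gap list between consecutive frames then yields exactly the changes of position and length of missing parts that the method evaluates.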

According to an embodiment of the invention, detecting changes in the marking includes: determining at least one parameter for the marking based on said images, and comparing the determined parameter with at least one previously determined parameter and based thereon detecting changes in the marking. For example, the parameter of the marking is based on one or more of the following properties: colour, area, length, width, center of gravity position, pattern complexity measures, roundness, spatial frequency components, pattern domain descriptors, contrast relative to the surrounding etc. Changes in those parameters include valuable information to enable a safe detection of a human crossing the marking, and also information to be used to detect malfunction of the hardware and software of the supervising system.
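A minimal sketch of such a parameter comparison, restricted to two of the listed properties (area and centre-of-gravity position); the relative tolerance is an assumed value, not one from the application:

```python
def marking_parameters(mask):
    """Compute simple parameters of a binary marking mask:
    area (pixel count) and centre-of-gravity position."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    area = len(pts)
    if area == 0:
        return {"area": 0, "cog": None}
    cx = sum(x for x, _ in pts) / area
    cy = sum(y for _, y in pts) / area
    return {"area": area, "cog": (cx, cy)}

def has_changed(current, reference, area_tol=0.05):
    """Flag a change when the marking area deviates by more than
    `area_tol` (relative) from the reference parameters."""
    if reference["area"] == 0:
        return current["area"] != 0
    return abs(current["area"] - reference["area"]) / reference["area"] > area_tol
```

In a real system more of the listed properties (length, width, colour statistics, spatial frequency content) would be tracked in the same way, each adding redundancy to the change detection.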

According to an embodiment of the invention, the method includes determining, based on the detected changes in the marking, whether it is likely that the changes in the marking originate from a human crossing the marking and based thereon deciding whether the security level of the robot should be changed or not. Changes in the marking can be caused by other factors than a human crossing the marking, for example by wear of the marking, dirt on the marking, a small animal, such as a mouse or a cat, crossing the marking, another moving object inside the robot cell, such as the robot itself, crossing the marking, or a stationary object which has been put down on the marking. Information obtained from the detected changes in the marking is used to judge if it is likely that the changes originate from a human or if it is impossible that the changes arise from a human. If it is likely, or at least cannot be ruled out, that the changes are caused by a human crossing the markings in a direction towards the robot, the security level is increased.
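One way to realize this judgment is a plausibility window on the size of each gap: very small gaps (dirt, a mouse) and very large ones (the robot base) can be ruled out as humans. The sketch below is illustrative Python, and the width range is a hypothetical stand-in for whatever a real installation would calibrate:

```python
def likely_human(gap_length_mm, min_human=60, max_human=700):
    """Crude plausibility test: a human crossing the line typically
    occludes a gap in this width range (assumed values, in mm).
    Gaps outside the range are ruled out as humans."""
    return min_human <= gap_length_mm <= max_human

def decide_security_level(gaps_mm, current_level=0):
    """Raise the security level if any gap cannot be ruled out
    as being caused by a human crossing the marking."""
    if any(likely_human(g) for g in gaps_mm):
        return current_level + 1
    return current_level
```

Note the fail-safe direction of the rule: whenever a human cannot be excluded, the level is raised.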

According to an embodiment of the invention, two images of the dangerous region seen from two different angles are captured and changes in the marking are detected based on both images. An advantage with having two images is that they achieve redundancy. The two images can be used to detect errors in any of the cameras used to capture the images. Due to the redundancy a very high reliability is obtained. This embodiment also makes it possible to determine the height of the object that causes the changes in the marking.

According to an embodiment of the invention, the method comprises: estimating the height of an object by registration of changes in the marking based on said two images, and determining based on the estimated height of the object whether or not it is possible that the changes in the marking originate from a human crossing the marking, and based thereon deciding whether the security level of the robot should be changed or not. If a high object, such as a human, is crossing the marking, there will be a large difference between the two images of the positions of the missing parts of the marking. If the position of a missing part is the same in both images, the height of the object causing the missing part is zero. This is, for example, the case when the object is a liquid flowing on the floor, or the missing part is due to dirt or wear of the marking.
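The height estimate follows from simple triangulation: with both cameras mounted at the same height H above the floor and a horizontal baseline B between them, an occluding edge at height h shifts the gap's floor position by a disparity d = B·h/(H − h) between the two views, so h = H·d/(B + d). An illustrative Python sketch (mounting geometry and units are assumptions, not from the application):

```python
def object_height(d, cam_height, baseline):
    """Height of an occluding object from the disparity `d` between
    the floor positions of the same missing marking part in the two
    camera views.

    Geometry: d = baseline * h / (cam_height - h), solved for h.
    d == 0 means the occlusion lies flat on the floor (oil, dirt,
    wear of the marking). All quantities in the same unit, e.g. mm.
    """
    return cam_height * d / (baseline + d)
```

A zero or near-zero height thus rules out a human, while a height in the human range keeps the safety reaction active.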

According to an embodiment of the invention, slow 2-dimensional changes in the marking are detected based on previous and current images, and a warning is generated when the changes exceed a threshold value. By 2-dimensional changes are meant changes caused by an object with no height. If the changes develop slowly and the height of the object causing the missing parts is approximately zero, slow 2-dimensional changes have been detected. This embodiment detects if the visibility of the marking has been reduced due to wear and dirt and generates an alarm if the visibility of the marking has significantly decreased. Thus, the reliability of the method is increased.
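A sketch of such a wear-and-dirt monitor, combining the per-gap height estimate with a history of the total flat occluded length; the flatness tolerance, the threshold handling, and the monotone-growth test are assumptions for illustration, not taken from the application:

```python
def flat_occluded_length(gap_lengths, gap_heights, height_eps=5.0):
    """Total length of missing marking attributable to flat causes
    (wear, dirt, liquids): gaps whose estimated height is below
    `height_eps` (same unit as the heights, e.g. mm)."""
    return sum(l for l, h in zip(gap_lengths, gap_heights) if h < height_eps)

def wear_warning(history, threshold):
    """Warn when the flat occluded length has grown monotonically
    over the stored history (i.e. slowly, not a sudden occlusion)
    and has passed `threshold`."""
    growing = all(a <= b for a, b in zip(history, history[1:]))
    return growing and history[-1] > threshold
```

Separating slow flat changes from sudden tall ones is what lets the system raise a maintenance alarm without triggering a safety stop.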

According to an embodiment of the invention, the marking or markings are colour-coded in at least two different colours. Using a coloured marking is an advantage, both in terms of telling people the warning level of the marking and in making it possible to perform pattern recognition in different colours, which greatly increases the reliability of the camera-based supervision. It is possible to detect errors in the hardware and software of the system based on the different colours. Using more than one colour on the marking gives it a unique appearance, which makes it easier to distinguish the marking from the surroundings and further increases the reliability of the method.

According to an embodiment of the invention, at least two markings in the form of lines or a chain of patterns are provided at different distances from the robot. This embodiment makes it possible to have areas with different security levels, which are marked with different markings. Which of the security levels is to be applied depends on which of the markings the human crosses. The colours and shapes of the markings can be made in many different ways, but they should be organized in such a way that their meaning is intuitively evident. The appearance of a marking, for example its colour, will tell the people in the work area that if they pass the marking another security level will be applied. Preferably, a higher security level is determined if changes in a marking with a shorter distance to the robot are detected, and a lower security level is determined if changes in a marking with a longer distance to the robot are detected. For example, if people pass a first line they will enter an area in which the robot may reach them and the robot will be stopped if the robot arm is too close to them, and if they pass a second line the robot will immediately stop independently of where the robot arm is located. This embodiment also makes it possible to supervise the position of the robot if one of the markings is positioned inside the working range of the robot.
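The marking-to-level mapping can be as simple as taking the highest level among the markings with detected changes. The colour names and level numbers in this Python sketch are illustrative only (the application itself uses yellow for the outer and red for the inner line in one example):

```python
# Assumed mapping: ring colour -> security level, highest is most severe.
LEVELS = {"yellow": 1, "red": 2}  # red ring is closer to the robot

def security_level_for(broken_markings):
    """Pick the security level from the set of markings with detected
    changes: the marking closest to the robot (highest level) wins.
    Returns 0 (normal operation) when no marking is broken."""
    return max((LEVELS[m] for m in broken_markings), default=0)
```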

According to an embodiment of the invention, the method further comprises: estimating the position of the robot based on said changes in the marking, comparing the estimated robot position with the robot position from the control system of the robot, and indicating a supervision error if the difference between the positions is larger than a threshold value. This embodiment makes it possible to detect errors in the control system of the robot and errors in the camera system.
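The cross-check described above can be sketched as a simple distance comparison between the two independently obtained positions; the threshold and the 2-D floor-coordinate representation are assumptions for illustration:

```python
def supervision_error(estimated_pos, reported_pos, threshold):
    """Compare the robot position estimated from the broken inner
    marking with the position reported by the robot controller.

    A discrepancy beyond `threshold` indicates a fault in either the
    camera system or the control system. Positions are (x, y) floor
    coordinates in the same unit, e.g. mm.
    """
    dx = estimated_pos[0] - reported_pos[0]
    dy = estimated_pos[1] - reported_pos[1]
    return (dx * dx + dy * dy) ** 0.5 > threshold
```

Because the two position sources are independent, agreement between them validates both, which is the redundancy the embodiment exploits.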

According to an embodiment of the invention, the method further comprises: estimating the position of a human in the work area based on said changes in the marking, receiving information on the next planned movement of the robot, and determining whether or not the robot will move in a direction towards the human during the next movement, and based thereon deciding whether the next movement of the robot will be allowed or blocked. This embodiment increases the security for a human visiting an area in the vicinity of the robot.
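A minimal sketch of the movement-gating decision: the next movement is allowed only if it does not bring the robot closer to the estimated human position. This particular distance rule is an illustrative assumption, not the patent's prescribed criterion:

```python
def movement_allowed(human_pos, robot_pos, next_target):
    """Allow the next robot movement only if it does not reduce the
    distance between the robot and the estimated human position.
    All positions are (x, y) floor coordinates in the same unit."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return dist(next_target, human_pos) >= dist(robot_pos, human_pos)
```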

According to another aspect of the invention this object is achieved by a system as defined in claim 14.

Such a system comprises: at least one visible marking indicating a potentially dangerous region in the vicinity of the robot, the marking having at least one unique feature relative to other objects in the dangerous region, at least one camera adapted to repeatedly during operation of the robot capture images of the potentially dangerous region including said marking, and a computer unit adapted to detect changes in the marking based on said images, and to decide whether the security level of the robot should be changed based on the detected changes in the marking.

It is easy to realize that the method according to the invention, as defined in the appended set of method claims, is suitable for execution by a computer program having instructions corresponding to the steps in the inventive method when run on a processor unit.

According to a further aspect of the invention, the object is achieved by a computer program product directly loadable into the internal memory of a computer or a processor, comprising software code portions for performing the steps of the method according to the appended set of method claims, when the program is run on a computer. The computer program is provided either on a computer-readable medium or through a network, such as the Internet.

According to another aspect of the invention, the object is achieved by a computer-readable medium having a program recorded thereon, where the program is to make a computer perform the steps of the method according to the appended set of method claims, when the program is run on the computer.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention will now be explained more closely by the description of different embodiments of the invention and with reference to the appended figures.

Fig. 1 shows an example of a prior art robot cell seen from above.

Fig. 2 shows a robot cell seen from above using camera supervision according to a first embodiment of the invention.

Fig. 3 shows a robot cell according to a second embodiment of the invention.

Fig. 4 shows a part of the cell in figure 3 in perspective.

Fig. 5 shows a human entering the cell in figures 3 and 4 in a view as seen from the camera.

Fig. 6 shows a robot cell according to a third embodiment of the invention.

Fig. 7 shows a pattern of the robot cell in figure 6 after pattern recognition.

Fig. 8 shows the pattern in figure 6 after pattern restoration using 3D.

Fig. 9 shows the pattern in figure 6 after pattern restoration using a simple robot model.

Fig. 10a-b show an example of stereo measurements using two cameras.

Fig. 11 shows an example of software design for robot cell supervision.

Fig. 12 shows an example of a flow diagram for the robot cell supervision.

Fig. 13 shows an example of geometric redundancy tests of pattern on floor.

Fig. 14 shows an example of geometric redundancy tests of 3D pattern.

Figs. 15a-b show examples of colour-coded markings.

Figs. 16a-c show examples of shape-coded markings.

Fig. 17 shows an example of a supervision system according to the invention.

Fig. 18 shows an example of how the supervision system according to the invention can also be used to improve the safety of the robot control.

Fig. 19 shows a further example of markings on the floor.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION

Figure 1 shows a typical prior art robot cell with an industrial robot 1, two work objects 2, 3, two positioners 4, 5, a cell loading/unloading space 6 and a fence 7. The purpose of the fence is to prevent people from entering the robot cell when the robot is working. The fence has two gates. Those gates work in such a way that as soon as one of the gates is opened the robot stops. As can be seen from the figure, considerable floor space is wasted because of the fence.

Figure 2 shows a possible way of using cameras in the cell of figure 1 to avoid fences and gates. The cell equipment is the same as in figure 1 but the fence and gates have been replaced with four cameras 9a, 9b, 9c and 9d. The broken line circle 8 corresponds to the reach of the robot and thus the cameras must detect if a human gets inside this circle. To do so, the camera lenses and the height of the camera mountings are selected in such a way that the cameras cover the critical parts of the scene as shown by the continuous circles 10a, 10b, 10c and 10d. In some way the people in the workshop must know where the dangerous parts of the work area are with respect to risk of collision with a moving robot. This is achieved by providing a visible marking indicating the dangerous parts of the work area.

In Figure 3 the dangerous parts of the work area are marked using elongated markings 11, 12 on the floor in the vicinity of the robot. These markings can be painted on the floor, consist of tape or coloured sheets of material, or even be part of the floor as a mosaic pattern. In an environment where dust and dirt would rapidly cover the markings, they can be kept free by air flow, and even the nozzles of an air system can be used as markings. A first marking 11 in the form of a circular line in a first colour, for example yellow, tells the people in the work area that if they pass this marking the robot may reach them and the robot will be stopped if the robot arm is too close to them. A second marking 12 in the form of a circular line of a second colour, for example red, tells the people in the workshop that inside this red area the robot will immediately stop. Of course, the colours and shapes of the markings can be made in other ways, but they should be organised in such a way that their meaning is intuitively evident for the people working in the workshop.

Using coloured markings is an advantage, both to tell the people the warning level of the markings and to make it possible to perform pattern recognition in different colours, which greatly increases the reliability of the camera-based supervision. However, the concept will also work without any colour coding, provided that the greyscale contrast between the markings and their background is good enough for the pattern recognition software. The cameras 9a-d are connected to a computer unit 22 adapted to receive the images and to detect changes in the marking based on the images, and to decide whether the security level of the robot should be changed based on the detected changes in the marking.

Figure 4 shows part of the cell in figure 3 in perspective. As exemplified in the figure, the scene for the camera 9 can be quite complex and dynamic. Inside the view circle 10 of the camera there is a conveyer 13 with a moving item 15. There is also a loading station 14 with varying height depending on how many objects are stacked on it, and sometimes with a bright reflection 15 on a mirror-like object surface. Moreover, there is oil 16, dirt 20 and a robot cable 21 on the floor, which will make a safe detection even more difficult. Figure 5 shows the same part of the cell as in figure 4, but this time from the camera view. As can be seen, a human 17 is entering the robot cell walking on the floor 18 towards the robot 1.

In order to obtain a safe registration (independent of all geometry and motion noise in the camera scene) of a safety-critical object, such as a human moving towards the robot, the floor around the robot is covered with several circular markings of different colours forming a pattern, as exemplified in Figure 6. Thus, there are three outer markings 11a-c of a first colour, in this example yellow, two intermediate markings 12a-b of a second colour, in this example red, and one inner marking 19 of a third colour, in this example blue. The higher the resolution of the camera, the denser the pattern can be (more markings) and the higher the accuracy and reliability of the supervision system. The main idea of the camera-based supervision concept is that when the camera frames are processed, only the properties of the pattern on the floor are used, whereby simple and reliable algorithms with high redundancy and selectivity can be adopted. Of course, the pattern does not need to consist of circular arcs, even if this is best for a robot with a circular workspace. For example, the markings for a gantry robot should form a rectangular pattern to match the rectangular workspace.

The pattern recognition system will only detect the markings and not the other objects in the work area. For the example shown in figure 6, the pattern recognition system will only detect circularly formed markings with the colours yellow, red and blue. The detected scene will look as in Figure 7. As seen from figure 6, the circular markings 11a-c, 12a-b, 19 are interrupted by the objects 15, 16, 20 and 21, by the arm 1a of the robot and by the legs of the human 17. The objective of the camera-based safety supervision system is to safely detect the broken markings caused by the human, which means distinguishing the missing parts of the marking between the marking segments 11a:2, 11a:3 and 11a:4 from all other missing parts of the markings. Thus, all the other missing parts of the markings 11a-c, 12a-b and 19 must be ruled out as being a human with very high probability, to avoid false stops of the robot.
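Finding the missing parts of a circular marking can be sketched as scanning the binary detection mask along the expected circle and collecting the angular intervals where the marking is absent. The sampling scheme below is an illustrative assumption, not the claimed detection method:

```python
import numpy as np

def find_gaps(mask, centre, radius, n_samples=360):
    """Scan a binary marking mask along the expected circle (centre, radius)
    and return the angular intervals (radians) where the marking is missing."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.rint(centre[0] + radius * np.cos(angles)).astype(int),
                 0, mask.shape[1] - 1)
    ys = np.clip(np.rint(centre[1] + radius * np.sin(angles)).astype(int),
                 0, mask.shape[0] - 1)
    present = mask[ys, xs]
    gaps, start = [], None
    for angle, hit in zip(angles, present):
        if not hit and start is None:
            start = angle                 # gap begins
        elif hit and start is not None:
            gaps.append((start, angle))   # gap ends
            start = None
    if start is not None:
        gaps.append((start, 2.0 * np.pi))
    return gaps
```

The resulting gap list is the raw input to the tests that rule out the robot arm, static objects and low-height objects described in the following paragraphs.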

One way to increase the reliability when detecting the human or another safety-critical object is to use two cameras separated from each other as in a stereo vision setup. Then the flat, low-height objects 16, 20 and 21 can be separated from a human being, who is always above a certain height. Thereby, the missing parts 23-26 of the markings can be reconstructed and it can be ascertained with full confidence that they are not a safety-critical object such as a human, see Figure 8. However, the object 15 in figure 5 has such a height that it could be a human, but since this object never moves and is static, the missing part between marking segment 11a:1 and marking segment 11a:2 can be ruled out by means of its time history some time after the start of the camera supervision system. However, if the object 15 starts moving it will be detected as a human, which is correct since any moving object may be connected to a human and should therefore not be tolerated in the supervised area around the robot. The low-height objects 16, 20 and 21 could also be ruled out since they are static.

Two camera systems are needed to get redundancy with respect to both hardware and software. Having two camera systems, as much redundancy as possible should be made use of in order to find the difference between a human entering the robot area and objects already there. Thus, for low-height objects both their static and low-height properties should be used. This may be important, for example, if reflections or light spots on the floor that disturb the detection of the markings move because of changing light conditions, or if a dark liquid flows over a marking on the floor. Even though these are dynamic objects, the height detection will still rule them out as human.

The blue marking 19 is used to detect the angle of the robot arm 1a, and an extrapolation with this angle in a radial direction from the centre of the robot will then give the possibly broken red and yellow markings; if such missing markings are found, it can be concluded that this is because of the robot arm, and the missing marking parts 28a-d can also be ruled out as being a human. Of course, it is an advantage to have more than one blue marking in order to measure the direction of the robot arm more accurately with the cameras. One redundancy here is that the position of axis one of the robot can be used to verify that the angle is the same as detected by the blue markings. Of course, with more than one blue marking the redundancy is increased. It should also be pointed out that with two cameras in a 3D constellation the robot arms will give more complicated shadows on the markings, which further increases the probability of differentiating between a robot arm and, for example, a human.

Moreover, the movement pattern of a robot arm is different from that of a human, which can be used to further increase the redundancy in the robot arm detection. Of course, the marking used to measure the robot angle can have another colour, preferably not the same colour as the robot arm. However, this situation can also be handled by using multi-coloured markings as exemplified in Figure 15.

When all other objects that break the markings on the floor have been ruled out as safety-critical, the human in figure 6 remains to be detected as a safety-critical object. Thus, Figures 10a-b exemplify how two close marking lines 11x and 11y are broken as seen from the two cameras 30a and 30b mounted above the floor between the inner yellow marking 11y and the centre of the robot. The light rays 31, 32 corresponding to the ends of the human's shadowing of the markings are shown for the left camera 30a and the right camera 30b. The missing parts of the markings as seen from the two cameras are marked out in Figure 10b with 33 and 34 for camera 30a and 35 and 36 for camera 30b. The positions of the missing parts can be calculated in a global coordinate system from a calibration of the cameras using the pattern on the floor.

In order to obtain measurement redundancy, the floor can be marked with unique symbols 29a, 29b, 29c known by the camera software, and each camera can measure the distances, such as 35, 36, 37 and 38, from the symbols to the missing parts of the markings; it can then be checked whether these distances relate to each other as expected from the camera calibration. For further redundancy, specific features of a human could be checked, for example whether the size and shape are realistic. However, the most important feature for the detection of safety-critical objects like a human is its motion, meaning dynamic changes in the missing parts of the markings and a deviation of the missing parts of the markings in relation to an initial static capture of the marking pattern.
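One standard way to realise the calibration mentioned above, relating pixel positions to floor positions using the pattern as landmarks, is a planar homography fitted to known landmark coordinates with the direct linear transform. This is a generic sketch under that assumption, not a claimed part of the system:

```python
import numpy as np

def fit_homography(pixels, floor_pts):
    """Least-squares homography H mapping image pixels to floor coordinates,
    estimated from >= 4 known landmarks (e.g. the unique symbols 29a-c)."""
    A = []
    for (u, v), (x, y) in zip(pixels, floor_pts):
        # Two DLT rows per landmark correspondence.
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)   # null-space vector = homography, up to scale

def pixel_to_floor(H, u, v):
    """Project one pixel (u, v) onto the floor plane."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

With the homography fitted once per camera, the floor position of any detected gap in a marking follows directly from its pixel coordinates.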

Figure 11 shows an example of the software architecture of the camera-based safety system. In this case three cameras are mounted over the area to be supervised, one camera to the left 30a, one in the middle 30b and one to the right 30c. A common clock 40 triggers the reading of camera frames into the respective memory partitions 39v, 39m and 39h. Then the images are separated into different colours and each colour is processed individually in processing units 41a-i. Of course, more colours can be used to increase the redundancy of the detection algorithms. By 2D filtering the marking geometry is detected by either binary or greyscale methods, and properties for the geometric pattern domains are obtained, for example the width, length and radius of a circular marking. In this way a shape test can be made besides the colour test, to be sure that the markings detected are the markings used for the safety-critical object detection and not other marking-shaped contours in the cell. Having the data for the pattern seen in different colours by the different cameras, safety patterns as shown in figure 7 are processed in blocks 42y, 42r, 42b for the different colours, whereby pattern restorations as shown in figures 8 and 9 are made for each colour. As an option, data on the robot position can be used from the robot controller 43.

Finally, all the information obtained is fused in block 44 to make a safe decision as to whether a safety-critical object is in the supervised area. If this object is in the yellow area the warning lamp 4 will be switched on and the controller will be informed. If the robot arm is far away from the area the safety-critical object is in, the robot will proceed with its work. If it is at a medium distance from the safety-critical object the robot will slow down, and if the robot arm is close, the robot will be stopped immediately. If the safety-critical object gets inside the red area the robot will be stopped immediately. Of course, it is possible to implement different strategies for how the robot should be controlled when a safety-critical object, such as a human, is at different locations in relation to the robot arm. Such strategies are not the objective of this invention, the focus of which is to safely detect the position of a safety-critical object in the vicinity of a robot.
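The zone-and-distance decision described for block 44 can be sketched as a small lookup. The distance thresholds and the zone names are illustrative assumptions; a real system would derive them from the robot's reach and stopping distance:

```python
from enum import Enum

class Action(Enum):
    RUN = 0    # proceed with work
    WARN = 1   # warning lamp on, controller informed
    SLOW = 2   # reduced speed
    STOP = 3   # immediate stop

# Hypothetical distance bands in metres (not values from the application).
NEAR, MEDIUM = 0.5, 1.5

def decide(zone, distance_to_arm):
    """Map the zone of a detected safety-critical object and its distance
    to the robot arm to a control action."""
    if zone == "red":
        return Action.STOP
    if zone == "yellow":
        if distance_to_arm < NEAR:
            return Action.STOP
        if distance_to_arm < MEDIUM:
            return Action.SLOW
        return Action.WARN
    return Action.RUN
```

As the text notes, other strategies are possible; only the safe detection of the object position is essential to the invention.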

Figure 12 shows a flow diagram of some of the actions of the software in figure 11. When the system has been started up, the cameras are calibrated by means of the pattern on the floor, block 50. This can be done in different ways, for example by measuring the distances between markings and inputting these values to the camera system, or by putting a marking on the robot and moving this marking to different places in the workspace with the robot. In the next step all the markings are recognized and their positions and parameters are calculated and saved, block 51. This can be done automatically if standard markings already programmed into the pattern recognition software are used. Data on the contrast of the pattern relative to the background are also detected and stored as a reference for supervising the condition of the pattern. When the markings become too dirty or have been partly destroyed, the system will give a warning and ask for cleaning/restoration of the pattern on the floor.

The initially determined static pattern parameters are stored for use during the workspace supervision, block 52. After storing the initial static pattern parameters the dynamic pattern recognition starts, block 53. The continuously captured scene marking parameters are then calculated after pattern recognition (in the same way as for the initial static pattern), and the parameters for the pattern geometries with their missing parts are compared with the initial static pattern parameters stored at system start, block 54. Examples of parameters are marking segment colours, contrast, lengths and widths, and the position on the floor of the missing parts of the pattern. If there is no parameter difference, or only a very small one (the level being decided by the noise level, which is calculated from several camera frames during the calculation of the initial static parameters), new scene frames will be captured and processed until there is a difference in the parameters, block 55.
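The comparison in blocks 54-55 amounts to a noise-gated difference between the stored static parameters and the current scene parameters. A minimal sketch, assuming the noise level is available as a per-parameter standard deviation and using an illustrative 3-sigma gate:

```python
import numpy as np

def pattern_changed(static_params, scene_params, noise_sigma, k=3.0):
    """Flag a change (block 55) when any marking parameter deviates from the
    stored static reference by more than k times the noise level estimated
    from several frames at initialisation."""
    diff = np.abs(np.asarray(scene_params, dtype=float)
                  - np.asarray(static_params, dtype=float))
    return bool(np.any(diff > k * np.asarray(noise_sigma, dtype=float)))
```

Only when this returns true does the flow proceed to the robot-arm and moving-object tests of the following blocks.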

If there is a change in the marking parameters, it will be checked whether it is the robot movements that have been detected, block 56, and one way to make this decision is to look at what is happening with the pattern closest to the robot (the markings 19 in the previous figures). If this pattern is changing, a test can be made whether the controller has a position of axis 1 corresponding to the detected pattern interference, block 57. If this is not the case there is a problem in the set-up, and the supervision system should force the robot controller to stop the robot and give an error message, blocks 58, 59. If the controller and the vision system agree on the calculated axis 1 angle, the safety system sets a flag that the robot is in the supervised area and the next camera frames are analysed, block 60.

If there is a parameter difference, block 55, and no robot arm is detected, block 56, it is tested whether there are other moving objects, and if so, a comparison is made with the initially captured frame again and with earlier frames of the history of a moving object, block 62. If the change in the pattern is large enough, a new alarm level is set, meaning that a safety-critical object is in the supervised area, block 64, and if the pattern where the critical object is detected is red, block 65, the camera supervision system orders the robot controller to stop the robot, block 59. If the pattern is not red, a further test is made whether the robot is in the same area as the critical object, block 66, and if so the robot will be stopped, block 59; otherwise a new camera capture will be made.

If the comparison with the earlier dynamics of the object shows that the motion is below a limit, a waiting level of the alarm is set, block 68, after which it is tested how long the alarm waiting level has been active, block 69; if this is too long, it is decided that the moving object is not safety-critical and a new static pattern will be set, block 70. However, the original static pattern, which is used for determining the degradation of the pattern, will not be changed. If the difference between the new static pattern and the original static pattern from the camera calibration is above a certain level (test not shown in the figure), an error signal is given for restoring the markings. After restoration a new start-up of the system is made, giving a new original static pattern.

Most of the tests in block 72 of figure 12 build on the redundancies in the system, and as long as these tests are OK the probability that an error is made by the supervision system is extremely small. Some of these redundancies will now be summed up. For example, Figure 13 exemplifies the redundancy used for tests with respect to measurements of the height of the cameras. 46a and 47a are the camera lenses, 48a is the image plane of the cameras and 49a the floor plane with the pattern 50a, which in this simple 2D figure is just a line. The distance between the cameras is 2*d and the length of the line on the floor is L. Two redundant tests are exemplified, one for the height and one for the parameters of the offset between the pattern and the cameras in the horizontal plane.

L, d and f known and d1, d2, d3, d4 measured:

h = f*L/(d4 - d3)

Redundant test 1: h = f*L/(d2 - d1)

x1 = h*(d1 - d)/f, x2 = h*(d3 - d)/f

Redundant test 2: x1 + L - x2 = 2*d

In the implementation, residuals can be calculated, and when the residuals are larger than expected from the noise in the camera measurements and the geometric model errors, the system is stopped.
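The two redundant tests of figure 13 and the residual check can be sketched directly from the formulas above. The tolerance value is an illustrative assumption; a real system would set it from the measured noise level:

```python
def redundancy_residuals(f, d, L, d1, d2, d3, d4):
    """Residuals of the two figure 13 redundancy tests: the two height
    estimates should agree, and the horizontal offsets should close up."""
    h_a = f * L / (d4 - d3)
    h_b = f * L / (d2 - d1)        # redundant test 1: should equal h_a
    x1 = h_a * (d1 - d) / f
    x2 = h_a * (d3 - d) / f
    closure = x1 + L - x2 - 2 * d  # redundant test 2: should be ~0
    return h_a - h_b, closure

def geometry_ok(f, d, L, d1, d2, d3, d4, tol=1e-3):
    """True when both residuals are within the expected noise."""
    r1, r2 = redundancy_residuals(f, d, L, d1, d2, d3, d4)
    return abs(r1) < tol and abs(r2) < tol
```

When `geometry_ok` fails, the system is stopped, as described above.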

The redundancy tests in figure 12 can be performed during the calibration of the cameras, during tests against known geometries of the pattern, and upon the detection of all interruptions of the pattern made by thin objects. With an object of some height the situation will be approximately as outlined in Figure 14. Here several inequality tests can be made to prove that the object is not thin, and redundant measurements of the height of the objects can be obtained.

L, d and f known and d5, d6, d7, d8 measured:

From static measurements: h = f*L/(d4 - d3), x1 = h*(d1 - d)/f, x2 = h*(d3 - d)/f

From dynamic measurements: x5 = h*(d5 - d)/f, x6 = h*(d6 - d)/f, x7 = h*(d7 - d)/f, x8 = h*(d8 - d)/f

Test of static redundancy: x1 + L - x2 = 2*d => OK

Test of dynamic 3D object: x5 + x7 > 2*d => 3D object OK

Test of dynamic 3D object redundancy: x6 + x8 > 2*d => 3D object OK

(x5 - x1) + x7 + x2 > L => 3D object OK

(x6 - x1) + x8 + x2 > L => 3D object OK
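The inequality tests above can be collected into a single check. The interpretation of the measured quantities d5-d8 as shadow-end positions is my reading of figure 14, so this is a hedged sketch of the test logic rather than a definitive implementation of the claimed geometry:

```python
def is_3d_object(f, d, L, h, x1, x2, d5, d6, d7, d8):
    """Apply the figure 14 inequality tests: project the measured values
    d5..d8 onto the floor and flag a 3D (tall) object when the projections
    from the two cameras do not close up as they would for a flat
    interruption of the marking."""
    x5, x6, x7, x8 = (h * (di - d) / f for di in (d5, d6, d7, d8))
    tests = [
        x5 + x7 > 2 * d,
        x6 + x8 > 2 * d,              # redundant test
        (x5 - x1) + x7 + x2 > L,
        (x6 - x1) + x8 + x2 > L,      # redundant test
    ]
    return all(tests)
```

An object failing all tests can be ruled out as thin, i.e. not a possible human.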

Besides the geometric redundancies exemplified in figures 13 and 14, there is a great deal of other redundant information to be used in a pattern interrupted by static and moving objects. Examples are:

- Colour of the pattern. To be sure that the system does not find other objects with a colour similar to that of the pattern, and therefore starts supervising other parts of the robot cell too, the pattern of the marking can be uniquely multi-coloured as exemplified in Figure 15. Each marking includes parts of different colours. The different colours can be processed in parallel and the patterns obtained in each colour can be fused to obtain the global pattern. The interference between objects on the floor and the pattern can be obtained in different colours, and residuals can be formed between the interference detected in the different colours, thus obtaining redundant detection of static and dynamic objects in the scene.

- The shapes of the pattern elements can also be made unique, as exemplified in Figure 16. In the same way as described for colour patterns, unique shapes can be used to avoid confusing the safety pattern with objects in the cell. The pattern can also be made with pointing elements, such as the triangles in figure 16, to show in which direction the robot is located, giving redundancy in detecting the direction of movement of a critical object.

- Using more than one camera makes it possible to exchange captured images between the camera processing systems to check that the results are the same.

- The redundancy with respect to cameras may be necessary when objects like conveyors or tables prevent one of the cameras from registering the whole pattern. In this case a third camera, as in figure 11, will be needed. Simultaneously this increases the redundancy for the rest of the pattern.

It is important to use the redundancies in the system not only to have a high probability of detecting a static non-critical object as non-critical but, of course, also to detect a critical object with high probability. The main feature of a critical object is its dynamics, which means that there will be a difference between the interruptions of the patterns both relative to the initially captured scene and relative to a buffer of history scenes. The evolution of the changes in the interruptions in the history scenes gives redundant information on the direction of movement, the speed of movement and changes in the structure of the interruptions of the patterns. All of these registrations of changes can be obtained with redundant measurements in different colours if the pattern is colour-coded, and in the interruption of different pattern shapes if the pattern is shape-coded.

For test purposes, standard images with different patterns should be stored in the system, and pattern recognition and parameter calculations should be made for these stored patterns to test the software. These tests could be repeated at predetermined time intervals and should give the correct stored parameter results. If there are errors in the results when processing the standard images, the software has problems and the system must be shut down. These test patterns should include different cases with static and dynamic objects in the scene.
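The periodic self-test described above can be sketched as running the recognition routine on the stored standard images and comparing its parameter output with the stored reference results. The function signature is an illustrative assumption:

```python
import numpy as np

def self_test(process, stored_cases, tol=1e-6):
    """Run the pattern-recognition routine on stored standard images and
    compare the resulting parameters with the stored reference results;
    a mismatch indicates a software problem and the system must shut down."""
    for image, expected in stored_cases:
        got = np.asarray(process(image), dtype=float)
        if got.shape != np.shape(expected) or not np.allclose(got, expected, atol=tol):
            return False
    return True
```

Scheduling this check at fixed intervals gives a continuous confirmation that the recognition software is stable.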

In the following, possible calculations and tests that can be made to make sure that the human will be detected as a safety-critical object, and that no other objects will be classified as safety-critical, are listed.

- Pattern recognition must give the same result in different colours for the same pattern, which means that, for example, the position, curvature, width and length of two lines close to each other should be the same even if these parameters are calculated independently by different software codes in different computers.

- Pattern recognition will test that pattern elements have the correct shape features. Here the shape features of a marking in the captured image are compared with the same features of a stored version of the same reference marking.

- Test of pattern recognition on stored markings with defined standard test parameters, which means that the system makes use of standard markings loaded into the system at system start-up to test that the pattern recognition software is stable.

- Test of pattern recognition with static calibration view. The original static marking pattern is used to check that the software is stable and that the calibration view is the same.

- Test of pattern recognition with stored standard tests for human intrusion. Different cases of different people in the workspace have been recorded once, and the corresponding images are run to test the system.

- Calculation of the robot position from missing parts in the pattern closest to the robot. The calculated robot arm position is compared with the arm position calculated by the robot controller using the robot joint sensors.

- Calculation of the robot position from 3D recognition using robot features. Standard markings can also be mounted on top of the upper robot arm to get a redundant measurement of the upper arm position.

- Making use of the distribution of missing parts in the floor pattern in order to find 2D objects. This is done by comparing the missing parts seen from at least two cameras.

- Making use of 3D identification to find 2D objects that cannot be human. Object structures as seen from at least two cameras are compared; if they are almost identical, this is a redundant finding that the object has a low height.

- Tests of 2D object calculations in different pattern colours. If the system works properly, the result should be the same in all colours.

- Comparing identified 2D object features with other identifications, such as missing parts and 3D identification. Common feature extraction is made and the result is compared with the missing parts of the pattern.

- Making use of the distribution of missing parts in the floor pattern to find 3D objects that are also present in the calibration view. When the first static images are collected, static 3D objects will be found. These will give missing parts of the markings, and these missing parts should remain the same all the time. If this is not the case an error will be given.

- Tests of 3D object calculations in different pattern colours.

- Making use of the distribution of missing parts in the floor pattern in order to find 3D objects that have been static for a certain time.

- Deciding the position of the human from the camera view calibration. When the cameras are calibrated, the markings are used as landmarks and it is possible to obtain a relation between the pixel positions in the cameras and the floor positions. Therefore it is possible to determine the position of a human in the image field.

- Deciding the position of the human from the position in the pattern. The cameras can also be used to measure the relative distance between a human and the markings; this is a redundant way to obtain the position of the human.

- Deciding the position of the human from different pattern colours. With different colours of the markings, distance measurements can be made in different colours, increasing the redundancy of the determination of the human position.

Figure 17 shows an example of a system architecture adapted to safe camera supervision by testing redundant calculations in steps like those outlined above. 70 and 71 are cameras, and 72 and 73 are memories for read-out of the camera image devices. The cameras are separated in order to obtain 3D information from the scene. In 76, 78, 79 and 80, pattern calculations are made in different colours, and for each camera the resulting feature parameters in the different colours are tested in 77 and 81. If the differences are larger than a given alarm limit, an alarm will be given. 75 is a database with calibration camera views and correct pattern feature parameters, as well as test camera views with test patterns and test pattern parameters. These data are used by the pattern calculation modules 78 and 79, and in 82 and 96 pattern feature parameters for these patterns are compared with the corresponding feature parameters calculated for the calibration and test patterns stored in 75. In 84, 3D calculations on known pattern elements are made in order to test that the cameras have not been moved and that the optics have not been changed. The geometry parameters obtained are compared with the same parameters calculated during the calibration of the camera system and stored in 75. The calculations in 84 are moreover tested by the use of standard patterns given to 78 and 79 from 75, with test conditions in 85.

In box 87, 3D calculations are made from pattern gaps, whereby 2D objects are discriminated as well as static 3D objects. Object parameters are stored in the data container 86, and box 87 makes use of the data in 86 to decide which objects are static and which are dynamic. In box 83, normal 3D feature identification is made without the use of the patterns, and in box 88 the results of the normal 3D recognition are compared with the 3D recognition based on the reference patterns. In the last step, tests are made on the robot position, where 89 calculates the robot position from pattern gaps and 90 from controller signals, with comparison in box 93, and where boxes 91 and 92 calculate the position of the human (dynamic object) from the camera view calibration 91 and from the position relative to the pattern, with a test in 94. Knowing the positions of the robot and the human, a decision is made in 95 on what safety action to take.

Figure 18 shows how the safe camera supervision system can also be used to make the robot control safer when the robot is controlled in its manual state. The same robot cell 97 as shown before is used, but now the operator is close to the robot when he programs the robot and tests programs. The safe camera supervision is able to detect both the operator and the robot arm with redundant measurements and calculations and is therefore able to tell the controller the position of the operator in relation to the robot arm. If the robot controller has a high-safety implementation, the robot program coming from memory 98 is fed to the program executer 99, which generates position targets for the motion control 100, including the servo that controls the robot.

In order to have high safety, program execution 101 and motion control 102 are also made in another computer, and the results of the calculations of targets and servo positions are compared in 104 and 105. Moreover, the positions of the robot arms are measured by box 103 to be able to compare with the reference positions generated by box 102 (or 100). Even if this system will be extremely safe with respect to robot control faults, it will not be able to detect errors in the robot program coming from memory 98. Besides software faults in 98, the operator may have loaded the wrong program into 98, making the robot move towards the operator instead of away from him. To eliminate this risk, the camera supervision system 108 sends the position of the operator to the controller, and in 100 the positions in the program to be executed are compared with the position of the operator; if there is a risk of collision, the dangerous program position will not be executed. The camera supervision system software in memory 108 could be run in the same computer as the redundant robot control software 101-103.
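The check in block 100 against the operator position reported by the camera system can be sketched as a clearance test on each programmed target before it is executed. The clearance threshold is an illustrative assumption:

```python
import math

def safe_target(target_pos, operator_pos, min_clearance):
    """Return True when a programmed target position keeps at least
    min_clearance distance from the operator position reported by the
    camera supervision system; unsafe targets are not executed."""
    return math.dist(target_pos, operator_pos) >= min_clearance
```

Targets failing this test are simply withheld from the motion control, which is what protects against a wrongly loaded program.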

Figure 19 exemplifies that the pattern on the floor does not need to be circular; a good design is to adapt the shape of the lines to the shape of the work envelope projected on the floor. 111 are cameras that detect the link system of the parallel robot 110 as well as operators coming into the workspace.

The present invention is not limited to the embodiments disclosed but may be varied and modified within the scope of the following claims. For example, the shape and colour of the marking on the floor can be varied in many different ways. Also, the number of markings can vary from one to a large number. The computer unit of the supervision system can be a separate unit or the computer unit of the control system of the robot.