Title:
METHOD FOR TESTING AN AUTONOMOUS SYSTEM
Document Type and Number:
WIPO Patent Application WO/2018/036698
Kind Code:
A1
Abstract:
The invention relates to a method for testing an autonomous system (RE) of which a virtual image (VE) exists, the virtual image comprising at least one virtual image of an autonomous component (DT(AC), DT(AC')), comprising the following steps: a) Acquiring of component data providing information in relation to a movement of the at least one virtual image of the autonomous component (DT(AC), DT(AC')); b) Creating, in the virtual image (VE), at least one virtual object (DT(H)) that can move within the virtual image (VE); c) Generating, in the virtual image (VE), a corpus (C) around the at least one virtual object (DT(H)) or/and the virtual image of the at least one component (DT(AC), DT(AC')), the corpus (C) defining a volume that cannot be entered by the virtual image of the at least one autonomous component (DT(AC), DT(AC')) or the virtual object (DT(H)); d) Representing, in the virtual image (VE), a movement of the at least one virtual object (DT(H)) or/and the virtual image of the at least one autonomous component (DT(AC), DT(AC')), the movement being restricted in that the volume of any corpus (C) cannot be entered; e) Acquiring reaction data in relation to the movement of the at least one virtual object (DT(H)) or/and the virtual image of the at least one autonomous component (DT(AC), DT(AC')); f) Evaluating a feasible course of movement of the at least one virtual object (DT(H)) or/and the virtual image of the at least one autonomous component (DT(AC), DT(AC')) considering the reaction data. The invention further relates to an autonomous system.

Inventors:
KRAUTWURM FLORIAN (DE)
Application Number:
PCT/EP2017/066559
Publication Date:
March 01, 2018
Filing Date:
July 04, 2017
Assignee:
SIEMENS AG (DE)
International Classes:
G05D1/02; G05B19/418
Domestic Patent References:
WO2016082847A1  2016-06-02
WO2003046672A2  2003-06-05
Other References:
CHABLAT D ET AL: "A distributed approach for access and visibility task with a manikin and a robot in a virtual reality environment", IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, IEEE SERVICE CENTER, PISCATAWAY, NJ, USA, vol. 50, no. 4, 2 August 2003 (2003-08-02), pages 692 - 698, XP011098931, ISSN: 0278-0046, DOI: 10.1109/TIE.2003.814760
AGOSTINO DE SANTIS ET AL: "SAFETY ISSUES FOR HUMAN-ROBOT COOPERATION IN MANUFACTURING SYSTEMS", VRTEST 2008 TOOLS AND PERSPECTIVES IN VIRTUAL MANUFACTURING, 2 July 2008 (2008-07-02), Italy, pages 1 - 12, XP055326021, Retrieved from the Internet [retrieved on 20161205]
Claims:
Patent claims

1. Method for testing an autonomous system (RE) of which a virtual image (VE) exists, the virtual image comprising at least one virtual image of an autonomous component (DT(AC), DT(AC')), comprising the following steps:

a) Acquiring of component data providing information in relation to a movement of the at least one virtual image of the autonomous component (DT(AC), DT(AC'));

b) Creating, in the virtual image (VE), at least one virtual object (DT(H)) that can move within the virtual image (VE);

c) Generating, in the virtual image (VE), a corpus (C) around the at least one virtual object (DT(H)) or/and the virtual image of the at least one component (DT(AC), DT(AC')), the corpus (C) defining a volume that cannot be entered by the virtual image of the at least one autonomous component (DT(AC), DT(AC')) or the virtual object (DT(H));

d) Representing, in the virtual image (VE), a movement of the at least one virtual object (DT(H)) or/and the virtual image of the at least one autonomous component (DT(AC), DT(AC')), the movement being restricted in that the volume of any corpus (C) cannot be entered;

e) Acquiring reaction data in relation to the movement of the at least one virtual object (DT(H)) or/and the virtual image of the at least one autonomous component (DT(AC), DT(AC'));

f) Evaluating a feasible course of movement of the at least one virtual object (DT(H)) or/and the virtual image of the at least one autonomous component (DT(AC), DT(AC')) considering the reaction data.

2. Method according to claim 1, wherein one criterion for evaluating a feasible course of movement is to avoid a collision between the at least one virtual object (DT(H)) and the virtual image of the at least one component (DT(AC), DT(AC')).

3. Method according to claim 1 or 2, wherein data in relation to the evaluated feasible course of movement are transferred from the virtual image (VE) of the autonomous system to the autonomous system (RE).

4. Method according to any of the previous claims, wherein in step a) the component data are formed by at least one of

- sensor data from a sensor on or at the autonomous component (AC, AC');

- sensor data from a sensor on or at the premises of the autonomous system (RE);

- data in relation to a movement of the autonomous component (AC, AC');

- simulation data simulating the behavior of an autonomous component (AC, AC');

- virtual reality data generated by movements within the virtual image (VE), in particular data created by use of virtual reality glasses when moving through the virtual image of the autonomous system (VE).

5. Method according to any of the previous claims, wherein at least one of the corpuses (C) created around the at least one virtual object (DT(H)) or/and the virtual image of the at least one autonomous component (DT(AC), DT(AC')) has the shape of a box.

6. Method according to any of the previous claims, wherein the corpus (C) around the at least one virtual object (DT(H)) or/and at least one component is variable in dependence on the size of velocity or/and direction of velocity or/and the existence of further virtual images of autonomous components (DT(AC), DT(AC')), in particular that the corpus has the shape of a box whose edges are prolonged in the direction of the movement or/and in relation to the size of velocity.

7. Method according to claim 6, wherein for the existence of further virtual images of autonomous components (DT(AC), DT(AC')) their number within a predefined section of the virtual image of the autonomous system (VE) or/and a type of the virtual image of the autonomous components (DT(AC), DT(AC')), in particular their hazardous potential, is considered.

8. Method according to claim 6 or 7, wherein the hazardous potential is determined dependent on at least one of:

- the size of the real autonomous components;

- the range of possible movements;

- the velocity range;

- the direction or/and size of acceleration;

- the weight of a moving part, in particular a robot arm, of the autonomous component (AC, AC').

9. Method according to any of the previous claims 6 to 8, wherein the corpus (C) is variable in that it can be switched off.

10. Method according to any of the previous claims, wherein in step e) the acquiring of reaction data takes place in the virtual image of the autonomous system (VE) or the real autonomous system (RE).

11. Method according to any of the previous claims, wherein the testing is performed before the real autonomous system (RE) as it is represented in the virtual image (VE) is in operation and the component data are simulated or/and taken from already existing parts of the autonomous system (RE).

12. Method according to any of the previous claims, wherein the testing is performed when the real autonomous system (RE) is in operation and wherein for the testing, in particular a regression testing, a comparison is performed between a procedure at a first point in time and the same procedure at a later point in time.

13. Method according to any of the previous claims, wherein the corpus (C) is formed such that it comprises at least one recess (R), the volume within such recess being excluded from the volume of the corpus (C) that is not allowed to be entered.

14. Method according to the previous claim 13, wherein for the excluded volume rules for access are installed depending on the object, in particular that the excluded volume is allowed to be entered or/and allowed to be entered only by the object or/and the work piece (WP).

15. Autonomous system, in particular for use in production, comprising at least one autonomous component (AC, AC'), for at least parts of which a virtual image (VE) exists, on which tests are performed according to a method according to one of the claims 1 to 14.

16. Piece of software for executing the steps of a method according to any of the claims 1 to 14 when run on a computer.

17. Data carrier on which the piece of software according to claim 16 is stored.

Description:

METHOD FOR TESTING AN AUTONOMOUS SYSTEM

Field of the Invention

The invention relates to a method for testing an autonomous system and an autonomous system.

Background

The process of setting up an automated system, e.g. a production system, can be divided into a planning phase for designing the system, an engineering phase for realizing the system, a commissioning phase for installing and testing the system, a production phase where the system goes live, and a maintenance and optimization phase running in parallel to the production phase, where the system is supervised, optimized and occurring faults are fixed.

In industrial manufacturing there is a tendency away from traditional, centralized systems towards autonomous, distributed systems. Autonomous distributed systems comprise components, e.g. robots, which are not controlled by a central instance and have a certain degree of autonomy for their actions.

Moreover, traditional equipment is used in combination with new technologies such as intelligent robots, CNC machines, 3D printers and other smart devices, which have an interface to a virtual simulation or/and emulation environment. Hence, interaction between the real world and a virtual image thereof can be provided. A term used for these manufacturing systems is cyber-physical production system. This virtual, digital copy on the IT platform on which it is running is often referred to as a "digital factory".

For planning, testing and operating of these systems a concept can be used where for the real or physical factory there is a virtual, digital copy reflecting certain aspects of a certain component or group of components. This virtual digital copy is sometimes referred to as a digital twin.

The underlying idea is to explore or control the behavior of some or all components of the physical factory without having to actually run the procedure on the physical components.

In these autonomous, distributed systems there is often cooperation between the autonomous components, e.g. robots, and human co-workers. However, this bears the danger not only that collisions between the autonomous components might occur but also that human co-workers are injured by the autonomous components.

It is one object of the invention to offer a possibility for effective testing of an autonomous system, in particular with regard to safety aspects.

Brief Summary of the Invention

This is solved by what is disclosed in the independent claims. Advantageous embodiments are the subject of the dependent claims.

The invention relates to a method for testing an autonomous system of which a virtual image exists. The virtual image comprises at least one virtual image of an autonomous component.

For example, the virtual image is set up as a duplicate of the autonomous system running on a computer that provides the processing power. Input to that duplicate may comprise e.g. the architecture, hardware or data stemming e.g. from sensors of the real factory. Duplicate is not to be understood to mean that there is an exact copy showing every single detail of the autonomous system. Preferably certain aspects of an autonomous system are emulated in the virtual image with the help of the processing power of the computer on which the virtual image is running.

In a step, component data providing information in relation to a movement of the at least one virtual image of an autonomous component are acquired, e.g. simulation data or data from the real autonomous system or combinations thereof are fed into the virtual image.

In the virtual image at least one virtual object that can move within the virtual image (VE) is created, e.g. a virtual human operator.

In the virtual image a corpus is generated around the at least one virtual object or/and the virtual image of the at least one component. The corpus defines a volume that can be entered neither by the virtual image of the at least one autonomous component nor by the virtual object.

E.g. the corpus generates a buffer zone around a virtual element that can then be used for calculations.

In the virtual image a movement of the at least one virtual object or/and the virtual image of the at least one autonomous component is represented. The movement is restricted in that the volume of any corpus cannot be entered.

E.g. with this buffer zone a collision would be detected in the virtual image before an actual collision - considering the actual, virtual boundaries of the elements - takes place.
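To make this buffer-zone idea concrete, the following sketch (Python, purely illustrative and not part of the patent) approximates each corpus as an axis-aligned box and flags a collision as soon as two buffer volumes intersect, i.e. before the actual element boundaries would touch. The names Corpus and corpuses_overlap and the example coordinates are assumptions made for this example.

    from dataclasses import dataclass

    @dataclass
    class Corpus:
        """Axis-aligned box volume generated around a virtual element (hypothetical helper)."""
        min_corner: tuple  # (x, y, z) lower corner
        max_corner: tuple  # (x, y, z) upper corner

    def corpuses_overlap(a: Corpus, b: Corpus) -> bool:
        """True if the two buffer volumes intersect, i.e. a 'collision' is flagged
        before the actual element boundaries would touch."""
        return all(a.min_corner[i] <= b.max_corner[i] and b.min_corner[i] <= a.max_corner[i]
                   for i in range(3))

    # Example: corpus around a virtual human DT(H) and around a robot twin DT(AC)
    human_corpus = Corpus((0.0, 0.0, 0.0), (1.0, 1.0, 2.0))
    robot_corpus = Corpus((0.8, 0.5, 0.0), (2.0, 2.5, 2.5))
    print(corpuses_overlap(human_corpus, robot_corpus))  # True: the buffer zones already intersect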

Reaction data in relation to the movement of the at least one virtual object or/and the virtual image of the at least one autonomous component are acquired.

E.g. data are gathered for which paths taken by the virtual object a collision is prone to happen.

A feasible course of movement of the at least one virtual object or/and the virtual image of the at least one autonomous component is evaluated. Thereby the reaction data are considered. E.g. the reaction data are processed to determine possible courses of movement such as paths taken at a certain time or/and movements for production tasks etc.

Preferably the corpus has the shape of a box or parallelepiped. This reduces computational effort.

According to an advantageous embodiment the corpus around the virtual object or/and at least one autonomous component is variable in dependence on the direction and magnitude of the velocity of the object or/and the autonomous component. In the example of a corpus in the form of a box, the box's edges are prolonged in the direction of the movement. The actual length by which an edge is prolonged depends on the magnitude of the velocity, preferably on its square. In addition or alternatively the actual length also depends on the transmission time and the processing time. Thus delay times due to a later transmission into the real autonomous system and processing times of virtual as well as real elements can be taken care of, and a safety buffer can be installed.

The invention further relates to a corresponding autonomous system, computer program and data carrier.

Brief description of the drawings:

Further embodiments, features, and advantages of the present invention will become apparent from the subsequent description and dependent claims, taken in conjunction with the accompanying drawings, which show:

Fig. 1 a schematic view of a data transfer into a virtual environment and from a virtual environment to a real environment;

Fig. 2 a schematic graph of a real environment with autonomous components and a human coworker and a virtual environment representing a copy of the real environment exchanging data;

Fig. 3 a schematic graph of a virtual image of a human operator surrounded by a box extended in the direction of movement, moving towards an autonomous component in the virtual environment with the digital image of the production system;

Fig. 4 a schematic graph of a corpus around the virtual image of an autonomous object, said corpus having a recess to allow interaction, e.g. work piece handling, with a human coworker;

Fig. 5 a schematic graph as that of Fig. 4, where the recess is enlarged to facilitate interaction;

Fig. 6 a schematic graph showing a corpus suited for a human worker formed by a composition of boxes surrounding the torso and the extremities.

In the following description, various aspects of the present invention and embodiments thereof will be described. However, it will be understood by those skilled in the art that embodiments may be practiced with only some or all aspects thereof. For purposes of explanation, specific details and configurations are set forth in order to provide a thorough understanding. However, it will also be apparent to those skilled in the art that the embodiments may be practiced without these specific details.

A testing of an autonomous system is often performed

a) before a system enters the production phase and

b) when the production has already started, for purposes of maintenance or e.g. if new software updates are used, new components are introduced etc.

For a testing before the system has entered the production phase no data from the real system are available. Hence artificial data generated by simulation or taken from similar environments are used in the virtual environment VE for simulating procedures in the system, e.g. production tasks in a production system.

This virtual environment VE artificially reproduces a real environment RE that does not yet exist, or an already existing real environment RE or some aspects thereof. Often elements in the virtual environment VE which represent existing or planned elements in the real environment are referred to as digital twins.

For a testing of an already running system, e.g. when a new software update has been installed, actual data from the real system or the real environment can be taken. Also, a mix of actual data and artificial data may be used as input data, e.g. in a situation when new components are introduced into the system or errors and/or faults are injected into the system for testing purposes. With these input or component data a behavior of the system can be simulated in the virtual environment VE.
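As an illustration of how such mixed input data might be assembled, the following sketch (Python, illustrative only) merges real and simulated samples and optionally injects a fault; the function name component_data_stream and the sample format are assumptions for this example, not an interface defined by the patent.

    def component_data_stream(real_samples, simulated_samples, inject_fault=None):
        """Yield component data for the virtual environment, mixing real and artificial data.

        real_samples:      data taken from the already running real system (may be empty)
        simulated_samples: artificial data generated by simulation
        inject_fault:      optional callable that distorts a sample to emulate an error or fault
        """
        for sample in list(real_samples) + list(simulated_samples):
            if inject_fault is not None:
                sample = inject_fault(sample)
            yield sample

    # Hypothetical usage: positions from the real plant plus simulated ones,
    # with a fault injected that offsets every reading by 5 cm.
    real = [(1.00, 2.00), (1.10, 2.00)]
    sim = [(1.20, 2.00)]
    for s in component_data_stream(real, sim, inject_fault=lambda p: (p[0] + 0.05, p[1])):
        print(s)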

Further, a virtual object DT(H), e.g. a human or a delicate robot, which moves in between digital representations of the autonomous components AC, AC', is introduced into the virtual environment VE. Hence there is the danger that the virtual object DT(H) would be affected in a real environment, e.g. injured or destroyed, by the autonomous components if a collision occurs.

Around the virtual object DT(H) a corpus C is generated. According to the embodiment the corpus C has the simple form of a box. Alternatively the corpus is formed by a composite of a variety of volumes, e.g. reconstructing torso and extremities.

The virtual object DT(H) is moving within the virtual environment VE. The box C around the object H enlarges the object artificially and thus provides a buffer zone or safeguard against collisions with the digital twins DT(AC), DT(AC') or other installations of the system represented in the virtual image of the system. Thus, by having this artificially enlarged volume it can be tested and ensured that collisions with human co-workers will be recognized by the applied collision detection procedures or/and algorithms. Further, it can be tested and established which appropriate actions have to be triggered at what time.

Thus a feasible course of movement can be explored. Examples for a course of movement are a path in between the digital twins DT(AC), DT(AC') which is possible and safe for the object H, or a movement of the object H when collaborating with the digital twin DT(AC), DT(AC') of an autonomous object AC, AC'.

By automating a path that a virtual object makes through the virtual image of a plant, automated testing of safety features can be started already at this early stage.
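One possible way to automate such a path check is sketched below: the waypoints of the virtual object's path are tested against a list of forbidden volumes and the unsafe steps are reported. The helper names (path_violations, inside_sphere) and the spherical corpuses are illustrative assumptions, not the collision detection procedure of the patent.

    def path_violations(waypoints, forbidden_corpuses, inside):
        """Return the indices of waypoints that fall inside any forbidden corpus.

        waypoints:          list of (x, y, z) positions along the automated test path
        forbidden_corpuses: list of volumes that must not be entered
        inside:             predicate inside(point, corpus) -> bool supplied by the 3D model
        """
        return [i for i, p in enumerate(waypoints)
                if any(inside(p, c) for c in forbidden_corpuses)]

    # Hypothetical usage with simple spherical corpuses given as (center, radius):
    def inside_sphere(p, c):
        center, radius = c
        return sum((p[i] - center[i]) ** 2 for i in range(3)) <= radius ** 2

    path = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
    corpuses = [((2, 0, 0), 0.5)]
    print(path_violations(path, corpuses, inside_sphere))  # [2] -> the third waypoint is unsafe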

The movement data of the object H are fed or mapped into the virtual environment VE, as indicated by the arrow 1 in Fig. 1. The movement data may be created by a person wearing virtual reality glasses VRG when being immersed in the virtual environment VE. Thus, just by virtually walking around, explorative testing of the virtual commissioning can be performed.

Alternatively or additionally movement data may be generated via a computer simulation e.g. on a local workspace LW, possibly also using real data from the real environment RE as indicated by arrow 3.

In this way a testing of e.g. a production plant can be done before the real production plant exists, which facilitates the planning, e.g. arrangement and radii of movement of autonomous objects in order not to endanger an object, in particular a human.

By the additional volumes safety measures can be implemented that account for inaccuracies of path or velocity determination as well as delay times occurring due to transfer between virtual and real environment or/and processing times etc.

Thus during a virtual commissioning phase a behavior of a physical production environment is simulated or replicated with the help of a software system. Many tests normally performed in a commissioning phase, where the installation of the physical production environment has taken place and it needs to be verified whether specifications can be met, can thus be performed before investments and work have been done.

In this virtual commissioning phase a human can be introduced as virtual object H and move through the virtual image of the future production environment, and courses of movement can be explored, as explained above. In particular, to approach the possible limits, the corpus, in particular the box, around the human can be disabled, so that the human can come as close to the virtual image of an element of the production plant as their physical dimensions allow.

In sum, safety features can be tested in very early stages and also in an automated way, which saves cost and time.

Alternatively or additionally a testing is performed while a real system is already running. Then data for the simulation of the virtual environment can also be taken from the real environment or/and these data can be taken as a starting point for generating data.

As the feedback of the virtual images of real elements in the real production plant is fed back or mirrored to the real environment as indicated by arrow 2, this can be used for testing, in particular regression testing where differences from prior procedures are to be detected and explained. Hence, in the commissioning phase reactions in the real factory or production plant can be observed when the position of the human as virtual object H is being mapped from its virtual position into the real factory. Thus, regression testing in the real commissioned environment can be performed.
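A regression comparison of this kind could, for instance, contrast reaction data recorded for the same procedure at two points in time, as in the following illustrative sketch; the function name regression_deviations, the tolerance value and the sample data are assumptions for this example.

    def regression_deviations(baseline, current, tolerance=0.05):
        """Compare reaction data of the same procedure recorded at two points in time.

        baseline, current: lists of numeric samples (e.g. positions or delays per step)
        Returns the indices where the later run deviates from the baseline by more
        than the tolerance, i.e. the differences that need to be detected and explained.
        """
        return [i for i, (b, c) in enumerate(zip(baseline, current))
                if abs(c - b) > tolerance]

    run_t0 = [0.10, 0.12, 0.11, 0.30]   # reaction data from the first point in time
    run_t1 = [0.10, 0.13, 0.25, 0.31]   # same procedure repeated at a later point in time
    print(regression_deviations(run_t0, run_t1))  # [2] -> step 2 behaves differently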

In addition, if the human as virtual object H visits several positions of the virtual image, a virtual inspection of the running system is performed, provided the respective data from the real world are fed into the virtual image.

The corpus C may be of variable shape. According to one embodiment the variability is in that the corpus can be shrunk to no volume, i.e. switched off. This means that a collision occurs if the boundaries of the object H itself hit the boundaries of a digital twin DT(AC), DT(AC') itself or the boundary of a corpus C around the digital twin DT(AC), DT(AC'). Thus, e.g. the limits of possible paths can be explored. Further embodiments where the corpus C is variable will be explained in relation with Fig. 3.

Fig. 2 depicts a real environment RE of a section of a production plant which is exchanging data D with a virtual environment VE reflecting the real environment or certain aspects thereof and is used for detailing the realization of the virtual image VE.

In the real environment RE there are autonomous components AC, AC' that interact with each other in order to realize a task, e.g. a production process. The autonomous components AC, AC' are e.g. robots that are adapted to perform a certain set of production tasks within a production process. The autonomous components AC, AC' are not centrally controlled but have a defined range of autonomy to make decisions.

The range of autonomy may depend on the specific production task, the actual constellation of the autonomous components AC, AC', etc.

Within the production plant there is further an object H, e.g. a human, which interacts with the autonomous components AC, AC'.

The interaction may be that the human H performs production tasks together with one or more autonomous components AC, AC' or that the human H is moving within the production plant.

The virtual environment VE is a virtual image of a real environment RE. The virtual image is running on an IT platform. This means in particular that a real environment RE is emulated with the help of a real computer such that a share of the real environment RE is reproduced virtually on that computer. This enables monitoring, supervision or testing of the real environment RE, e.g. an autonomous system, without interfering, i.e. intervening, in running operation.

The share of the autonomous system being reproduced depends on the objective of the virtualization. E.g. only a certain aspect or part may be virtualized for its optimization or testing. Correspondingly, each element or only some of the elements of the real environment RE has a corresponding element in the virtual environment VE.

An element such as an autonomous component, e.g. a robot, has a plurality of sensors which produce sensor data.

A sensor may comprise a position detection sensor, a movement detection sensor, an acceleration sensor, a force sensor, a camera, an audio sensor, a smell sensor, a sensor detecting the presence of certain substances, etc.

Correspondingly, the sensor data may comprise position-related data, e.g. spatial position, velocity/acceleration or/and direction of movement/acceleration related data, data in relation to size and direction of a force, visual data, audio data, scent data, data in relation to existence and amount of certain substances, etc.

Alternatively to the case where each element in the real environment RE has a corresponding element in the virtual environment VE, only certain elements of the real environment RE have a corresponding object in the virtual environment VE. This allows modeling, testing or surveillance of certain aspects of a production plant. This goal can also be achieved with an embodiment where all the elements of the real environment RE may have a corresponding element in the virtual environment, but only data in regard to some elements are transferred from the real environment RE to the virtual environment VE or used for further computation in the virtual environment VE.

In the virtual environment VE the actual behavior of the elements in the real environment RE can be modeled with the help of the virtual or computerized representations of the real elements. As mentioned above, the virtual representations are sometimes referred to as digital twins.

With the help of these digital twins the production plant in the real environment RE can be modeled. The sensors of the real objects provide data which are transferred into the virtual environment VE. There, with the help of a 3D modeling software the digital twin is usually made identical to the real object, e.g. in shape, but in particular in relation to its actual state, e.g. position of a robot and position of its gripping arm, motion of the various constituents of the robot etc.

Therewith the future behavior of the autonomous components AC, AC' can be simulated and future situations, e.g. collisions, can be determined. These simulation data are transferred back to the autonomous components AC, AC'. They may use this information for decision making.
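The following minimal sketch illustrates, under simplifying assumptions, how a digital twin could mirror sensor readings and extrapolate a near-future state; the class name DigitalTwin, the constant-velocity prediction and the numeric values are illustrative only and not taken from the patent.

    from dataclasses import dataclass

    @dataclass
    class DigitalTwin:
        """Minimal state mirror of a real autonomous component (illustrative only)."""
        position: tuple   # last reported (x, y, z)
        velocity: tuple   # last reported (vx, vy, vz)

        def update_from_sensors(self, position, velocity):
            """Mirror the latest sensor readings transferred from the real environment."""
            self.position = position
            self.velocity = velocity

        def predict_position(self, dt):
            """Simulate the near-future state, e.g. to check for upcoming collisions."""
            return tuple(p + v * dt for p, v in zip(self.position, self.velocity))

    twin = DigitalTwin(position=(0.0, 0.0, 0.0), velocity=(0.0, 0.0, 0.0))
    twin.update_from_sensors(position=(1.0, 2.0, 0.0), velocity=(0.5, 0.0, 0.0))
    print(twin.predict_position(dt=2.0))  # (2.0, 2.0, 0.0) after two more seconds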

For the transfer of data D between the real environment RE and the virtual environment VE various transfer modes can be used, such as wire-bound or wireless transfer methods or any combination thereof.

According to an advantageous embodiment there is a wireless connection, such as one according to an 802.11 standard, from the autonomous components AC, AC' to a central transmitting entity in the real environment. From that central transmitting entity there is a wire-bound connection, e.g. an Ethernet connection.

Alternatively or additionally the data are transferred across wired connections to stationary autonomous components, such as conveyor belts or fixed robots.

In Fig. 3 an embodiment is depicted where the corpus C has the form of a box having edges parallel to the direction of movement and perpendicular thereto, i.e. a cuboid. The box is enlarged in the direction of movement. This enlarging is done in relation to the velocity, i.e. for a higher velocity the edge parallel to the direction of movement becomes longer than for a lower velocity.

According to an advantageous embodiment the length of the edge is calculated in this way:

First, there is a basic box determined that offers sufficient space in all directions around the human in order to remain unharmed. Then, there is an additional edge length added in the direction of movement. This additional edge length is calculated such that the distance presumably covered during the travel time of the signal from the human or endangered object to the virtual entity and back plus the respective processing times is still inside the box. This estimate is done based on previous measurement values of velocity/direction and processing times. Preferably also inaccuracies of position, velocity or/and time measurements etc. are taken into account.
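Expressed as a sketch under these assumptions (constant measured velocity during the delay, illustrative parameter names and numbers not given in the text), the edge length in the direction of movement could be computed as follows:

    def extended_edge_length(basic_edge, speed, round_trip_time, processing_time,
                             inaccuracy_margin=0.1, quadratic_factor=0.0):
        """Length of the box edge in the direction of movement (illustrative sketch).

        basic_edge:        edge of the basic box that already offers sufficient space
        speed:             measured magnitude of the velocity
        round_trip_time:   signal travel time object -> virtual entity -> back
        processing_time:   virtual and real processing delays
        inaccuracy_margin: allowance for position/velocity/timing inaccuracies
        quadratic_factor:  optional term growing with the square of the speed
        """
        delay_distance = speed * (round_trip_time + processing_time)
        return basic_edge + delay_distance + quadratic_factor * speed ** 2 + inaccuracy_margin

    # Hypothetical numbers: 1 m basic box, human walking at 1.5 m/s, 80 ms round trip,
    # 40 ms processing time -> the edge grows by 0.18 m plus the inaccuracy margin.
    print(extended_edge_length(1.0, 1.5, 0.08, 0.04))  # 1.28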

In addition, in Fig. 3 also the autonomous component AC is equipped with a box as surrounding corpus C. By using surrounding corpuses not only for the object at risk, in particular the human, the interconnection of the individual movements of the autonomous components and objects at risk can be considered more easily. Advantageously the edge length of both safety boxes is extended in the direction of movement. Equipping not only the virtual object DT(H) but also the virtual image of the autonomous component DT(AC) with a corpus C is computationally advantageous when considering the movements and velocities, especially because of the distribution of control.

In Fig. 4 an embodiment is depicted wherein in the corpus C around the virtual image of an autonomous component DT(AC), DT(AC') a recess R is formed. The volume within that recess is excluded from the volume of the corpus that cannot be entered. Additional rules may be applied for the volume of the recess R, e.g. that the recess volume may be entered generally or be entered by certain objects only, e.g. only by humans.

According to a further embodiment a corpus C is defined differently for different (digital twins of) objects. E.g. there is a corpus type for human actors, a further corpus type for cyber physical components, a corpus type for work pieces or a corpus type for safety zones. Having made this distinction, a finer set of rules can be defined, in order to allow interactions in the course of the production process but still ensure the safety of humans and components.

According to an exemplary set of rules no collisions may be allowed between humans and cyber physical components, in between work pieces, between cyber physical components and safety zones, or in between cyber physical components.

"Collisions" or rather interactions may be allowed between humans or cyber physical components and work pieces.

Thus, interaction or/and contact necessary in the production process is possible in small, defined corridors. Thereby dangerous collisions can still be avoided.

According to Fig. 5 the recess is enlarged to facilitate the handling of a work piece WP, in particular when a set of rules for intrusion as above is used. Thus, e.g. a large corridor may be defined to enable an easy handling whilst still ensuring that no dangerous collision happens.

In Fig. 6 a human is depicted who is wearing a detector for position determination. Based on these detector data a corpus C attached to the human can be generated in the virtual environment. In the depicted example the corpus C is formed by a plurality of boxes surrounding the body and the arms. The number of boxes used for forming the corpus C depends on the working environment and the duties of the human coworker.
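A composite corpus of this kind could, for instance, be built from the tracked bounds of the individual body parts, as in the following illustrative sketch; the part names, the margin parameter and the box representation are assumptions for this example.

    def composite_corpus(tracked_parts, margin=0.1):
        """Build a corpus as a list of axis-aligned boxes, one per tracked body part.

        tracked_parts: dict mapping a part name (e.g. 'torso', 'left_arm') to its
                       (min_corner, max_corner) bounds reported by the position detector
        margin:        extra clearance added on every side of each box
        """
        boxes = []
        for name, (lo, hi) in tracked_parts.items():
            boxes.append((name,
                          tuple(c - margin for c in lo),
                          tuple(c + margin for c in hi)))
        return boxes

    # Hypothetical detector output for the torso and one arm:
    parts = {
        "torso":    ((0.4, 0.4, 0.0), (0.6, 0.6, 1.5)),
        "left_arm": ((0.2, 0.4, 0.9), (0.4, 0.5, 1.4)),
    }
    for name, lo, hi in composite_corpus(parts):
        print(name, lo, hi)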

Although the present invention has been described in accordance with preferred embodiments, it is obvious for the person skilled in the art that modifications or combinations between the embodiments, fully or in one or more aspects, are possible in all embodiments.

Parts of the description have been presented in terms of operations performed by a computer system, using terms such as data and the like, consistent with the manner commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. As is well understood by those skilled in the art, these quantities take the form of electrical, magnetic, or optical signals capable of being stored, transferred, combined, and otherwise manipulated through mechanical and electrical components of the computer system; and the term computer system includes general purpose as well as special purpose data processing machines, routers, bridges, switches, and the like, that are standalone, adjunct or embedded.

Additionally, various operations will be described as multiple discrete steps in turn in a manner that is helpful in understanding the present invention. However, the order of description should not be construed as to imply that these operations are necessarily order dependent, in particular the order of their presentation.

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention.

The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.