

Title:
MULTI-STAGE OBJECT POSE ESTIMATION
Document Type and Number:
WIPO Patent Application WO/2023/169838
Kind Code:
A1
Abstract:
The invention relates to the determination of a multi-dimensional pose of an object. The respective method for estimating the multi-dimensional pose mDOF of the object OBJ utilizes a 2D image IMA of the object OBJ and a plurality of 2D templates TEMPL(i) which are generated from a 3D model MOD of the object OBJ in a rendering procedure from different known virtual viewpoints vVIEW(i). In a stage S2 of template matching, one template TEMPL(J) is identified which matches best with the image IMA. In a stage S3 of correspondence determination, a representation repTEMPL(J) of the identified template TEMPL(J) is compared with a representation repIMA of the image IMA to determine 2D-3D-correspondences 2D3D between pixels in the representation repIMA of the image IMA and voxels of the 3D model MOD of the object OBJ. The 2D-3D-correspondences are further processed in a stage S4 of pose estimation to estimate the multi-dimensional pose mDOF based on the 2D-3D-correspondences 2D3D.

Inventors:
SHUGUROV IVAN (DE)
ILIC SLOBODAN (DE)
Application Number:
PCT/EP2023/054676
Publication Date:
September 14, 2023
Filing Date:
February 24, 2023
Assignee:
SIEMENS AG (DE)
International Classes:
G06V10/46; G06T7/73; G06V10/82
Domestic Patent References:
WO2020156836A1 (2020-08-06)
Foreign References:
US20190026917A1 (2019-01-24)
EP3511904B1 (2020-05-27)
EP3905194A1 (2021-11-03)
Other References:
PAUL J. BESL, NEIL D. MCKAY: "Method for registration of 3-D shapes", SENSOR FUSION IV: CONTROL PARADIGMS AND DATA STRUCTURES, 1992
Attorney, Agent or Firm:
SIEMENS PATENT ATTORNEYS (DE)
Claims

1. Method for estimating a multi-dimensional pose mDOF of an object OBJ based on an image IMA of the object OBJ, wherein

- in a preparational stage, the image IMA depicting the object OBJ and a plurality of templates TEMPL(i) are provided, wherein the templates TEMPL(i) are generated from a 3D model MOD of the object OBJ in a rendering procedure, wherein different templates TEMPL(i), TEMPL(j) with i≠j of the plurality are generated by rendering from different known virtual viewpoints vVIEW(i), vVIEW(j) on the model MOD,

- in a stage S2 of template matching, at least one template TEMPL(J) from the plurality of templates TEMPL(i) is identified which matches best with the image IMA, characterized in that

- in a stage S3 of correspondence determination, a representation repTEMPL(J) of the identified template TEMPL(J) is compared with a representation repIMA of the image IMA to determine 2D-3D-correspondences 2D3D between pixels in the image IMA and voxels of the 3D model MOD of the object OBJ, and

- a stage S4 of pose estimation, in which the multi-dimensional pose mDOF is estimated based on the 2D-3D-correspondences 2D3D.

2. Method according to claim 1, wherein in a stage S1 of object detection, preferably executed before the second stage S2 of template matching, a segmentation mask SEGM is generated from the image IMA which identifies those pixels of the image IMA which belong to the object OBJ.

3. Method according to claim 2, wherein for the generation of the segmentation mask SEGM a semantic segmentation is performed by dense matching of features of the image IMA to an object descriptor tensor o^k, which represents the plurality of templates TEMPL(i).

4. Method according to any one of claims 2 to 3, wherein in the stage S1 of object detection

- in a first step S1.1 an object descriptor tensor o^k = F^k_E(MOD) for the model MOD of the object OBJ is computed utilizing a feature extractor F^k_E from the templates TEMPL(i), wherein the object descriptor tensor o^k represents all templates TEMPL(i),

- in a second step S1.2 feature maps f^k = F_FE(IMA) are computed utilizing a feature extractor F_FE for the image IMA,

- in a further step S1.4 the binary segmentation mask SEGM is computed based on a correlation tensor c^k which results from a comparison of image features expressed by the feature maps f^k of the image IMA and the object descriptor tensor o^k.

5. Method according to claim 4, wherein in the further step S1.4 per-pixel correlations between the feature maps f^k of the image IMA and features from the object descriptor tensor o^k are computed, wherein each pixel in the feature maps f^k of the image IMA is matched to the object descriptor o^k, which results in the correlation tensor c^k, wherein a particular correlation tensor value for a particular pixel (h,w) of the image IMA is defined as c^k_hwxyz = corr(f^k_hw, o^k_xyz), wherein corr represents a correlation function, preferably according to a Pearson correlation.

6. Method according to any one of claims 1 to 5, wherein the stage S4 of pose estimation applies a PnP+RANSAC procedure to estimate the multi-dimensional pose mDOF from the 2D-3D-correspondences 2D3D.

7. Method according to any one of claims 1 to 6, wherein in the stage S3 of correspondence determination

- in a first step S3.1, 2D-2D-correspondences 2D2D between the representation repTEMPL(J) of the identified template TEMPL(J) and the representation repIMA of the image IMA are computed by a trained network ANN3,
- in a second step S3.2, the 2D-2D-correspondences 2D2D are further processed under consideration of the known virtual viewpoint vVIEW(J) to provide the 2D-3D-correspondences 2D3D between pixels in the image IMA belonging to the object OBJ and voxels of the 3D model MOD of the object OBJ.

8. Method according to claim 7, wherein, in the first step S3.1 for the computation of the 2D-2D-correspondences 2D2D, the representation repIMA of the image IMA and the representation repTEMPL(J) of the identified template TEMPL(J) are correlated and the 2D-2D-correspondences 2D2D are computed based on the correlation result.

9. Method according to any one of claims 7 to 8, wherein in the first step S3.1 for the computation of the 2D-2D-correspondences 2D2D

- the representation repIMA of the image IMA is an image feature map repIMAfm of at least a section of the image IMA which includes pixels belonging to the object OBJ and

- the representation repTEMPL(J) of the identified template TEMPL(J) is a template feature map repTEMPL(J)fm of the identified template TEMPL(J).

10. Method according to claim 9, wherein, in the first step S3.1 for the computation of the 2D-2D-correspondences 2D2D,

- a 2D-2D-correlation tensor c^k is computed by matching each pixel of one of the feature maps repIMAfm, TEMPL(J)fm with all pixels of the respective other feature map TEMPL(J)fm, repIMAfm,

- the 2D-2D-correlation tensor c^k is further processed to determine the 2D-2D-correspondences 2D2D.

11. Method according to any one of claims 1 to 10, wherein in the stage S2 of template matching for identifying the template TEMPL(J) from the plurality of templates TEMPL(i) matching best with the image IMA,

- in a first step S2.1 for each template TEMPL(i) a template feature map TEMPL(i)fm is computed at least for a template foreground section of the respective template TEMPL(i), which contains the pixels belonging to the model MOD,

- in a second step S2.2 a feature map IMAfm is computed at least for an image foreground section of the image IMA, which contains the pixels belonging to the object OBJ,

- in a third step S2.3 for each template TEMPL(i) a similarity sim(i) is computed for the respective template feature map TEMPL(i)fm and the image feature map IMAfm, wherein the template TEMPL(J) for which the highest similarity sim(J) is determined in the third step S2.3 is chosen to be the identified template TEMPL(J).

12. Method according to claim 11, wherein the image foreground section is cropped from the image IMA by utilizing the segmentation mask SEGM generated by the method according to any one of claims 2 to 5.

13. System for estimating a multi-dimensional pose mDOF of an object OBJ based on a representation IMA of the object OBJ, comprising a control system (130) implemented on a computer (120), which is configured to execute a method according to any one of claims 1 to 12.

14. System according to claim 13, comprising a sensor (110) for capturing the representation IMA of the object OBJ, wherein the sensor (110) is connected to the control system (130) for providing the representation IMA to the control system (130) for further processing.

15. System according to claim 14, wherein the sensor (110) is a camera for capturing optical images, preferably RGB images.

Description

Multi-stage object pose estimation

The invention relates to the determination of a multi-dimensional pose of an object.

Object detection and multi-dimensional pose estimation of the detected object are regularly addressed issues in computer vision since they are applicable in a wide range of applications in different domains. For example, autonomous driving, augmented reality, robotics, as well as several medical applications are hardly possible without fast and precise object localization in multiple dimensions.

In recent years, several computer vision tasks such as detection and segmentation have experienced tremendous progress thanks to the development of deep learning. Successful 2D object detection alone is insufficient for real-world applications which require 6D object pose information, such as in the context of augmented reality, robotic manipulation, autonomous driving etc. Therefore, the ability to recover the pose of objects in 3D environments, which is also known as the 6D pose estimation problem, is essential for such applications.

The pose estimation often applies one or more pre-trained artificial neural networks ANN (for the sake of brevity, an "artificial neural network ANN" will occasionally be named "network ANN" in the following) to estimate a pose of an object in a scene based on images of that object from different perspectives and based on a comprehensive database, e.g. as described in EP3511904B1, WO2020/156836A1, EP3905194A1. Furthermore, US2019/026917A1 describes an approach for aligning a model of an object to an image of the object. In the end, the alignment results in discretized pose parameters which are propagated to the object.

Consequently, a rapid development of high quality 6D pose estimation methods is observable thanks to advances in data preparation, evaluation tools, and the methods themselves. However, striving for high accuracy, aspects like scalability and generalization to new objects which have not been detected before are often neglected by the top performing methods. Most of them involve lengthy annotation of real data or rendering of synthetic training data. Furthermore, a necessary step is to train the detector on a particular object to make the method object-specific.

Only few works try to generalize to new objects, but they mainly focus on objects of the same category. Others, like "CorNet" and "LSE", train networks to estimate local shape descriptors for each object's pixel. They focus on 3D corners and objects that share similar geometry, such as the ones belonging to the "T-Less" dataset. Further approaches extend to generalize to unseen objects, but they still use a standard 2D detector for object localization and estimate the rotation of a new unseen object by generalized template matching.

Therefore, a method and a system are required which serve the need to solve the above problems and to enable multi-dimensional object pose estimation even for newly seen objects. This is solved by the method suggested in claim 1 and by the system suggested in claim 13. Dependent claims describe further advantageous embodiments.

A method for estimating a multi-dimensional pose mDOF of an object OBJ based on an image IMA of the object OBJ includes a preparational stage, in which the image IMA depicting the object OBJ and a plurality of templates TEMPL(i) are provided. The templates TEMPL(i) are generated from a 3D model MOD of the object OBJ in a rendering procedure, wherein different templates TEMPL(i), TEMPL(j) with i≠j of the plurality are generated from the model MOD by rendering from different respective known virtual viewpoints vVIEW(i), vVIEW(j) on the model MOD. For example, the known virtual viewpoints vVIEW(i) might be equally distributed on a sphere enclosing the model MOD of the object OBJ, so that a pose of the model of the object in a respective template TEMPL(i) can be assumed to be known. In a stage S2 of template matching, at least one template TEMPL(J) from the plurality of templates TEMPL(i) is identified and chosen, respectively, which matches best with the image IMA. The inventive concept introduced herein includes further stages S3 and S4. In the subsequent stage S3 of correspondence determination, a representation repTEMPL(J) of the identified template TEMPL(J), e.g. the identified template TEMPL(J) itself or, preferably, a feature map of the identified template TEMPL(J), is compared with a representation repIMA of the image IMA, e.g. the image IMA itself or a section of the respective image IMA which contains pixels which belong to the object OBJ or, preferably, a feature map of the image IMA or of such section of the image IMA, to finally determine 2D-3D-correspondences 2D3D between pixels in the representation repIMA of the image IMA belonging to the object OBJ and voxels of the 3D model MOD of the object OBJ. In stage S4 of pose estimation, the multi-dimensional pose mDOF is estimated based on the 2D-3D-correspondences 2D3D. The fourth stage S4 of pose estimation might apply a PnP+RANSAC procedure to estimate the multi-dimensional pose mDOF from the 2D-3D-correspondences 2D3D.

For the sake of clarity, the term "stage" in the context of the method introduced herein has the meaning of a method step.

Preferably, in a stage S1 of object detection, which is preferably executed before the second stage S2 of template matching and by a trained network ANN1, a binary segmentation mask SEGM is generated from the image IMA, wherein the mask SEGM identifies those pixels of the image IMA which belong to the object OBJ and thus shows the location of the visible part of the object OBJ in the image IMA. Above and in the following, the formulation "pixels belonging to" the object OBJ or, as the case may be, to the model MOD means those pixels of a respective image IMA or TEMPL(i) which represent a respective visible part of the object OBJ or model MOD.

For the generation of the segmentation mask SEGM, a semantic segmentation is performed by ANN1 by dense matching of features of the image IMA to an object descriptor tensor o^k, which represents the plurality of templates TEMPL(i).

Moreover, the first stage S1 of object detection comprises a plurality of steps S1.1, S1.2, S1.4. In the first step S1.1, an object descriptor tensor o^k = F^k_E(MOD) for the model MOD of the object OBJ is computed by a trained network ANN1 utilizing a feature extractor F^k_E from the templates TEMPL(i), wherein the object descriptor tensor o^k collects and represents, respectively, all templates TEMPL(i) rendered from the virtual viewpoints vVIEW(i) around the object's model MOD. In the second step S1.2, which might be executed before, after, or in parallel to the first step S1.1, feature maps f^k = F_FE(IMA) with k=1,...,K are computed by the network ANN1 utilizing a feature extractor F_FE for the image IMA. In the further step S1.4, the binary segmentation mask SEGM is computed by the network ANN1 based on a correlation tensor c^k which results from a comparison of image features expressed by the feature maps f^k of the image IMA and the object descriptor tensor o^k.

Therein, in the further step S1.4, per-pixel correlations between the feature maps f^k of the image IMA and features from the object descriptor tensor o^k are computed, wherein each pixel in the feature maps f^k of the image IMA is matched to the entire object descriptor o^k, which results in the correlation tensor c^k, wherein a particular correlation tensor value for a particular pixel (h,w) of the image IMA is defined as c^k_hwxyz = corr(f^k_hw, o^k_xyz), wherein corr represents a correlation function, preferably, but not necessarily, according to a Pearson correlation.

The third stage S3 of correspondence determination comprises at least steps S3.1 and S3.2, executed by a trained network ANN3. In the first step S3.1, 2D-2D-correspondences 2D2D between the representation repTEMPL(J) of the identified template TEMPL(J) and the representation repIMA of the image IMA are computed. In the second step S3.2, the 2D-2D-correspondences 2D2D computed in the first step S3.1 are further processed under consideration of the known virtual viewpoint vVIEW(J) to provide the 2D-3D-correspondences 2D3D between pixels in the representation repIMA of the image IMA belonging to the object OBJ and voxels of the 3D model MOD of the object OBJ.

This utilizes that object OBJ poses are available for all templates TEMPL(i), including the identified template TEMPL(J), due to the known virtual camera poses vVIEW(i) corresponding to the templates TEMPL(i). Therefore, for each pixel on a template TEMPL(i), it is known to which 3D point or "voxel" of the model MOD it corresponds. Thus, 2D-3D-correspondences between a template TEMPL(i) and the model MOD are known from the outset for all templates TEMPL(i), as soon as the templates TEMPL(i) have been generated for the corresponding virtual viewpoints vVIEW(i). Therefore, when 2D-2D-correspondences 2D2D between repIMA and a template TEMPL(J) are known from step S3.1, the corresponding 2D-3D-correspondences 2D3D between the image representation repIMA and the model MOD of the object can easily be derived.

In the first step S3.1, for the computation of the 2D-2D-correspondences 2D2D, the representation repIMA of the image IMA and the representation repTEMPL(J) of the identified template TEMPL(J) are correlated and the 2D-2D-correspondences 2D2D are computed by a trained artificial neural network ANN3 based on the correlation result. Therein, in the first step S3.1, for the computation of the 2D-2D-correspondences 2D2D, the representation repIMA of the image IMA is an image feature map repIMAfm of at least a section of the image IMA which includes all pixels belonging to the object OBJ, and the representation repTEMPL(J) of the identified template TEMPL(J) is a template feature map repTEMPL(J)fm of the identified template TEMPL(J).

Furthermore, in the first step S3.1 for the computation of the 2D-2D-correspondences 2D2D, a 2D-2D-correlation tensor c^k, which is a measure for a similarity between repIMA and TEMPL(J), is computed by matching each pixel of one of the feature maps repIMAfm, TEMPL(J)fm with all pixels of the respective other feature map TEMPL(J)fm, repIMAfm, e.g. each pixel of the image feature map repIMAfm is matched with all pixels of the template feature map TEMPL(J)fm. Then, the 2D-2D-correlation tensor c^k is further processed by the trained artificial neural network ANN3 to determine the 2D-2D-correspondences 2D2D.

The second stage S2 of template matching for identifying the template TEMPL(J) from the plurality of templates TEMPL(i) matching best with the image IMA comprises at least steps S2.1, S2.2 and S2.3, executed by a network ANN2. In the first step S2.1, for each template TEMPL(i) a template feature map TEMPL(i)fm is computed at least for a template foreground section of the respective template TEMPL(i), which contains the pixels belonging to the model MOD, or for the whole template TEMPL(i). In the second step S2.2, a feature map IMAfm is computed at least for an image foreground section of the image IMA, which contains the pixels belonging to the object OBJ, or for the whole image IMA. The steps S2.1 and S2.2 can be performed in any desired order. In the third step S2.3, for each template TEMPL(i) and each i, respectively, a similarity sim(i) is computed for the respective template feature map TEMPL(i)fm and the image feature map IMAfm, wherein the template TEMPL(J) for which the highest similarity sim(J) is determined in the third step S2.3 is chosen to be the identified template TEMPL(J).

Therein, the image foreground section is cropped from the image IMA by utilizing the segmentation mask SEGM generated earlier. The template foreground sections of the templates TEMPL(i) are intrinsically known because the templates TEMPL(i) anyway depict the object OBJ in a known pose. Therefore, for each template TEMPL(i) a ground truth segmentation mask is available for cropping the template foreground sections from the templates.
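Purely as an illustration of the cropping described above (not part of the claimed method), the following Python sketch shows one way to cut the foreground patch out of an image given a binary segmentation mask; the function name and array conventions are assumptions made for this example only.

import numpy as np

def crop_foreground(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Crop the axis-aligned bounding box of the mask's foreground pixels.

    image: H x W x 3 array; mask: H x W boolean array (True = object pixel).
    Background pixels inside the crop are zeroed so only the object remains.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("segmentation mask contains no foreground pixels")
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    patch = image[y0:y1, x0:x1].copy()
    patch[~mask[y0:y1, x0:x1]] = 0
    return patch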

A corresponding system for estimating a multi-dimensional pose mDOF of an object OBJ based on a representation IMA of the object OBJ comprises a computer-implemented control system which is configured to execute the method described above.

The system comprises a sensor for capturing the representation IMA of the object OBJ, wherein the sensor is connected to the control system for providing the representation IMA to the control system for further processing.

Preferably, the sensor is a camera for capturing optical images, preferably RGB images.

In summary, the proposed approach introduces a multi-stage one shot object detector and multi-dimensional pose estimation framework which does not require training on the target objects but only an image and a model of the object and thus generalizes to new unseen objects. The proposed approach uses an input image IMA of the object OBJ and a 3D textured model MOD of the object OBJ as input at test time. The core aspect is to represent the 3D model MOD with a number of 2D templates TEMPL(i), e.g. in the form of RGB images, rendered from different viewpoints, e.g. on a sphere enclosing the model MOD of the object OBJ, e.g. with the model MOD in the center of the sphere. This enables CNN-based features to be directly extracted from the image IMA and jointly and densely matched against descriptors of all templates TEMPL(i). Then, an approximate object orientation is estimated via template matching. Subsequently, dense 2D-2D-correspondences are established between the detected object OBJ and the matched template of the previous step, for which the virtual viewpoint and therewith the pose is known. This allows to establish dense 2D-3D-correspondences between the pixels of the image IMA of the object OBJ and the corresponding object model MOD. The final pose can be computed based on the 2D-3D-correspondences, e.g. by PnP+RANSAC algorithms (introduced below).

Advantageously, the method proposed herein is a scalable, fast one-shot multi-dimensional pose estimation method which fully generalizes to new unseen objects and scenes and does not require real data for training.

The proposed multi-stage object pose estimation method can be applied, for example, in robotic manipulation, augmented reality, visual servoing, etc. to help machines or intelligent agents better understand the 3D environment. Multi-dimensional object pose estimation is one of the core abilities to support a machine operating in the real world. It is the bridge for these applications from 2D images to the 3D world. By accurately localizing the objects and estimating their poses, it can improve the efficiency of the actions taken and the task success rate.

The proposed approach has key cost and time advantages, namely no need for a complicated and time-consuming data annotation step and no need to re-train the networks, based on deep neural networks, for each particular object. These features make the approach highly relevant for various applications in augmented reality, robotics, robot grasping, etc., allowing those methods to operate in more dynamic scenarios involving various objects. More use cases of the solution proposed herein are imaginable, and the concrete examples introduced must not be understood to be limiting the scope of the invention.

It is to be understood that the elements and features recited in the appended claims may be combined in different ways to produce new claims that likewise fall within the scope of the present invention. Thus, in case the dependent claims appended below depend from only a single independent or dependent claim, it is to be understood that these dependent claims can, alternatively, be made to depend in the alternative from any preceding or following claim, whether independent or dependent, and that such new combinations are to be understood as forming a part of the present specification.

In the following, possible embodiments of the different aspects of the present invention are described in more detail with reference to the enclosed figures. The objects as well as further advantages of the present embodiments will become more apparent and readily appreciated from the following description of the preferred embodiments, taken in conjunction with the accompanying figures, in which:

FIG 1 shows an image IMA of a real world scene SCN with several objects, including an object OBJ of interest,

FIG 2 shows the object OBJ,

FIG 3 shows a system for pose estimation,

FIG 4 shows a scenario with known virtual camera viewpoints for generating templates TEMPL(i),

FIG 5 shows a flow chart of a method MET for pose estimation,

FIG 6 shows a flow chart of a first stage S1 of the method MET,

FIG 7 shows a flow chart of a second stage S2 of the method MET,

FIG 8 shows a flow chart of a third stage S3 and a fourth stage S4 of the method MET.

Detailed description

FIG 1 shows an image IMA of an exemplary real world scene SCN with a plurality of objects, including an object OBJ of interest for which the multi-dimensional pose shall be determined. The object OBJ is also depicted in FIG 2 for the sake of clarity of what the object of interest is in this exemplary case.

FIG 3 shows a system 100 including a sensor 110 for capturing a representation IMA of the object OBJ. For example, the sensor 110 can be a camera and the representation IMA can be an image, e.g. a two-dimensional RGB image. Moreover, the system 100 comprises a control system 130, e.g. implemented on and executed by a computer 120 of the system 100 by means of respective software. The camera 110 is connected to the computer 120 so that captured images IMA can be transferred to the computer 120 for further processing by the control system 130 as described below. Such connection can be a wired or a wireless connection. The images IMA transferred to the computer 120 can be stored in a memory 140 of the computer 120 so that the control system 130 can access the images IMA from the memory 140.

A scene SCN, e.g. like the one shown in FIG 1, can be imaged by the camera 110 in order to produce an RGB image IMA to be further processed by the control system 130 as described below to estimate the multi-dimensional pose of the object OBJ in the scene SCN. The camera 110 is arranged and configured to capture the image IMA such that it at least depicts the object OBJ of interest and typically, but not necessarily, parts of its environment, possibly including further objects. As indicated, the presentation in FIG 1 might actually correspond to the appearance of such an RGB image IMA captured by the camera 110.

The captured image IMA of the object OBJ is the basis and an input, respectively, for the method MET for estimating the multi-dimensional pose mDOF of the object OBJ. In addition, the method MET utilizes as input data a three-dimensional model MOD of the object OBJ, e.g. a CAD model, which might be stored in and provided by the memory 140.

For the following, the 3D model MOD of the object OBJ is represented by a large number of 2D query templates TEMPL(i) with i=1,...,NT and NT representing the number of such templates, e.g. NT=100. Thus, the idea is to describe the 3D model MOD of the object OBJ by a number of viewpoint based templates TEMPL(i), obtained by rendering the object model MOD under different virtual rotations and perspectives, respectively. This brings the problem closer to the standard 2D one shot methods.

The individual templates TEMPL(i) are obtained by rendering the model MOD from various known virtual camera viewpoints vVIEW(i) placed on a sphere SPH enclosing the model MOD of the object OBJ, e.g. with the model MOD in the center of the sphere SPH. Such rendering is known per se and will not be detailed herein. This setup is exemplarily shown in FIG 4. Each one of the dots in FIG 4 stands for a known virtual camera viewpoint vVIEW(i), for which and from whose perspective on the model MOD a respective template TEMPL(i) is rendered. Preferably, the dots and virtual camera viewpoints are equally distributed over the surface of the sphere SPH so that the resulting entirety of templates TEMPL(i) represents views on the model MOD from each perspective, without preferred viewing directions.
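As an illustration of such an approximately equal distribution of virtual viewpoints, the following sketch computes camera positions on the sphere SPH via a Fibonacci lattice and a simple look-at rotation; the patent does not prescribe a particular sampling scheme, so this is only one possible realization. Any off-the-shelf renderer could then rasterize the model MOD from these poses to obtain the templates TEMPL(i).

import numpy as np

def sphere_viewpoints(n_views: int, radius: float = 1.0) -> np.ndarray:
    """n_views camera positions roughly equally spread on a sphere of the
    given radius (Fibonacci lattice); the model MOD is assumed at the origin."""
    i = np.arange(n_views)
    golden = (1.0 + 5.0 ** 0.5) / 2.0
    z = 1.0 - 2.0 * (i + 0.5) / n_views           # uniform in height
    phi = 2.0 * np.pi * i / golden                # spiral around the vertical axis
    r = np.sqrt(1.0 - z ** 2)
    return radius * np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def look_at_rotation(cam_pos: np.ndarray) -> np.ndarray:
    """World-to-camera rotation of a virtual camera at cam_pos looking at the origin."""
    forward = -cam_pos / np.linalg.norm(cam_pos)
    up = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(forward, up)) > 0.999:          # avoid a degenerate up vector
        up = np.array([0.0, 1.0, 0.0])
    right = np.cross(up, forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)
    return np.stack([right, true_up, forward], axis=0)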

The control system 130 is configured to perform the method MET as described in the following. FIG 5 shows a high level flow chart of the method MET, which is essentially a multi-stage method comprising four stages S1-S4. The detailed procedures in the individual stages S1-S4 will be described in connection with the subsequent figures. However, it might be mentioned here already that the method MET is exemplarily implemented such that each stage of the method MET applies a separate artificial neural network ANN: in the first stage S1 for binary segmentation of the image IMA, in the second stage S2 for template matching and initial viewpoint estimation, and in the third stage S3 for 2D-2D matching.

As indicated, the inputs to the method MET and to the first stage S1, respectively, are the RGB image IMA and the 3D model MOD of the object OBJ, wherein the 3D model MOD is actually represented by the plurality of templates TEMPL(i).

In the first, "object detection" stage SI, an object segmen- tation conditioned on the 3D model MOD is performed or, in other words, a detection of the object OBJ represented with its textured 3D model MOD in the RGB image. For this, a se- mantic segmentation is performed by dense matching of fea- tures of the image IMA to an object descriptor o k , which is represented by a sparse set of templates TEMPL(i) as de- scribed in connection with FIG 6. The output of the first stage SI is a binary segmentation mask SEGM which shows the location of the visible part of the object OBJ in the input image IMA. Correspondingly, the segmentation mask has the same dimensions as the image IMA.

The semantic segmentation is preferably a one shot semantic segmentation: "one shot" detectors are a family of 2D object detection methods that generalize to new unseen objects. Such one shot detectors use two disjoint sets of object classes for training and testing. This eliminates the need for manual annotation of training data for the classes used during testing and the need to re-train the detector on them. During inference they detect a new object, represented with a single template, in an input target image. These methods typically use a "Siamese" network to match precomputed features from a template to a target image.

After the first stage S1, the object OBJ is localized in the image IMA, but its pose is still unknown. The binary segmentation mask SEGM generated in the first stage S1 of the method MET functions as an input to the second stage S2 of the method MET. Additionally, the second stage S2 again uses the templates TEMPL(i) as well as the image IMA as input data.

In the second, "template matching" stage S2, an initial rota- tion estimation is performed via a template matching approach to a dense set of templates TEMPL(i) covering the entire vir- tual pose space constrained to the sphere SPH. Therewith, the second stage S2 chooses one template TEMPL(J) from the plu- rality of templates TEMPL(i) with 1<J<NT with the highest similarity to the image IMA.

The template TEMPL(J) chosen in the second stage S2 is further processed in the third, "correspondence determination" stage S3 of the method MET, together with the image IMA as an additional input. The third stage S3 includes the determination of dense 2D-3D-correspondences 2D3D between the image IMA and the 3D object model MOD by 2D-2D-correspondence matching of the image IMA to the chosen template TEMPL(J), which represents the object OBJ in the known pose. Thus, the output of the third stage S3 are the 2D-3D-correspondences 2D3D.

In the fourth, "pose estimation" stage S4, the multi- dimensional pose mDOF, preferably a 6D pose, is estimated based on the 2D-3D-correspondences 2D3D determined in the third stage S3. The estimation might apply known pose estima- tion procedures like "PnP" (Vincent Lepetit, Francesc Moreno- Noguer, Pascal Fua: "Epnp: An accurate o (n) solution to the pnp problem"; IJCV; 2009) in combination with "RANSAC" (Mar- tin A Fischler, Robert C Bolles: "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography"; Communications of the ACM; 1981) or Kabsch (Paul J Besl, Neil D McKay: "Method for registra- tion of 3-d shapes"; In Sensor fusion IV: control paradigms and data structures; 1992). In other words, the "PnP+RANSAC" procedure is applied to estimate the multi-dimensional pose mDOF from the 2D-3D-correspondences 2D3D.

In the following, feature maps will be introduced and further processed which are computed for various input data sets. The overall approach for feature extraction to generate feature maps foresees two feature extractors F_FE and F^k_E, which are actually realized by the same network ANN1. Assuming that I denotes an image of size W×H, e.g. the image IMA or a template TEMPL(i), which implicitly depends on the object model MOD and its pose T ∈ SE(3), the first feature extractor F_FE(I) ∈ R^(H_k×W_k×D_k) uses the pre-trained network ANN1 to compute a feature map of depth dimension D_k from the image I, wherein such F_FE(I) is used to extract features from such image I in all stages of the method MET. In practice, pre-computed feature maps from several depth levels of the network ANN1 can be used, which are indexed by k ∈ N, 1 ≤ k ≤ N, where N is the number of convolution layers of the network ANN1. In other words, assuming that the network ANN1 is a convolutional neural network (CNN) with several layers, each layer k, or at least more than one of the layers, produces a feature map of its own. Later layers produce more global features because of a larger field of view, but have lower spatial resolution. Feature maps from earlier layers have higher spatial resolution but a smaller field of view. F^k_E(I) ∈ R^(D_k) stands for a feature extractor which extends F_FE by spatial averaging along height and width for each depth dimension of the feature map to produce a single vector of length D_k. In concrete terms, F^k_E is used to extract features from templates TEMPL(i) in the first stage S1 of the method MET, while F_FE is used in the second stage S2 and the third stage S3 to extract features from templates TEMPL(i). Regarding the image IMA, F_FE is applied in all stages for feature extraction.

Coming back to the first stage S1, FIG 6 shows a flow chart of the procedure under the first stage S1. As mentioned, the first stage S1 of the proposed method MET is responsible for one shot object localization of the object OBJ in the image IMA via binary semantic segmentation and generates a binary segmentation mask SEGM.

In a first step S1.1 of the first stage S1, a 4D single descriptor tensor o^k is computed by a trained network ANN1 from the pre-rendered templates TEMPL(i), so that the input image IMA can be matched subsequently to all the templates TEMPL(i) in a single shot, as opposed to the standard one shot object detection where each query template is treated independently. The 4D descriptor tensor o^k = F^k_E(MOD) ∈ R^(X_k×Y_k×Z_k×D_k) collects and represents, respectively, all templates TEMPL(i) rendered from the virtual viewpoints vVIEW(i) on the sphere SPH around the object's model MOD, where the first two dimensions (X,Y) stand for a virtual camera position w.r.t. an object coordinate system, e.g. polar coordinates, while the third dimension Z stands for in-plane rotations. The 4-th dimension D_k refers to feature maps extracted from each template TEMPL(i) at multiple depth levels k=1,...,K of the neural network ANN1, e.g. K=3. Each element of the descriptor tensor o^k is one viewpoint template TEMPL(i) represented with the corresponding feature map of the respective query template TEMPL(i). It is defined as o^k_xyz = F^k_E(TEMPL_xyz), wherein TEMPL_xyz stands for a template TEMPL(i) rendered under a rotation matrix R representing a virtual viewpoint on the sphere SPH enclosing the model MOD of the object OBJ, e.g. with the model MOD in the center of the sphere. Actually, features from each template TEMPL(i) are precomputed using the feature extractor F^k_E, before they are combined to generate the tensor o^k.
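A minimal sketch of assembling o^k, assuming the per-template feature maps have already been computed with F_FE and are stored indexed by their viewpoint coordinates (X, Y, Z); this storage layout is an assumption made for illustration only.

import numpy as np

def build_object_descriptor(template_feature_maps: np.ndarray) -> np.ndarray:
    """Assemble the 4D object descriptor tensor o^k.

    template_feature_maps: (X, Y, Z, Hk, Wk, Dk) array holding F_FE(TEMPL)
    for every template, indexed by viewpoint (X, Y) and in-plane rotation Z.
    F^k_E averages each map over its spatial dimensions, so o^k has shape
    (X, Y, Z, Dk)."""
    return template_feature_maps.mean(axis=(3, 4))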

In a second step S1.2 of the first stage S1, which might be executed before, after, or in parallel to the first step S1.1, the network ANN1 computes feature maps f^k = F_FE(IMA) for the image IMA.

In a third step S1.3 of the first stage, the network ANN1 receives as an input the descriptor tensor o^k as well as the feature map f^k of the previous steps S1.1, S1.2 and predicts a correlation tensor c^k based on a comparison of image features expressed by the feature map f^k = F_FE(IMA) ∈ R^(H_k×W_k×D_k) of the image IMA and the 4D descriptor tensor o^k = F^k_E(MOD) ∈ R^(X_k×Y_k×Z_k×D_k) for the model MOD of the object OBJ. This builds on the idea of dense feature matching, further extended for the task of object detection.

The core principle of the comparison in step S1.3 is to compute per-pixel correlations between the feature maps f^k of the image IMA and the features from the object descriptor o^k. Each pixel in the feature map f^k of the input image IMA is matched to the entire object descriptor o^k, which results in a correlation tensor c^k ∈ R^(H_k×W_k×X_k×Y_k×Z_k). For a particular pixel (h,w) of the input image IMA, the correlation tensor value is defined as c^k_hwxyz = corr(f^k_hw, o^k_xyz). For example, "corr" might denote the "Pearson" correlation, introduced by Karl Pearson.
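The per-pixel matching can be written compactly. The following sketch (illustrative only, with NumPy in place of the network's internal layers) computes c^k for one feature level, using the fact that a Pearson correlation of two vectors equals the dot product of the mean-centred, L2-normalised vectors.

import numpy as np

def pearson_correlation_tensor(f: np.ndarray, o: np.ndarray) -> np.ndarray:
    """Match every image pixel against the whole object descriptor.

    f: image feature map f^k of shape (Hk, Wk, Dk)
    o: object descriptor o^k of shape (X, Y, Z, Dk)
    Returns c^k of shape (Hk, Wk, X, Y, Z) with
    c^k[h, w, x, y, z] = corr(f[h, w], o[x, y, z])."""
    f0 = f - f.mean(axis=-1, keepdims=True)
    o0 = o - o.mean(axis=-1, keepdims=True)
    f0 /= np.linalg.norm(f0, axis=-1, keepdims=True) + 1e-8
    o0 /= np.linalg.norm(o0, axis=-1, keepdims=True) + 1e-8
    # Pearson correlation of normalised vectors reduces to a dot product over Dk
    return np.einsum('hwd,xyzd->hwxyz', f0, o0)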

In step S1.4 the correlation tensor c^k is further processed to finally generate the segmentation mask SEGM. First, the correlation tensor c^k is flattened to a 3D tensor c^k' ∈ R^(H_k×W_k×(X_k·Y_k·Z_k)). This way, each pixel of the feature map f^k of the input image IMA gets the list of all correlations of its feature vector with all the feature vectors of the descriptor o^k. The 3D tensor is further processed by standard convolutional layers after that.

The flattened correlation tensor c^k' can be used in two ways. First, the computed correlations of the flattened correlation tensor c^k' are used directly as an input to the network ANN1 to produce the binary segmentation mask SEGM. For that, the flattened tensor c^k' is further processed by a 1×1 convolutional layer to reduce the number of dimensions from (X_k·Y_k·Z_k) to L_k. The resulting tensor c^k'' is directly used as input features to the subsequent convolutional layers of the network ANN1. As the templates TEMPL(i) are always ordered in the same way in the tensor o^k, the network ANN1 can learn from the computed correlations during training and then generalize to unseen objects at test time. The second use of the correlations is to compute a pixel-wise attention for the feature maps f^k of the image IMA. Pixel-wise attention allows to effectively incorporate the original image IMA features into a feature tensor created by the stacked tensors f̂^k (introduced below) and c^k'' and use it for more precise segmentation.

Raw pixel-wise attention at the feature map level k is defined simply as the sum of all (X_k·Y_k·Z_k) correlations for a given pixel (h,w):

A^k_raw(h,w) = Σ_{x,y,z} c^k_hwxyz.

Since simple attention can be very noisy in the early layers in comparison to later layers, it is proposed to condition the attentions on the attention from the last level, which tends to be more precise but has lower resolution, and to threshold attention values below the mean value. For this conditioning, V represents a bilinear upsampling of the attention from the last level A^K to the size of A^k; A^K itself is thresholded but not conditioned on anything. All the values are scaled to lie between 0 and 1. With the attention maps at hand, the feature maps f^k of the image IMA are transformed as follows:

f̂^k = A^k · f^k − (1 − A^k) · f^k.
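A rough, illustrative sketch of this attention step follows; it reproduces only the raw summation, the thresholding at the mean and the feature transform stated above, while the bilinear upsampling V and the exact conditioning on A^K are omitted, and it is in no way the trained network ANN1 itself.

import numpy as np

def raw_attention(c: np.ndarray) -> np.ndarray:
    """A^k_raw: sum of all X*Y*Z correlations per pixel of the level-k
    correlation tensor c of shape (Hk, Wk, X, Y, Z)."""
    return c.reshape(c.shape[0], c.shape[1], -1).sum(axis=-1)

def threshold_and_scale(a: np.ndarray) -> np.ndarray:
    """Zero out attention values below the mean and rescale the rest to [0, 1]."""
    a = np.where(a >= a.mean(), a, 0.0)
    return (a - a.min()) / (a.max() - a.min() + 1e-8)

def attend_features(f: np.ndarray, a: np.ndarray) -> np.ndarray:
    """Apply the attention map a (Hk, Wk) to the feature map f (Hk, Wk, Dk)
    following the transform stated above."""
    a = a[..., None]
    return a * f - (1.0 - a) * f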

Subsequently, the attended features are processed by a 1×1 convolutional layer of ANN1 to decrease the dimensionality. The stacked f̂^k and c^k'' are used jointly by the subsequent layers. Overall, this section of the first stage resembles the "UNet" approach (Olaf Ronneberger, Philipp Fischer, Thomas Brox: "U-net: Convolutional networks for biomedical image segmentation"; in International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015) of feature map upsampling followed by convolutional layers until the initial image size is reached. The core difference to that approach is that the network ANN1 uses stacked f̂^k and c^k'' at each level instead of skip connections.

Finally, the trained network ANN1 predicts a per-pixel probability, e.g. embodied as a binary segmentation mask SEGM, that a pixel of image IMA contains a visible part of the object OBJ. For example, the per-pixel "Dice" loss might be used to train the network and handle the imbalanced class distribution. The determination of the segmentation mask SEGM is equivalent to the detection of the object OBJ in the image IMA.

Thus, at training time the network ANN1 is trained by maximizing the predicted probabilities for pixels with the object OBJ and minimizing the probabilities for pixels without the object OBJ. At test time, pixels with a predicted probability of, for example, more than 0.5 are considered as visible parts of the object OBJ.
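For illustration, a per-pixel Dice loss and the test-time thresholding might look as follows; this is a NumPy sketch only, and the actual training of ANN1 would of course use a differentiable framework.

import numpy as np

def dice_loss(pred_prob: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-6) -> float:
    """Dice loss between predicted foreground probabilities and a binary
    ground-truth mask; robust to the imbalance between object and background."""
    inter = float((pred_prob * gt_mask).sum())
    denom = float(pred_prob.sum() + gt_mask.sum())
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def predict_mask(pred_prob: np.ndarray, thr: float = 0.5) -> np.ndarray:
    """At test time, pixels with probability above thr count as visible object parts."""
    return pred_prob > thr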

Coming back to the second stage S2, FIG 7 shows a flow chart of the procedure under the second stage S2. The second stage S2 applies a second network ANN2 and receives and processes as input data the predicted segmentation mask SEGM of the first stage S1, the image IMA, and the templates TEMPL(i) to choose one template TEMPL(J) from the plurality of templates TEMPL(i) which matches best with the image IMA. The chosen template TEMPL(J) and the corresponding known virtual camera position and viewpoint, respectively, represent an initial viewpoint estimation.

For this initial viewpoint estimation, template matching via "manifold learning" can be applied by ANN2, which scales well to a large number of objects and which generalizes to unseen objects. For this task, the same feature extractor F_FE is applied as in the first stage S1. However, only the features from the last layer, i.e. k=K, are employed. Moreover, one 1×1 convolutional layer can be added to decrease the dimensions from H_K×W_K×D_K to H_K×W_K×D_K'.

In step S2.1 of the second stage S2, for each template TEMPL(i) a feature map t(i) is computed by ANN2 from a foreground section in the respective template TEMPL(i), which represents the object OBJ in that template TEMPL(i).

In step S2.2 of the second stage S2, which might be executed before, after, or in parallel to the first step S2.1, a feature map f is computed by ANN2 from the foreground section in the image IMA representing the object OBJ, using the segmentation mask SEGM predicted in the first stage S1 to identify such foreground section.

In each case, a "foreground section" of an image, e.g. a tem- plate TEMPL (i) in step S2.1 and the image IMA in step S2.2, denotes that section of the respective image which contains pixels belonging to the object OBJ of interest. Such fore- ground section can be found by applying a respective segmen- tation mask. W.r.t. the image IMA, the segmentation mask SEGM has been generated in the first stage SI. The templates TEMPL (i) anyway depict the object OBJ in a known pose. There- fore, for each template TEMPL (i) a ground truth segmentation mask is available. In the following, such a "foreground sec- tion" is occasionally denoted as a "patch".

Analogously to the first stage S1, the similarity of two patches is estimated by ANN2 in step S2.3 of the second stage S2 by computing, for each i and each template TEMPL(i), respectively, per-pixel correlations between the feature map f of the foreground section of the image IMA and the feature map t(i) of the foreground section of the respective template TEMPL(i), using sim(i) = Σ_{h,w} corr(f_hw, t(i)_hw). As above, "corr" might denote the Pearson correlation. With f and t(i) being tensors, correlations are separately computed for each "pixel" (h,w) and then summed up.
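A small sketch of this similarity score (illustrative only; f and t are assumed to be the patch feature maps produced by ANN2, resampled to a common size):

import numpy as np

def patch_similarity(f: np.ndarray, t: np.ndarray) -> float:
    """sim(i): per-pixel Pearson correlations between the image patch feature
    map f and the template patch feature map t(i), summed over all pixels.
    f, t: arrays of shape (H, W, D)."""
    f0 = f - f.mean(axis=-1, keepdims=True)
    t0 = t - t.mean(axis=-1, keepdims=True)
    f0 /= np.linalg.norm(f0, axis=-1, keepdims=True) + 1e-8
    t0 /= np.linalg.norm(t0, axis=-1, keepdims=True) + 1e-8
    return float((f0 * t0).sum())

# the identified template TEMPL(J) is the one with the highest score:
# J = int(np.argmax([patch_similarity(f, t) for t in template_patch_features]))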

The template TEMPL(J) corresponding to that template feature map t(J) for which the best correlation is determined, i.e. with the highest similarity to the detected object OBJ in the image IMA, is chosen as a match to be further processed in the third stage S3.

As mentioned earlier, the virtual camera viewpoints and poses for each template TEMPL(i) are well known, so that the matched, chosen template TEMPL(J), or at least an index or identifier ID(J) of that template TEMPL(J), gives an initial rotation estimation of the detected object and allows a conclusion on the pose of the object OBJ. In any case, the output of the second stage S2 is a representation rTEMPL(J) of the chosen template TEMPL(J), e.g. the chosen template TEMPL(J) itself, an identifier ID(J) of the chosen template TEMPL(J), or the virtual viewpoint of the respective virtual camera pose used to generate TEMPL(J).

The training of the network ANN2 aims to increase similarity for patches which depict objects with very similar rotations and at the same time penalize similarity for distant rotations. A specific query template TEMPL(J) with the highest similarity to the detected object in the target image IMA is chosen as the best match at test time. For training, a modified triplet loss with a dynamic margin m is leveraged.

Therein, m is set to the angle between the rotations of the object in the anchor and the puller patches. Using the terminology from "3D object instance recognition and pose estimation using triplet loss with dynamic margin" by Sergey Zakharov, Wadim Kehl, Benjamin Planche, Andreas Hutter, Slobodan Ilic, IROS, 2017, f_anchor is the descriptor of a randomly chosen object patch. f^+ corresponds to a puller, i.e. a template in a pose very similar to the pose in the anchor, while f^− corresponds to the pusher with a dissimilar pose.

During training of ANN2, the rotation matrix is converted from an egocentric to an allocentric coordinate system. This conversion ensures that the visual appearance of the object depends exclusively on the rotational component of the SE(3) pose.

Coming back to the third stage S3, FIG 8 shows a flow chart of the procedure under the third stage S3. The third stage S3 applies a third network ANN3 to process input data comprising the representation rTEMPL(J) from the second stage S2 and the image IMA to generate 2D-2D-correspondences 2D2D of TEMPL(J) and IMA and to generate 2D-3D-correspondences 2D3D from those 2D-2D-correspondences 2D2D.

Thus, the goal of the third stage S3 is to provide the necessary data to enable the multi-dimensional pose estimation in the fourth stage S4, which can be based on the 2D-3D-correspondences 2D3D.

In more detail, after the initial viewpoint estimation of the second stage S2, resulting in the identification of the matched, chosen template TEMPL(J), the image IMA with the detected object OBJ, the pose of which is not known, is available as well as the matched template TEMPL(J), the pose of which is known. The trained network ANN3 computes in a first step S3.1 of the third stage S3 dense 2D-2D correspondences 2D2D between the image and its representation repIMA, respectively, and the chosen template TEMPL(J). Therein, the representation repIMA of the image IMA can be the image IMA itself or, preferably, a section of the respective image IMA which contains pixels which belong to the object OBJ, i.e. a "patch" in the wording introduced above. The patch and the representation repIMA, respectively, can be generated from the whole image IMA by utilizing the segmentation mask SEGM.

The 2D-2D correspondences 2D2D are computed based on this input repIMA and TEMPL(J). Preferably, this essentially means that corresponding feature maps repIMAfm, TEMPL(J)fm of repIMA and TEMPL(J) are computed as introduced above and correlated, and the network ANN3 predicts the 2D-2D-correspondences 2D2D for repIMA and TEMPL(J) based on the correlation of those feature maps repIMAfm, TEMPL(J)fm.

Therein, the architecture of the 2D-2D matching for determining the 2D-2D-correspondences 2D2D follows the general idea of dense feature matching. Each pixel of the feature map f^k = repIMAfm of the image representation repIMA is matched with all pixels of the template feature map t^k = TEMPL(J)fm of the chosen template TEMPL(J) to form a 2D-2D-correlation tensor c^k. The 2D-2D-correlation tensor c^k does not yet represent the 2D-2D-correspondences 2D2D, but it is a measure for the similarity between repIMA and TEMPL(J). Then, the network ANN3, which is trained to predict 2D-2D-correspondences from the 2D-2D-correlation tensor c^k, computes the 2D-2D-correspondences 2D2D. To be more concrete, the network ANN3 processes the 2D-2D-correlation tensor c^k to predict three values for each pixel of the image representation repIMA: a binary foreground/background segmentation value, which is required to show which points of the object are visible in the template TEMPL(J), and the 2D coordinate of the corresponding pixel on the template TEMPL(J). In other words, it predicts a probability that the particular pixel is visible in the template TEMPL(J) and two values for the coordinate of the corresponding point on the template TEMPL(J).

Based on the 2D-2D-correspondences 2D2D, the network ANN3 can provide 2D-3D-correspondences 2D3D between the object pixels in repIMA and the corresponding voxels of the 3D model MOD of the object OBJ in the second step S3.2 of the third stage S3. This utilizes that object OBJ poses are available for all templates TEMPL(i), including the chosen template TEMPL(J), due to the known virtual camera poses vVIEW(i) corresponding to the templates TEMPL(i). Therefore, for each pixel on a template TEMPL(i), it is known to which 3D point or "voxel" on the model MOD it corresponds. Thus, 2D-3D-correspondences between a template TEMPL(i) and the model MOD are known for all templates TEMPL(i). Therefore, when 2D-2D-correspondences 2D2D between repIMA and a template TEMPL(J) are known, the corresponding 2D-3D-correspondences 2D3D between the image representation repIMA and the model MOD of the object can easily be derived.
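A sketch of this lifting step, assuming that rendering TEMPL(J) also produced a per-pixel map of the 3D model coordinates visible at each template pixel; this map and the data layout are assumptions for illustration, not something the text above prescribes explicitly.

import numpy as np

def lift_2d2d_to_2d3d(corr_2d2d, template_xyz, template_valid):
    """Turn 2D-2D matches into 2D-3D correspondences 2D3D.

    corr_2d2d:      list of ((u_img, v_img), (u_tmp, v_tmp)) pixel pairs between
                    the image representation repIMA and the template TEMPL(J).
    template_xyz:   H x W x 3 map of the model MOD point ("voxel") visible at
                    each template pixel, known from rendering at vVIEW(J).
    template_valid: H x W boolean map, True where the model is visible.
    Returns parallel arrays of image pixels and 3D model points."""
    pts_2d, pts_3d = [], []
    for (u_img, v_img), (u_tmp, v_tmp) in corr_2d2d:
        if template_valid[v_tmp, u_tmp]:
            pts_2d.append((u_img, v_img))
            pts_3d.append(template_xyz[v_tmp, u_tmp])
    return np.asarray(pts_2d, dtype=float), np.asarray(pts_3d, dtype=float)

The resulting pairs can be fed directly into a PnP+RANSAC solver such as the one sketched further above.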

The network ANN3 is trained in a fully supervised manner. During training, a random object crop image I_obj with its associated pose T_obj ∈ SE(3) is sampled from a synthetic dataset. Then, a random template I_tmp is picked together with its pose T_tmp ∈ SE(3), so that T_obj and T_tmp are relatively close. The availability of object poses allows to compute per-pixel 2D-3D-correspondence maps for both patches. Let C: M × SE(3) → [0,1]^(W×H×3) denote the 2D-3D-correspondences for the object rendered in the given pose T_obj ∈ SE(3). Correspondences are computed as "Normalized Object Coordinates" (NOCS). Its inverse C^-1 recomputes correspondences with respect to the unnormalized object coordinates, which correspond to the actual 3D object coordinates. It allows to define a 2D correspondence pair distance in the model's 3D coordinate space:

d(p, p') = ||C^-1(M, T_obj)[p] − C^-1(M, T_tmp)[p']||_2.

Therein, p and p' are pixel coordinates in the image and template patches, respectively. Ground truth dense 2D-2D-correspondences are established by simply matching pixel pairs which correspond to the closest points in the 3D coordinate system of the model. More formally, for a point p ∈ I_obj its corresponding template point is computed as p* = argmin_{p' ∈ I_tmp} d(p, p'). Due to the random sampling, an outlier aware rejection is employed for 2D-2D-correspondences with a large spatial discrepancy.
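Illustratively, such ground truth matching of closest 3D points can be realized with a KD-tree; the per-pixel 3D coordinate maps, the visibility masks and the outlier threshold max_dist are assumptions made only for this sketch.

import numpy as np
from scipy.spatial import cKDTree

def ground_truth_2d2d(obj_xyz, obj_valid, tmp_xyz, tmp_valid, max_dist=0.01):
    """Ground-truth 2D-2D matches between an object crop and a template.

    obj_xyz, tmp_xyz: H x W x 3 maps of unnormalized model coordinates per pixel;
    obj_valid, tmp_valid: H x W boolean visibility masks.
    Each object pixel is matched to the template pixel whose 3D model point is
    closest; pairs farther apart than max_dist are rejected as outliers."""
    tv, tu = np.nonzero(tmp_valid)
    tree = cKDTree(tmp_xyz[tv, tu])               # 3D points of all template pixels
    ov, ou = np.nonzero(obj_valid)
    dist, idx = tree.query(obj_xyz[ov, ou])
    keep = dist < max_dist
    img_px = np.stack([ou[keep], ov[keep]], axis=1)
    tmp_px = np.stack([tu[idx[keep]], tv[idx[keep]]], axis=1)
    return img_px, tmp_px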

Analogously to the one-shot localization, the segmentation loss is defined as a per-pixel "Dice" loss. On top of that, the network predicts a discrete 2D coordinate using a standard cross-entropy classification loss, because dense correspondence prediction tends to work better if correspondence estimation is posed as a classification problem.

Coming back to the fourth stage S4, FIG 8 also shows a flow chart of the procedure under the fourth stage S4. The fourth stage S4 applies a fourth network ANN4 to process the 2D-3D-correspondences 2D3D generated in the third stage S3 to estimate the multi-dimensional pose mDOF, e.g. six-dimensional 6DOF, of the object OBJ in step S4.1 of the fourth stage S4. For this, the network ANN4 applies known techniques, e.g. PnP+RANSAC.

In concrete terms, templates TEMPL(i) can be rendered at 25 frames per second with 128×128 resolution and models downsampled to 5K faces on a standard NVIDIA GeForce RTX 2080 Ti. With standard equipment, it takes around one hour to render 90K templates for the second stage S2 of the network. However, additional ablation studies showed that using 5K templates instead of 90K results in similar performance. Rendering 5K templates takes 200 seconds. The process has to be done only once for each new object.

While the present invention has been described above by reference to various embodiments, it should be understood that many changes and modifications can be made to the described embodiments. It is therefore intended that the foregoing description be regarded as illustrative rather than limiting, and that it be understood that all equivalents and/or combinations of embodiments are intended to be included in this description. Thus, the invention is not restricted to the above illustrated embodiments but variations can be derived by a person skilled in the art without deviation from the scope of the invention.