


Title:
SEGMENTING AND DENOISING DEPTH IMAGES FOR RECOGNITION APPLICATIONS USING GENERATIVE ADVERSARIAL NEURAL NETWORKS
Document Type and Number:
WIPO Patent Application WO/2019/090213
Kind Code:
A1
Abstract:
A method of removing noise from a depth image includes presenting real-world depth images in real-time to a first generative adversarial neural network (GAN), the first GAN being trained by synthetic images generated from computer assisted design (CAD) information of at least one object to be recognized in the real-world depth image. The first GAN subtracts the background in the real-world depth image and segments the foreground in the real-world depth image to produce a cleaned real-world depth image. Using the cleaned image, an object of interest in the real-world depth image can be identified via the first GAN trained with synthetic images and the cleaned real-world depth image. In an embodiment the cleaned real-world depth image from the first GAN is provided to a second GAN that provides additional noise cancellation and recovery of features removed by the first GAN.

Inventors:
PLANCHE BENJAMIN (DE)
ZAKHAROV SERGEY (DE)
WU ZIYAN (US)
ILIC SLOBODAN (DE)
Application Number:
PCT/US2018/059191
Publication Date:
May 09, 2019
Filing Date:
November 05, 2018
Assignee:
SIEMENS AG (DE)
SIEMENS CORP (US)
International Classes:
G06V10/30; G06V10/764
Other References:
KONSTANTINOS BOUSMALIS ET AL: "Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 16 December 2016 (2016-12-16), pages 1 - 15, XP080744869, DOI: 10.1109/CVPR.2017.18
DIVAKAR NITHISH ET AL: "Image Denoising via CNNs: An Adversarial Approach", 2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 21 July 2017 (2017-07-21), IEEE, Los Alamitos, CA, USA, pages 1076 - 1083, XP033145887, DOI: 10.1109/CVPRW.2017.145
KIANA EHSANI ET AL: "SeGAN: Segmenting and Generating the Invisible", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201, 29 March 2017 (2017-03-29), OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, pages 1 - 10, XP080753049
PEDRO COSTA ET AL: "Towards Adversarial Retinal Image Synthesis", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 31 January 2017 (2017-01-31), XP080752571
JIE LI ET AL: "WaterGAN: Unsupervised Generative Network to Enable Real-time Color Correction of Monocular Underwater Images", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY,, 23 February 2017 (2017-02-23), XP080748369, DOI: 10.1109/LRA.2017.2730363
ZHANG XIN ET AL: "Fast depth image denoising and enhancement using a deep convolutional network", 2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 20 March 2016 (2016-03-20), IEEE, Piscataway, NJ, USA, pages 2499 - 2503, XP032901053, DOI: 10.1109/ICASSP.2016.7472127
Attorney, Agent or Firm:
BRINK, John D. Jr. (US)
Claims:
CLAIMS

What is claimed is:

1. A method of removing noise from a depth image comprising: presenting a real-world depth image in real-time to a first generative adversarial neural network (GAN), the first GAN being trained by synthetic images generated from computer assisted design (CAD) information of at least one object to be recognized in the real-world depth image; in the first GAN, subtracting the background in the real-world depth image; in the first GAN, segmenting the foreground in the real-world depth image to produce a cleaned real-world depth image.

2. The method of claim 1, further comprising: identifying an object of interest in the real-world depth image via the first GAN and the cleaned real-world depth image.

3. The method of claim 1, further comprising: providing the cleaned real-world depth image to a second GAN to provide additional noise cancellation and recovery of some features removed by the first GAN.

4. The method of claim 1, further comprising: training the first GAN using synthetic images generated from the CAD information, wherein the CAD information is augmented by: adding simulated distortion to the synthetic images.

5. The method of claim 4, further comprising: adding random background elements to the synthetic image used to train the first GAN.

6. The method of claim 4, wherein training the first GAN further comprises: providing the first GAN with training data in the form of real pairs of images comprising the cleaned real-world depth image and a synthetic image having no noise and no background stacked to create a real pair.

7. The method of claim 6, wherein training the first GAN further comprises: providing the first GAN with training data in the form of fake pairs of images comprising the cleaned real-world depth image and an image from the output of the first GAN stacked to create a fake pair.

8. The method of claim 4, wherein adding distortion to the synthetic images comprises: a linear transform of a target object in the synthetic image.

9. The method of claim 4, wherein adding distortion to the synthetic images comprises: combining random background data into the synthetic image.

10. The method of claim 4, wherein adding distortion to the synthetic images comprises: inserting an object into the synthetic image that at least partially occludes a target object in the synthetic image.

11. The method of claim 1, further comprising: implementing the first GAN using an Image-to-image GAN architecture.

12. The method of claim 1, further comprising: implementing the first GAN as a U-Net GAN architecture.

13. A system for removing noise from a captured real-world depth image comprising: a first generative adversarial neural network (GAN), the first GAN being trained with synthetic images derived from three-dimensional computer assisted drafting (CAD) information for a target object to be recognized in the captured real-world depth image, wherein the first GAN is configured to receive the real-world depth image and output a cleaned image to resemble one of the synthetic images; a second GAN configured to receive an output of the first GAN, the second GAN being trained with the synthetic images used to train the first GAN, wherein the second GAN operates to fine-tune the cleaning of the real-world depth image, including removing additional noise from the cleaned depth image or restoring features of the target object.

14. The system of claim 13, further comprising: the first GAN configured to identify an object of interest in the real-world depth image via the first GAN comparing a synthetic image and the cleaned real-world depth image.

15. The system of claim 13, wherein: the first GAN is trained using synthetic images generated from the CAD information, wherein the CAD information is augmented by adding simulated distortion to the synthetic images.

16. The system of claim 15, wherein random background elements are added to the synthetic image used to train the first GAN.

17. The system of claim 15, wherein training the first GAN further comprises: providing the first GAN with training data in the form of real pairs of images comprising the cleaned real-world depth image and a synthetic image having no noise and no background stacked to create a real pair.

18. The system of claim 17, wherein training the first GAN further comprises: providing the first GAN with training data in the form of fake pairs of images comprising the cleaned real-world depth image and an image from the output of the first GAN stacked to create a fake pair.

19. The system of claim 15, wherein adding distortion to the synthetic images comprises: a linear transform of a target object in the synthetic image.

20. The system of claim 15, wherein adding distortion to the synthetic images comprises: combining random background data into the synthetic image.

21. The system of claim 15, wherein adding distortion to the synthetic images comprises: inserting an object into the synthetic image that at least partially occludes a target object in the synthetic image.

22. The system of claim 13, wherein the first GAN is implemented using an Image-to-image GAN architecture.

23. The system of claim 13, wherein the first GAN is implemented using a U-Net GAN architecture.

Description:
SEGMENTING AND DENOISING DEPTH IMAGES FOR RECOGNITION APPLICATIONS USING GENERATIVE ADVERSARIAL NEURAL NETWORKS

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent Application Serial No. 62/581,282, filed November 3, 2017, entitled "Segmenting and Denoising Depth Images for Recognition Applications Using Generative Adversarial Neural Networks", which is incorporated by reference herein in its entirety.

TECHNICAL FIELD

[0002] This application relates to imaging. More particularly, the application relates to automated recognition of objects in an image.

BACKGROUND

[0003] As machine automation continues to develop, one important aspect is recognizing the presence and state of objects in proximity to operations. For example, image sensors that detect optical information, including depth information, may be used to capture images of regions of a plant. A human viewing the images may easily recognize objects in the image based on prior knowledge. However, it is not as simple to have a machine "view" the images and identify the objects they contain. Various factors, including environmental conditions, the condition of the sensors, the orientation of the object, and additional unimportant objects captured in the background or foreground of the image, create variations in the captured images that make it difficult to teach a machine how to make these determinations.

[0004] To recognize specific objects, pre-existing images of those objects may be presented to a machine learning network, which can then classify objects in a captured image based on the training data the network has previously been given. To reduce the time and expense of generating and annotating real-world images for training the neural networks, methods have been developed that generate synthetic images of the objects from three-dimensional (3D) computer aided design (CAD) data. Discrepancies (noise, clutter, etc.) between the synthetic depth images often used for training recognition methods and the target real-world depth scans must be addressed to achieve accurate object recognition. This gap between the two image domains (real and synthetic) deeply affects the accuracy of the recognition algorithms.

[0005] In particular, recent progress in computer vision has been dominated by deep neural networks trained with large amounts of accurately labeled data. However, collecting and annotating such datasets is a tedious and, in some contexts, impracticable task. Accordingly, many recent approaches rely solely on synthetically generated data from 3D models for their training, using 3D rendering engines.

[0006] So far, research has mostly focused on bridging the realism gap by improving the generation of the synthetic depth images. We propose to tackle this problem from the opposite perspective, i.e., processing the real images in production (segmenting and enhancing them) to bring them closer to the synthetic images the recognition algorithms have been trained with.

[0007] Previous work has included attempts to statistically simulate and apply the noise that impairs depth images. For example, a previous study proposed an end-to-end framework which simulates the whole mechanism of structured-light sensors, generating realistic depth data from three-dimensional (3D) computer assisted design (CAD) models by comprehensively modeling vital factors such as sensor noise, material reflectance, surface geometry, etc. In addition to covering a wider range of sensors than previous methods, this approach also provided more realistic data, consistently and significantly enhancing the performance of neural network algorithms for different 3D recognition tasks when used for their training.

[0008] Other work has built on this concept by using a GAN-based process to improve the realism of the generated depth scans and apply pseudo-realistic backgrounds to them. However, simulated data cannot always accurately represent the real-world images on which the trained neural networks must operate. Methods and systems that can train recognition networks using data more representative of real-world images would therefore be beneficial.

SUMMARY

[0009] A method and system for reconciling real-world images with the synthetic images used to train recognition networks includes processing the actual real-world images to be recognized to make them look like the noiseless synthetic data used to train the algorithms.

A method of removing noise from a depth image comprises presenting a real-world depth image in real-time to a first generative adversarial neural network (GAN), the first GAN being trained by synthetic images generated from computer assisted design (CAD) information of at least one object to be recognized in the real-world depth image; subtracting, in the first GAN, the background in the real-world depth image; and segmenting, in the first GAN, the foreground in the real-world depth image to produce a cleaned real-world depth image.

[0010] In some embodiments, the method may further include identifying an object of interest in the real-world depth image via the first GAN and the cleaned real-world depth image. In other embodiments, the method further includes providing the cleaned real-world depth image to a second GAN to provide additional noise cancellation and recovery of some features removed by the first GAN. When training the first GAN, the synthetic images used to train the GAN may be augmented by adding simulated distortion to the synthetic images. In addition, random background elements may be added to the synthetic images used to train the first GAN.

[0011] When training the GAN, training data may be in the form of real pairs of images comprising the cleaned real-world depth image and a synthetic image having no noise and no background, stacked to create a real pair, and in the form of fake pairs of images comprising the cleaned real-world depth image and an image from the output of the first GAN, stacked to create a fake pair.

[0012] When augmenting the synthetic images, the distortion added may include a linear transform of a target object in the synthetic image, combining random background data into the synthetic image or inserting an object into the synthetic image that at least partially occludes a target object in the synthetic image.

[0013] The first and second GANs may be implemented in any GAN architecture, including but not limited to an Image-to-image GAN architecture or a U-Net GAN architecture.

[0014] A system for removing noise from a captured real-world depth image includes a first generative adversarial neural network (GAN), the first GAN being trained with synthetic images derived from three-dimensional computer assisted drafting (CAD) information for a target object to be recognized in the captured real-world depth image, wherein the first GAN is configured to receive the real-world depth image and output a cleaned image to resemble one of the synthetic images, and a second GAN configured to receive an output of the first GAN, the second GAN being trained with the synthetic images used to train the first GAN, wherein the second GAN operates to fine-tune the cleaning of the real-world depth image, including removing additional noise from the cleaned depth image or restoring features of the target object.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there is shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:

[0016] FIG. 1 is a block diagram for training a first GAN according to embodiments of this disclosure.

[0017] FIG. 2 is a block diagram for training a second GAN according to embodiments of this disclosure.

[0018] FIG. 3 is a block diagram of a pipelined method for processing a real-world depth image according to aspects of embodiments of this disclosure.

[0019] FIG. 4 is an illustration of examples of synthetic images generated during training of a first GAN according to aspects of embodiments of this disclosure.

[0020] FIG. 5 is an illustration of examples of cleaned images generated by the first GAN according to aspects of embodiments of this disclosure.

[0021] FIG. 6 is an illustration of noise types that may be used to augment training data according to aspects of embodiments of this disclosure.

[0022] FIG. 7 is an illustration of the use of a warping factor on target objects according to aspects of embodiments of this disclosure.

[0023] FIG. 8 is an illustration of examples of augmenting training data by inserting occluding objects relative to a target object according to aspects of embodiments of this disclosure.

[0024] FIG. 9 is a process flow diagram for a method of cleaning real-world depth images to resemble synthetic training data according to aspects of embodiments of this disclosure.

[0025] FIG. 10 is a computer system that may be used to implement aspects of embodiments described in this disclosure.

DETAILED DESCRIPTION

[0026] A method and system improve depth-based recognition applications by preprocessing input depth data to extract and denoise the foreground, facilitating further operations (e.g., object recognition, pose estimation, etc.). This preprocessing is done by applying real-time segmentation, which may be followed by smoothing the depth images, using generative adversarial neural networks trained purely over synthetic data.

[0027] Recent advances in computer vision are dominated by deep neural networks trained with a large amount of accurately labeled data. Collecting and annotating such datasets is a tedious and, in some contexts, impracticable task. Therefore, many recent approaches rely solely on synthetically generated data from 3D models for their training, using 3D rendering engines. For depth images, however, discrepancies between the modeled images and real scans noticeably affect the performance of these approaches.

[0028] To this point, research has mostly focused on bridging the gap between modeled and real-world images by improving the generation of the synthetic depth images used to train the neural networks. According to embodiments described herein, this problem is approached from the opposite perspective: processing the real-world depth images in production (by segmenting and enhancing them) to bring the real-world images closer to the modeled synthetic images that the recognition algorithms are trained with. Previous approaches attempt to statistically simulate noise and apply it to depth images. One study proposed an end-to-end framework to simulate the whole mechanism of structured-light sensors, generating realistic depth data from 3D CAD models by comprehensively modeling relevant factors such as sensor noise, material reflectance, surface geometry, etc. Aside from covering a wider range of sensors than previous methods, this approach resulted in more realistic data, consistently and significantly enhancing the performance of neural network algorithms for different 3D recognition tasks when used for their training. In other work, this simulation pipeline is extended by using a GAN-based process to improve the realism of the generated depth scans and apply pseudo-realistic backgrounds to the modeled depth images.

[0029] According to embodiments of the present invention, the problem is considered from the opposite point of view. Rather than attempting to generate realistic images used to train the recognition methods so that the recognition techniques can deal with real images afterwards, the methods and systems herein process the real-world depth images to be recognized. This processing makes the images appear similar to the noiseless synthetic data that was used to train the algorithms. To achieve this, the real scans are passed through deep generative adversarial neural networks (GANs) that are trained to map the real-world depth images to the corresponding synthetic modeled images.

[0030] In addition to this inversion of the real-image discrepancy problem, a key contribution to solving the problem is the adoption of a depth sensor simulation pipeline in combination with an extensive data augmentation procedure to generate realistic and challenging synthetic data for the training of the segmenting/denoising GAN(s). This solution does not rely on the availability of real images and their ground-truth information (unlikely to be available in many industrial applications), which provides a real advantage. Furthermore, it can be demonstrated that GANs trained using these techniques fare well when used after training to preprocess real-world scans. According to some embodiments, an additional contribution may be achieved through the optional use of two consecutive GANs (a first one for segmentation and partial denoising and a second one to refine the results).

[0031] According to an embodiment, a solution to segment and denoise the foreground of depth images applies generative adversarial neural networks (GANs) trained to map realistic scans to noiseless, uncluttered ones. The pipeline includes a primary GAN trained to subtract the background and segment the foreground, partially denoising the results and recovering some missing parts. Optionally, a second GAN is trained to further denoise and recover based on the results of the first process. Both GANs are trained only on synthetic data generated from the 3D models of the target objects. Accordingly, the solution is highly adaptive and easily deployable. By making real scans appear like synthetic images, the accuracy of recognition methods trained on synthetic data is improved, helping to close the domain discrepancy experienced in the present state of the art.

[0032] The proposed method does not require real-world depth images and their ground-truth information, which are usually tedious if not impossible to obtain. The solution can be trained over realistic modeled images generated by an enhanced sensor simulation pipeline that simulates sensor noise and environmental factors. The pipeline is configured to generate the following from 3D models: 1) depth images with realistic noise and realistic or random background (input of the first GAN); and 2) the equivalent images without noise and background (same viewpoint, clean depth - target of both GANs).

[0033] In addition to the use of a sensor simulation pipeline to obtain realistic training data, an extensive data augmentation procedure is used online when feeding the training images to the GANs. At every iteration, the input images undergo a series of random transformations such as background noise, foreground object distortion, random occlusions, and small linear transformations (e.g., translation). This randomized procedure makes the training data much more challenging for the GANs and compensates for possible biases of the simulation pipeline.

[0034] According to an embodiment, the solution uses two GANs, each made of two deep convolutional neural networks (CNNs). A first, generator network is trained to take as input a real depth scan and to return an image that resembles a synthetic image, using synthetic images as targets during its training (performing image-to-image translation / style transfer). A second, discriminator network learns to classify between real and synthesized pairs of images, and evaluates the results of the first network. The pair of GANs use standard architectures for their networks (e.g., DCGAN / image-to-image translation GAN) edited to process depth images (e.g., 16 bpp).
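By way of a non-limiting illustration, 16-bit depth values are typically rescaled before being fed to such networks. The following sketch shows one plausible normalization; the use of PyTorch/NumPy, millimetre units, and a 10 m clamp are assumptions for illustration only and are not specified by this disclosure:

```python
import numpy as np
import torch

def depth_to_tensor(depth_u16, max_depth_mm=10000.0):
    """Convert a 16-bit depth map to a 1 x H x W float tensor in [-1, 1].

    The clamp range is an illustrative assumption (it loosely mirrors the
    "clamped Z-buffer" targets mentioned later in this description).
    """
    depth = depth_u16.astype(np.float32)
    depth = np.clip(depth, 0.0, max_depth_mm) / max_depth_mm   # scale to [0, 1]
    depth = depth * 2.0 - 1.0                                   # shift to [-1, 1]
    return torch.from_numpy(depth).unsqueeze(0)

def tensor_to_depth(t, max_depth_mm=10000.0):
    """Inverse mapping, back to a 16-bit depth map."""
    depth = (t.squeeze(0).clamp(-1.0, 1.0).numpy() + 1.0) / 2.0 * max_depth_mm
    return depth.astype(np.uint16)
```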

[0035] The first, primary or generator GAN is trained to segment the foreground out of the input real images and then to smooth or recover the object shape. This is done by trying to map realistic images to their background-less, noiseless equivalents. In other words, rather than trying to provide simulated training data that closely approximates real-world conditions, this approach starts with the real-world images and attempts to transform them to resemble the images modeled from the CAD data that are used to train the GAN.

[0036] The second GAN may be considered optional and is trained to map the images output by the first GAN again to their corresponding noiseless modeled images (also background-less). In this way the second GAN may focus on further smoothing and recovering the target objects in the image. The second GAN does not need to learn the segmentation already done by the first GAN.

[0037] Optionally, real depth scans may be used to fine-tune the method. For each real-world image, a 3D model of its foreground and the viewpoint information is needed as ground-truth. Using the simulation pipeline, a noiseless image of the foreground from the same viewpoint can thus be generated. This synthetic image is used both 1) as a mask to crop the foreground out of the real image, obtaining a background-less real scan which will be used as a target of the first GAN as well as an input to the second GAN; and 2) as the target image of the second GAN.
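As a hedged illustration of this masking step (the convention that zero-valued pixels in the rendering mark empty background is an assumption about the rendering pipeline, not a statement of this disclosure):

```python
import numpy as np

def crop_foreground(real_depth, synthetic_depth):
    """Crop the foreground out of a real scan using the rendered noiseless view as a mask.

    Pixels where the synthetic rendering is zero are treated as background and
    removed from the real depth image (element-wise / Hadamard masking).
    """
    mask = (synthetic_depth > 0).astype(real_depth.dtype)
    return real_depth * mask
```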

[0038] A method for cropping and removing noise from captured depth scans is described here and comprises two main steps:

[0039] 1. The use of a generative adversarial neural network (GAN) to extract the foreground out of the input real scans, and partially smoothen the results while recovering part of the object's shape; and

[0040] 2. The use of an optional second GAN to further cancel the sensor noise and fill the missing parts of the foreground.

[0041] FIG. 3 is a block diagram of a method for cropping and removing noise from captured depth image scans according to an embodiment of the present invention. Real-world depth images 301 are captured and used as input to a first pre-processor GAN 310. Pre-processor GAN 1 is trained using synthetic images that are derived from information contained in CAD files describing the design of the object in the image scans. Using these synthetic images as a target, pre-processor GAN 1 outputs a cleaned image 311 representative of the real-world image 301, cropped and with background and foreground noise removed. A second GAN 320 may be added to receive the cleaned images 311 output from the first GAN 310. The second GAN 320 is also trained using the synthetic images generated from 3D CAD information and serves to fine-tune the output image 311 from the first stage and further clean or restore elements of the object in a fine-tuned output image 321. The fine-tuned output image 321 may be used in other processing, including object recognition applications or pose estimation for the object.
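A minimal sketch of this two-stage inference chain is given below, assuming a PyTorch implementation in which both generators are already-trained modules; the function and argument names are placeholders, not part of this disclosure:

```python
import torch

@torch.no_grad()
def clean_depth_image(real_depth, gan1_generator, gan2_generator=None):
    """Run the two-stage pipeline of FIG. 3 on a depth tensor of shape (N, 1, H, W).

    gan1_generator corresponds to GAN 310 (background subtraction and foreground
    segmentation); the optional gan2_generator corresponds to GAN 320 (further
    denoising and restoration of object features).
    """
    cleaned = gan1_generator(real_depth)        # GAN 310: cleaned image 311
    if gan2_generator is not None:
        cleaned = gan2_generator(cleaned)       # GAN 320: fine-tuned output image 321
    return cleaned
```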

[0042] FIG. 9 is a process flow diagram for a two-stage cleaning process for captured real-world depth images according to an embodiment of the present invention. In a first step, real-world depth image scans are provided to a first GAN 910 that is trained with synthetic images generated from 3D CAD information for the object to be recognized. The real-world depth images are cleaned using the first GAN to produce a cleaned depth image 920. The images cleaned by the first GAN may be provided to a second GAN trained by the synthetic images, which fine-tunes the cleaning of the real-world depth images to produce fine-tuned cleaned images 930. The second GAN may provide additional noise reduction or feature restoration to the output images, fine-tuning the process of the first GAN. The fine-tuned cleaned depth images output by the second GAN may then be used for other applications, such as object recognition or pose estimation of the objects captured in the original real-world depth images.

[0043] Once trained using the chosen rendering method, the whole pipeline smoothly chains the different steps, processing the depth images in real time so that they can be used as input to recognition algorithms trained on synthetic data from the same rendering process.

[0044] The details of each step and the accompanying training process will now be described.

[0045] A preprocessing GAN is used as the first or primary GAN. For its training, the first GAN requires:

• A 3D model of each object of the target dataset;

• A rendering pipeline configured to simulate a target sensor, generating realistic depth images;

• One or more background generation methods (e.g. simplex noise, patches from depth scenes and the like).

[0046] The architecture for the primary GAN may be selected from the following options. In preferred embodiments, one of the following two GAN architectures is chosen to generate a cleaner, uncluttered image from the input real-world image. While these two architectures may be used depending on the target use-case, other GAN architectures may be considered and fall within the scope of this disclosure.

[0047] Image-to-image GAN

[0048] A standard image-to-image GAN architecture and its loss function may be used for the primary GAN. The architecture of the discriminator (second network) follows the DCGAN architecture: a deep convolutional network with leaky ReLUs and a sigmoid activation for the output. It takes as input the original realistic image and either the target noiseless, background-less image (a "real" pair) or the output from the generator (first network) (a "fake" pair), stacked into a single image. Since the role of the discriminator is to distinguish the "fake" pairs from the "real" ones, its activation layer represents its deductions, each activation of this layer representing the discriminator's guess for a patch of the input data. A binary cross entropy loss function is used.
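A minimal sketch of such a patch discriminator over stacked depth-image pairs is given below, assuming PyTorch; the layer count and channel widths are illustrative assumptions and not specified by this disclosure:

```python
import torch
import torch.nn as nn

class PairPatchDiscriminator(nn.Module):
    """DCGAN-style convolutional discriminator over stacked depth-image pairs."""

    def __init__(self, in_channels=2, base=64):
        super().__init__()
        widths = [in_channels, base, base * 2, base * 4]
        layers = []
        for c_in, c_out in zip(widths[:-1], widths[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        # One-channel output map: each activation is the real/fake guess for a patch.
        layers += [nn.Conv2d(widths[-1], 1, kernel_size=4, stride=1, padding=1),
                   nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, input_depth, candidate):
        pair = torch.cat([input_depth, candidate], dim=1)   # stack the pair into a single image
        return self.net(pair)

bce = nn.BCELoss()   # binary cross entropy over the patch predictions
```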

[0049] As a second option, a U-Net architecture is used for the generator (first network), with the original real-world depth data as input and the generator's activation layer returning a cropped image. To train the generator to make its output resemble the target data and to fool the discriminator, the generator's loss function is a combination of a cross-entropy evaluation of the output and target images and the reversed discriminator loss. Both networks are edited to process depth images (16 bpp).
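A hedged sketch of this combined generator objective follows; an L1 distance is used here only as a common stand-in for the reconstruction comparison, and the relative weight is an assumption since no value is given in this disclosure:

```python
import torch
import torch.nn as nn

adv_criterion = nn.BCELoss()   # adversarial term ("fool the discriminator")
rec_criterion = nn.L1Loss()    # pixel-wise reconstruction stand-in for the output/target comparison

def generator_loss(fake_patch_scores, generated, target_clean, lam=100.0):
    """Adversarial term plus weighted reconstruction term against the clean synthetic target."""
    adv = adv_criterion(fake_patch_scores, torch.ones_like(fake_patch_scores))
    rec = rec_criterion(generated, target_clean)
    return adv + lam * rec
```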

[0050] Image-to-image GAN extended with Task-Specific Loss

[0051] In some embodiments, the previous architecture may be extended by considering the target recognition network while training the GAN. This task-specific network is itself trained on synthetic data and may be used as another "pseudo-discriminator" during the training of the GAN, with the task-specific network kept fixed.

[0052] The images from the generator are given to the trained (fixed) recognition network, and the output of this network is compared to its output on the ground-truth noiseless image. The distance between the two feature vectors/estimations (the vector/estimation on the GAN output versus on the ground-truth z-buffer image) is used as a third loss (along with the generator loss and discriminator loss) to train the generator. This permits the GAN to be more "aware" of the semantic information (e.g., the different objects' classes and poses).
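A minimal sketch of this task-specific loss, assuming a PyTorch recognition network whose weights are excluded from the optimizer (the distance metric is an illustrative choice):

```python
import torch
import torch.nn.functional as F

def task_specific_loss(recognizer, generated, target_clean):
    """Distance between the fixed recognizer's outputs on the GAN result and on the ground truth.

    Only the generator receives gradients, through `generated`; the recognition
    network's parameters are not updated.
    """
    feat_fake = recognizer(generated)          # features/estimations on the GAN output
    with torch.no_grad():
        feat_real = recognizer(target_clean)   # features on the ground-truth z-buffer image
    return F.mse_loss(feat_fake, feat_real)
```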

[0053] This optional extension of the GAN architecture may be used when:

• The target recognition method has been trained and fixed already and includes a neural network which can easily back-propagate the task-specific loss.

• The GAN is receiving too much variation among the target objects and needs to be more aware of the objects' class information to recover missing parts (e.g., for use-cases with partial occlusion).

[0054] Training

[0055] FIG. 1 shows a block diagram depicting the training of the first GAN (generator) 130 according to embodiments of the present invention. 3D CAD information 101 is used to generate synthetic images of an object to be recognized. Z-buffer information is used for rendering 103, and noise 113 is added to simulate inconsistencies such as, by way of example, sensor variations, surface geometries and light reflections off surfaces of the object. Depth sensor information 105 is used to generate simulated views of the modeled object, which are augmented using simulation pipeline 115. Using a large dataset of realistic scans generated by the simulation pipeline from a multitude of different viewpoints (plus, optionally, noiseless synthetic images 111), with background information blended in, as input, and the equivalent noiseless images without background as target, the GAN is trained to segment the foreground out. This training takes place as follows (a sketch of one training iteration is given after the list):

[0056] At every iteration:

• The input images 131, 133 generated from the 3D CAD data 101 are randomly augmented by the simulation pipeline 115 to make them more challenging, as described in greater detail below;

• The discriminator is trained both on a "real" pair and "fake" pair, using the latest state of the generator 130;

• The generator 130 is trained over a batch of input/target data 131, 133. Once converged, the weights of the first GAN are fixed and saved 135.
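The sketch below illustrates one such training iteration, assuming the PyTorch components introduced above; the optimizers, the augmentation callable and the loss weighting are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def train_iteration(generator, discriminator, g_opt, d_opt, bce,
                    noisy_input, clean_target, augment, lam=100.0):
    # Step 1: randomly augment the synthetic input batch (simulation pipeline 115).
    noisy_input = augment(noisy_input)

    # Step 2: train the discriminator on a "real" pair and a "fake" pair,
    # using the latest state of the generator.
    with torch.no_grad():
        fake = generator(noisy_input)
    d_real = discriminator(noisy_input, clean_target)
    d_fake = discriminator(noisy_input, fake)
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Step 3: train the generator over the same input/target batch.
    fake = generator(noisy_input)
    g_scores = discriminator(noisy_input, fake)
    g_loss = bce(g_scores, torch.ones_like(g_scores)) + lam * F.l1_loss(fake, clean_target)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```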

[0057] Data Augmentation

[0058] At every iteration, the input images 231 (noiseless or pseudo-realistic) undergo a series of random transformations via simulation pipeline 115, such as:

• Linear Transforms (e.g. translation)

• The target objects may undergo small X-Y translations, to cover cases when detected objects aren't perfectly centered in the real images. However, it is not desired to apply linear transforms that are excessively large, or the GAN may start recognizing peripheral elements in the real images (e.g. other target objects appearing in the background).

[0059] Background

• Random background data 117 may be generated and added to the synthetic images to provide the generator GAN 1 130 with an additional basis for distinguishing objects from different background scenarios. The background information 107 may include randomly generated elements 117 that are combined 127 with the augmented synthetic image data 125 to provide augmented input images 133 to the generator GAN 130.

[0060] Background noise:

• In order to better simulate possible background variations, several noise types 113 are introduced in some embodiments. These noise types are often used in procedural content generation: fractal Perlin noise, cellular noise and white noise. FIG. 6 is a visual depiction of these noise types, showing the Perlin 601, cellular 603 and white 605 noise types. These noise patterns are generated using a vast frequency range, further increasing the number of possible background variations.
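As a hedged illustration of filling the empty background of a synthetic depth patch with procedural noise, the sketch below shows only the white-noise case in NumPy; the Perlin and cellular variants would come from a procedural-noise generator, and the assumption that the depth patch is a float image in [0, 1] with zero marking empty background is illustrative:

```python
import numpy as np

def add_noise_background(synthetic_depth, low=0.0, high=1.0, rng=None):
    """Blend random values into the empty background of a synthetic depth patch."""
    rng = rng or np.random.default_rng()
    background = rng.uniform(low, high, size=synthetic_depth.shape)
    out = synthetic_depth.astype(np.float32)
    mask = synthetic_depth == 0          # zero = empty background (assumption)
    out[mask] = background[mask]
    return out
```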

[0061] Foreground object distortion:

• A Perlin noise generator is used to create two vector fields. The first field represents the X component of the offset vector, whereas the second represents the Y component. Then, an inverse warping procedure is used to generate a warped image, treating the stored vectors as offset values for each pixel of the original image. Since the noise values span the [-1; 1] range by design, a multiplicative warping factor is introduced, which allows for more severe distortions.

[0062] FIG. 7 is an illustration of an image as it is subjected to the multiplicative warping factor. Image 701 shows the image with a warping factor of zero, image 703 shows the image when a warping factor of 2 is applied, image 705 shows the image when a warping factor of 6 is applied, image 707 shows the image when the warping factor is 8, and image 709 shows the image when a warping factor of 10 is applied to the original image.
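A minimal sketch of this distortion follows; a Gaussian-smoothed random field normalized to roughly [-1, 1] stands in for the Perlin vector fields, and the nearest-neighbour sampling is an illustrative simplification:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def warp_foreground(image, warp_factor=6.0, smoothness=8.0, rng=None):
    """Inverse-warp a depth patch using two smooth random offset fields scaled by warp_factor."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    field_x = gaussian_filter(rng.uniform(-1.0, 1.0, (h, w)), smoothness)   # X offsets
    field_y = gaussian_filter(rng.uniform(-1.0, 1.0, (h, w)), smoothness)   # Y offsets
    field_x /= max(np.abs(field_x).max(), 1e-8)     # renormalize to roughly [-1, 1]
    field_y /= max(np.abs(field_y).max(), 1e-8)
    ys, xs = np.indices((h, w))
    # Inverse warping: each output pixel is read from an offset source location.
    src_x = np.clip(xs + warp_factor * field_x, 0, w - 1).round().astype(int)
    src_y = np.clip(ys + warp_factor * field_y, 0, h - 1).round().astype(int)
    return image[src_y, src_x]
```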

[0063] Random occlusions:

• Occlusions are introduced to serve two different purposes: the first is to teach the network to reconstruct the parts of the object that are partially occluded; the second is to enforce invariance to additional objects within the patch, i.e., to ignore them and treat them as background. Occlusion objects are generated by walking around a circle, taking random angular steps and random radii at each step, as sketched in code below. The generated polygons are then filled with arbitrary depth values and painted on top of the patch. This randomized procedure makes the training data much more challenging for the GANs and compensates for possible biases of the simulation pipeline. FIG. 8 provides examples of synthetic images having partial occlusions. The three images 801a, 801b and 801c depict object 810. Image 801a includes sample occlusions 820, 821. Image 801b includes different occlusions 830 and 831. Image 801c includes object 810 and occlusion 840.
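A hedged sketch of this occlusion generation is given below; polygon rasterisation uses scikit-image, and the centre, radius and depth ranges are illustrative assumptions:

```python
import numpy as np
from skimage.draw import polygon

def add_random_occlusion(patch, max_radius=20.0, depth_range=(0.2, 0.8), rng=None):
    """Paint a random polygonal occluder, built by walking around a circle, onto a depth patch."""
    rng = rng or np.random.default_rng()
    h, w = patch.shape
    cy, cx = rng.uniform(0, h), rng.uniform(0, w)        # random occluder centre
    rows, cols = [], []
    angle = 0.0
    while angle < 2.0 * np.pi:
        radius = rng.uniform(0.3, 1.0) * max_radius      # random radius at each step
        rows.append(cy + radius * np.sin(angle))
        cols.append(cx + radius * np.cos(angle))
        angle += rng.uniform(0.2, 1.2)                   # random angular step
    rr, cc = polygon(np.array(rows), np.array(cols), shape=patch.shape)
    out = patch.copy()
    out[rr, cc] = rng.uniform(*depth_range)              # fill with an arbitrary depth value
    return out
```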

[0064] Preprocessing GAN 2

[0065] Requirements

[0066] For its training, the second GAN requires:

• The 3D model of each object of the target dataset;

• A rendering pipeline configured to simulate the target sensor, generating realistic depth images;

• A similar or different pipeline configured to generate noiseless, clean depth images (e.g. clamped Z-buffer).

[0067] GAN Architecture

[0068] The second GAN is defined the same way as the first one, choosing between the two architectures depending on the use-case. The exception is the loss function of the generator, whose first part (the comparison of the generated image with the target one) is edited to heavily penalize any change done to the background (i.e., using the input data as a binary mask and a Hadamard product).
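One possible reading of this edited reconstruction term is sketched below in PyTorch; the specific penalty weight and the zero-as-background convention on the (already background-subtracted) input are assumptions, as the disclosure only states that background changes are heavily penalized:

```python
import torch

def masked_reconstruction_loss(generated, target_clean, input_image, background_weight=10.0):
    """Reconstruction term that heavily penalizes changes to the background.

    The input image is thresholded into a binary foreground mask and the per-pixel
    error is reweighted via an element-wise (Hadamard) product.
    """
    foreground = (input_image > 0).float()
    weights = foreground + background_weight * (1.0 - foreground)
    return (weights * (generated - target_clean).abs()).mean()
```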

[0069] Training

[0070] FIG. 2 illustrates the training of the second GAN according to embodiments of the present invention. The training of the second GAN is similar to the training of the first, with only the input and target datasets changing, for example:

• The images output by the now-fixed first GAN 130 (when given realistic augmented images as input) are used as input 232;

• The background-less clean depth data 231 are used as target. Once converged, the weights of the second GAN 230 can be saved, finalizing the training of the whole pipeline 235.

[0071] Fine Tuning

[0072] If available, real depth scans can be used to fine-tune the method. For each real image, a 3D model of its foreground and the viewpoint information is needed as ground-truth. Using the 3D engine configured to generate noiseless depth images, clean images of the foreground from the same viewpoints can thus be generated. Each of these synthetic images is used both:

• as a mask to crop the foreground out of the real image, obtaining a background-less real scan which will be used as target of the first GAN and input of the second GAN;

• as the target image of the second GAN.

[0073] Usage

[0074] Once trained, the proposed pipeline can simply be used on every real-world depth scan containing one of the target objects to extract and clean its depth information. The result can then be used for various applications (e.g., instance recognition or pose estimation).

[0075] FIG. 4 provides examples of results generated by the generator (GAN 1); each image triplet is associated with an object. Column a is the synthetic image input to GAN 1 for training, center column b is the output of GAN 1, i.e., the denoised and uncluttered version of the input image, and column c shows the ground-truth synthetic image (before noise and background were added to the synthetic input images).

[0076] FIG. 5 shows the output of GAN 1 during testing of real-world depth images. Each triplet of images corresponds to an object to be recognized. Column a represents the real-world depth image as it is captured. Center column b shows the image as it is output from the first GAN, with the background removed and the object cleaned to resemble a noiseless synthetic image of the object. Column c is ground truth for the real-world depth image when the object is cropped from the originally captured image shown in column a.

[0077] The described methods and systems represent improvements over prior approaches to identifying objects in images, such as depth imaging applications. Rather than generating simulated images that attempt to mimic real-world interference and noise, the approach starts with real-world depth images and processes them in a GAN pipeline, transforming them into denoised and uncluttered images that resemble the synthetic images generated from the CAD information. As a result, more accurate object detection and pose estimation may be achieved.

[0078] FIG. 10 illustrates an exemplary computing environment 1000 within which embodiments of the invention may be implemented. Computers and computing environments, such as computer system 1010 and computing environment 1000, are known to those of skill in the art and thus are described briefly here.

[0079] As shown in FIG. 10, the computer system 1010 may include a communication mechanism such as a system bus 1021 or other communication mechanism for communicating information within the computer system 1010. The computer system 1010 further includes one or more processors 1020 coupled with the system bus 1021 for processing the information.

[0080] The processors 1020 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as used herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks and may comprise any one or combination of, hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer. A processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between. A user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. A user interface comprises one or more display images enabling user interaction with a processor or other device.

[0081] Continuing with reference to FIG. 10, the computer system 1010 also includes a system memory 1030 coupled to the system bus 1021 for storing information and instructions to be executed by processors 1020. The system memory 1030 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 1031 and/or random access memory (RAM) 1032. The RAM 1032 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The ROM 1031 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 1030 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 1020. A basic input/output system 1033 (BIOS) containing the basic routines that help to transfer information between elements within computer system 1010, such as during start-up, may be stored in the ROM 1031. RAM 1032 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 1020. System memory 1030 may additionally include, for example, operating system 1034, application programs 1035, other program modules 1036 and program data 1037.

[0082] The computer system 1010 also includes a disk controller 1040 coupled to the system bus 1021 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 1041 and a removable media drive 1042 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid state drive). Storage devices may be added to the computer system 1010 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).

[0083] The computer system 1010 may also include a display controller 1065 coupled to the system bus 1021 to control a display or monitor 1066, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user. The computer system includes an input interface 1060 and one or more input devices, such as a keyboard 1062 and a pointing device 1061 , for interacting with a computer user and providing information to the processors 1020. The pointing device 1061 , for example, may be a mouse, a light pen, a trackball, or a pointing stick for communicating direction information and command selections to the processors 1020 and for controlling cursor movement on the display 1066. The display 1066 may provide a touch screen interface which allows input to supplement or replace the communication of direction information and command selections by the pointing device 1061. In some embodiments, an augmented reality device 1067 that is wearable by a user, may provide input/output functionality allowing a user to interact with both a physical and virtual world. The augmented reality device 1067 is in communication with the display controller 1065 and the user input interface 1060 allowing a user to interact with virtual items generated in the augmented reality device 1067 by the display controller 1065. The user may also provide gestures that are detected by the augmented reality device 1067 and transmitted to the user input interface 1060 as input signals.

[0084] The computer system 1010 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 1020 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 1030. Such instructions may be read into the system memory 1030 from another computer readable medium, such as a magnetic hard disk 1041 or a removable media drive 1042. The magnetic hard disk 1041 may contain one or more datastores and data files used by embodiments of the present invention. Datastore contents and data files may be encrypted to improve security. The processors 1020 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 1030. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.

[0085] As stated above, the computer system 1010 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term "computer readable medium" as used herein refers to any medium that participates in providing instructions to the processors 1020 for execution. A computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 1041 or removable media drive 1042. Non-limiting examples of volatile media include dynamic memory, such as system memory 1030. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 1021. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.

[0086] The computing environment 1000 may further include the computer system 1010 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 1080. Remote computing device 1080 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 1010. When used in a networking environment, computer system 1010 may include modem 1072 for establishing communications over a network 1071 , such as the Internet. Modem 1072 may be connected to system bus 1021 via user network interface 1070, or via another appropriate mechanism.

[0087] Network 1071 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 1010 and other computers (e.g., remote computing device 1080). The network 1071 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, and Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 1071.

[0088] An executable application, as used herein, comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input. An executable procedure is a segment of code or machine readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.

[0089] A graphical user interface (GUI), as used herein, comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions. The GUI also includes an executable procedure or executable application. The executable procedure or executable application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user. The processor, under control of an executable procedure or executable application, manipulates the GUI display images in response to signals received from the input devices. In this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.

[0090] The functions and process steps herein may be performed automatically or wholly or partially in response to user command. An activity (including a step) performed automatically is performed in response to one or more executable instructions or device operation without user direct initiation of the activity.

[0091] The system and processes of the figures are not exclusive. Other systems, processes and menus may be derived in accordance with the principles of the invention to accomplish the same objectives. Although this invention has been described with reference to particular embodiments, it is to be understood that the embodiments and variations shown and described herein are for illustration purposes only. Modifications to the current design may be implemented by those skilled in the art, without departing from the scope of the invention. As described herein, the various systems, subsystems, agents, managers and processes can be implemented using hardware components, software components, and/or combinations thereof.