

Title:
PROBABILISTIC OBJECT MODELS FOR ROBUST, REPEATABLE PICK-AND-PLACE
Document Type and Number:
WIPO Patent Application WO/2019/079598
Kind Code:
A1
Abstract:
A method includes, as a robot encounters an object, creating a probabilistic object model to identify, localize, and manipulate the object, the probabilistic object model using light fields to enable efficient inference for object detection and localization while incorporating information from every pixel observed from across multiple camera locations.

Inventors:
TELLEX STEFANIE (US)
OBERLIN JOHN (US)
Application Number:
PCT/US2018/056514
Publication Date:
April 25, 2019
Filing Date:
October 18, 2018
Assignee:
UNIV BROWN (US)
International Classes:
G06T7/00; G06V10/772; G06V10/84
Domestic Patent References:
WO2012089928A12012-07-05
Foreign References:
US20150269436A12015-09-24
US9333649B12016-05-10
RU2528140C12014-09-10
Attorney, Agent or Firm:
HOLMANDER, Daniel, J. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method comprising:

as a robot encounters an object, creating a probabilistic object model to identify, localize, and manipulate the object, the probabilistic object model using light fields to enable efficient inference for object detection and localization while incorporating information from every pixel observed from across multiple camera locations.

2. The method of claim 1 wherein creating the probabilistic object model comprises rendering an observed view from images and poses at a particular plane.

3. The method of claim 2 wherein rendering the observed view from images and poses at the particular plane comprises computing a maximum likelihood estimate from image data by finding a sample mean and variance for pixel values observed at each cell using a calibration function.

4. The method of claim 3 wherein creating the probabilistic object model further comprises rendering a predicted view given a scene with objects and appearance models.

5. The method of claim 4 wherein creating the probabilistic object model further comprises inferring a scene.

6. The method of claim 5 wherein inferring the scene comprises computing an observed map and incrementally adding objects to the scene until the discrepancy is less than a predefined threshold.

7. The method of claim 6 wherein creating the probabilistic object model further comprises inferring the object.

8. A method comprising: using light fields to generate a probabilistic generative model for objects, enabling a robot to use all information from a camera to achieve precision.

9. The method of claim 8 further comprising:

combining learned models to form models for object categories that are metrically grounded and used to perform localization and grasp prediction.

10. A system comprising:

a robot arm;

multiple camera locations; and

a process for creating a probabilistic object model to identify, localize, and manipulate an object, the probabilistic object model using light fields to enable efficient inference for object detection and localization while incorporating information from every pixel observed from across the multiple camera locations.

11. The system of claim 10 wherein creating the probabilistic object model comprises rendering an observed view from images and poses at a particular plane.

12. The system of claim 11 wherein rendering the observed view from images and poses at the particular plane comprises computing a maximum likelihood estimate from image data by finding a sample mean and variance for pixel values observed at each cell using a calibration function.

13. The system of claim 12 wherein creating the probabilistic object model further comprises rendering a predicted view given a scene with objects and appearance models.

14. The system of claim 13 wherein creating the probabilistic object model further comprises inferring a scene.

15. The system of claim 14 wherein inferring the scene comprises computing an observed map and incrementally adding objects to the scene until the discrepancy is less than a predefined threshold.

16. The system of claim 15 wherein creating the probabilistic object model further comprises inferring the object.

Description:
Probabilistic Object Models for

Robust, Repeatable Pick-and-Place

STATEMENT REGARDING GOVERNMENT INTEREST

[001] None.

CROSS REFERENCE TO RELATED APPLICATIONS

[002] This application claims benefit from U.S. Provisional Patent Application Serial No. 62/573,890, filed October 18, 2017, which is incorporated by reference in its entirety.

BACKGROUND OF THE INVENTION

[003] The present invention relates generally to object recognition and manipulation, and more particularly to probabilistic object models for robust, repeatable pick-and-place.

[004] In general, most robots cannot pick up most objects most of the time. Yet for effective human-robot collaboration, a robot must be able to detect, localize, and manipulate the specific objects that a person cares about. For example, a household robot should be able to effectively respond to a person's commands such as "Get me a glass of water in my favorite mug," or "Clean up the workshop." For these tasks, extremely high reliability is needed; if a robot can pick at 95% accuracy, it will still fail one in twenty times. Considering it might be doing hundreds of picks each day, this level of performance will not be acceptable for an end-to-end system, because the robot will miss objects each day, potentially breaking a person's things.

SUMMARY OF THE INVENTION

[005] The following presents a simplified summary of the innovation in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is intended to neither identify key or critical elements of the invention nor delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.

[006] In an aspect, the invention features a method including, as a robot encounters an object, creating a probabilistic object model to identify, localize, and manipulate the object, the probabilistic object model using light fields to enable efficient inference for object detection and localization while incorporating information from every pixel observed from across multiple camera locations.

[007] In another aspect, the invention features a method including using light fields to generate a probabilistic generative model for objects, enabling a robot to use all information from a camera to achieve precision.

[008] In still another aspect, the invention features a system including a robot arm, multiple camera locations, and a process for creating a probabilistic object model to identify, localize, and manipulate an object, the probabilistic object model using light fields to enable efficient inference for object detection and localization while incorporating information from every pixel observed from across the multiple camera locations.

[009] These and other features and advantages will be apparent from a reading of the following detailed description and a review of the associated drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of aspects as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] These and other features, aspects, and advantages of the present invention will become better understood with reference to the following description, appended claims, and accompanying drawings where:

[0011] FIG. 1 illustrates how an exemplary model enables a robot to learn to robustly detect, localize and manipulate objects.

[0012] FIG. 2 illustrates an exemplary probabilistic object map model for reasoning about objects.

[0013] FIG. 3 illustrates exemplary objects.

[0014] FIG. 4 illustrates an exemplary observed view for a scene.

[0015] FIG. 5 illustrates automatically generated thumbnails and observed views for two configurations detected for a yellow square duplo.

DETAILED DESCRIPTION

[0016] The subject innovation is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It may be evident, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the present invention.

[0017] Robust object perception is a core capability for a manipulator robot. Current perception techniques do not reach levels of precision necessary for a home robot, which might be requested to pick hundreds of times each day and where picking errors might be very costly, resulting in broken objects and ultimately, in the person's rejection of the robot. To address this problem, the present invention enables the robot to learn a model of an object using light fields, achieving very high levels of robustness when detecting, localizing, and manipulating objects. We present a generative model for object appearance and pose given observed camera images. Existing approaches use features, which discard information, because inference in a full generative model is intractable. We present a method that uses light fields to enable efficient inference for object detection and localization, while incorporating information from every pixel observed from across multiple camera locations. Using the learned model, the robot can segment objects, identify object instances, localize them, and extract three dimensional (3D) structure. For example, our method enables a Baxter robot to pick an object hundreds of times in a row without failures, and to autonomously create models of objects for many hours at a time. The robot can identify, localize and grasp objects with high accuracy using our framework; modeling one side of a novel object takes a Baxter robot approximately 30 seconds and enables detection and localization to within 2 millimeters. Furthermore, the robot can merge object models to create metrically grounded models for object categories, in order to improve accuracy on previously unencountered objects.

[0018] Existing systems for generic object detection and manipulation have lower accuracy than instance-based methods and cannot adapt on the fly to objects that do not work the first time. Instance-based approaches can have high accuracy but require object models to be provided in advance; acquiring models is a time-consuming process that is difficult for even an expert to perform. For example, the winning team from the Amazon Picking Challenge in 2015 used a system that required specific instance-based data to do pose estimation and still found that perception was a major source of system errors. Autonomously learning 3D object models requires ICP-based methods to localize objects, which are expensive to globally optimize, making them impractical for classification.

[0019] To achieve extremely high accuracy picking, we reframe the problem to one of adaptation and learning. When the robot encounters a novel object, its goal is to create a model that contains the information needed to identify, localize, and manipulate that object with extremely high accuracy and robustness. We term this information a probabilistic object model (POM). After learning a POM, the robot is able to robustly interact with the object, as shown in FIG. 1. We represent POMs using a pixel-based inverse graphics approach based around light fields. Unlike feature-based methods, pixel-based inverse graphics methods use all information obtained from a camera. Inference in inverse-graphics methods is computationally expensive because the program must search a very large space of possible scenes and marginalize over all possible images. Using the light field approach, a robot can automatically acquire a POM by exploiting its ability to change the environment in service of its perceptual goals. This approach enables the robot to obtain extremely high accuracy and repeatability at localizing and picking objects. Additionally, a robot can create metrically grounded models for object categories by merging POMs.

[0020] The present invention demonstrates that a Baxter robot can autonomously create models for objects. These models enable the robot to detect, localize, and manipulate the objects with very high reliability and repeatability, localizing to within 2 mm for hundreds of successful picks in a row. Our system and method enable a Baxter robot to autonomously map objects for many hours using its wrist camera.

[0021] We present a probabilistic graphical model for object detection, localization, and modeling. We first describe the model, then inference in the model. The graphical model is depicted in FIG. 2, while a table of variables appears in Table I.

[0022] Probabilistic Object Model

[0023] A goal is for a robot to estimate the objects in a scene, o^0 ... o^N, given observations of camera images, Z_1 ... Z_H, and associated camera poses, X_1 ... X_H.

[0024] Each object instance, o^n, consists of a pose, x^n, along with an index, t^n, which identifies the object type. Each object type, O^k, is a set of appearance modes. We rewrite the distribution using Bayes' rule:

P(o^0 ... o^N | Z_1 ... Z_H, X_1 ... X_H) ∝ P(Z_1 ... Z_H | o^0 ... o^N, X_1 ... X_H) × P(o^0 ... o^N)

[0025] We assume a uniform prior over object locations and appearances, so that only the first term matters in the optimization.

[0026] A generative, inverse graphics approach would assume each image is independent given the object locations, since if we know the true locations and appearances of the objects, we can predict the contents of the images using graphics:

P(Z_1 ... Z_H | o^0 ... o^N, X_1 ... X_H) = ∏_h P(Z_h | o^0 ... o^N, X_h)

[0027] This model is known as inverse graphics because it requires assigning a probability to images given a model of the scene. However, inference under these assumptions is intractable because of the very large space of possible images. Existing approaches turn to features such as SIFT or learned features using neural networks, but these approaches throw away information in order to achieve more generality. Instead we aim to use all information to achieve the most precise localization and manipulation possible.

[0028] To solve this problem, we introduce the light field grid, m, as a latent variable. We define a distribution over the light rays emitted from a scene. Using this generative model, we can then perform inferences about the objects in the scene, conditioned on observed light rays, R, where each ray corresponds to a pixel in one of the images, Z_h, combined with a camera calibration function that maps from pixel space to a light ray given the camera pose. We define a synthetic photograph, m, as an L x W array of cells in a plane in space. Each cell (l, w) ∈ m has a height z and scatters light at its (x, y, z) location. We assume each observed light ray arose from a particular cell (l, w), so that the parameters associated with each cell include its height z and a model of the intensity of light emitted from that cell.
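
As a concrete illustration of this representation, the following is a minimal sketch, in Python with NumPy, of a synthetic photograph as an L x W grid of per-cell statistics, with a helper that maps a point on the plane to its cell. The class and method names are assumptions for illustration only, not terminology from the patent.

```python
# Hypothetical sketch of the synthetic photograph m described above: an L x W
# grid of cells in a plane at height z, each accumulating running statistics of
# the pixel values carried by the light rays that intersect it.
import numpy as np

class SyntheticPhotograph:
    def __init__(self, length_cells, width_cells, cell_size_m, z_height_m):
        self.cell_size = cell_size_m          # e.g. 0.0025 m for a 0.25 cm grid
        self.z = z_height_m                   # plane height (table height by default)
        self.count = np.zeros((length_cells, width_cells))
        self.mean = np.zeros((length_cells, width_cells, 3))   # per-cell mean RGB
        self.m2 = np.zeros((length_cells, width_cells, 3))     # running squared deviations

    def cell_of_point(self, x, y):
        """Map a world (x, y) point on the plane to grid indices (l, w), or None if outside."""
        l, w = int(x / self.cell_size), int(y / self.cell_size)
        if 0 <= l < self.count.shape[0] and 0 <= w < self.count.shape[1]:
            return l, w
        return None

    def variance(self):
        """Per-cell sample variance of the observed pixel values."""
        n = np.maximum(self.count, 2.0)[..., None]
        return self.m2 / (n - 1.0)
```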

[0029] We integrate over m as a latent variable:

P(Z_1 ... Z_H | o^0 ... o^N, X_1 ... X_H) = ∫_m P(Z_1 ... Z_H | m, X_1 ... X_H) × P(m | o^0 ... o^N) dm

[0030] Then we factor this distribution assuming images are independent given the light field model m:

∫_m ∏_(l,w) P(R_lw | m_lw) × P(m_lw | o^0 ... o^N) dm          (Equation 6)

[0031] Here R_lw is the bundle of rays that arose from map cell (l, w), which can be determined by finding all rays that intersect the cell using the calibration function. This factorization assumes that each bundle of rays is conditionally independent given the cell parameters, an assumption valid for cells that do not actively emit light. We can render m as an image by showing the value of μ_lw for each cell as the pixel color; however, variance information is also stored. FIG. 4 shows example scenes rendered using this model. The first term in Equation 6 corresponds to light field rendering. The second term corresponds to the light field distribution in a synthetic photograph given a model of the scene as objects.

[0032] We assume each object has only a few configurations, c ∈ C, at which it may lie at rest. This assumption is not true in general; for example, a ball has an infinite number of such configurations. However, many objects with continuously varying families of stable poses can be approximated using a few sampled configurations. Furthermore, this assumption leads to straightforward representations for POMs. In particular, the appearance model, A_c, for an object is a model of the light expected to be observed from the object by a camera. At both modeling time and inference time, we render a synthetic photograph of the object from a canonical view (e.g., top down or head-on), enabling efficient inference in a lower-dimensional subspace instead of being required to do full 3D inference as in ICP. This lower-dimensional subspace enables much deeper and finer-grained search so that we can use all information from pixels to perform very accurate pose estimation.
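
To make this structure concrete, the following is a hedged sketch, building on the SyntheticPhotograph sketch above, of how a POM with a small set of resting configurations and per-configuration canonical-view appearance models might be organized. All class and field names are illustrative assumptions, not terminology from the patent.

```python
# Hypothetical organization of a probabilistic object model (POM): an object
# type O^k is a small set of resting configurations c, each carrying a
# canonical-view appearance model A_c (per-cell means, variances, and heights).
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class AppearanceModel:
    mean: np.ndarray        # H x W x 3 per-cell mean pixel values (canonical view)
    variance: np.ndarray    # H x W x 3 per-cell variances
    height: np.ndarray      # H x W per-cell z estimates (table height if not corrected)

@dataclass
class Configuration:
    appearance: AppearanceModel
    grasp_points: List[np.ndarray] = field(default_factory=list)   # learned or annotated grasps

@dataclass
class ProbabilisticObjectModel:
    name: str
    configurations: List[Configuration] = field(default_factory=list)

@dataclass
class ObjectInstance:
    model: ProbabilisticObjectModel
    configuration_index: int
    pose: np.ndarray        # (x, y, theta) in the 2D configuration space used for matching
```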

[0033] Inference

[0034] Algorithm 1 Render the observed view from images and poses at a particular plane, z.

[0035] Algorithm 2 Render the predicted view given a scene with objects and appearance models.

[0036] Algorithm 3 Infer scene.

[0037] Algorithm 4 Infer object.

[0038] Using Equation 6 to find the objects that maximize a scene is still challenging because it requires integrating over all values for the variances of the map cells and searching over a full 3D configuration of all objects. Instead we approximate it by performing inference in the space of light field maps, m, finding the map m and scene (objects) that maximize the integral.

[0039] First, we find the value for m that maximizes the likelihood of the observed images in the first term:

m* = argmax_m P(Z_1 ... Z_H | m, X_1 ... X_H)

[0040] This m* is the observed view. We compute the maximum likelihood estimate from image data analytically by finding the sample mean and variance for pixel values observed at each cell using the calibration function. An example observed view is shown in FIG. 4. We use the approximation that each ray is assigned to the cell it intersects without reasoning about obstructions. Algorithm 1 gives a formal description of how to compute it from calibrated camera images. This view can be computed by iterating through each image once, so the cost is linear in the number of pixels, or light rays. For a given scene (i.e., object configuration and appearance models), we can compute an m that maximizes the second term in Equation 6:

m̂ = argmax_m P(m | o^0 ... o^N)
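
A minimal sketch of this single-pass computation, reusing the SyntheticPhotograph sketch above, might look as follows. The function project_pixel_to_plane stands in for the camera calibration function, which is not spelled out here; it maps a pixel and camera pose to the (x, y) point where that pixel's ray meets the plane at height z.

```python
# Sketch of Algorithm 1 (render the observed view): one pass over all calibrated
# images, accumulating per-cell sample means and variances with Welford-style
# online updates, so the cost stays linear in the number of pixels.
import numpy as np

def render_observed_view(images, poses, photo, project_pixel_to_plane):
    for image, pose in zip(images, poses):
        height, width, _ = image.shape
        for v in range(height):
            for u in range(width):
                x, y = project_pixel_to_plane(u, v, pose, photo.z)
                cell = photo.cell_of_point(x, y)
                if cell is None:
                    continue                     # ray misses the mapped region
                l, w = cell
                pixel = image[v, u].astype(float)
                photo.count[l, w] += 1.0
                delta = pixel - photo.mean[l, w]
                photo.mean[l, w] += delta / photo.count[l, w]
                photo.m2[l, w] += delta * (pixel - photo.mean[l, w])
    return photo
```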

[0041] This m̂ is the predicted view; an example predicted view is shown in FIG. 4. This computation corresponds to a rendering process. Our model enables us to render compositionally over object maps and match in the 2D configuration space with three degrees of freedom instead of the 3D space with six.
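
One way to sketch this compositional rendering, reusing the structures from the earlier sketches, is to stamp each object's canonical-view appearance model into a copy of the background map at its (x, y, theta) pose. Interpolation and occlusion handling are deliberately simplified; this is an illustrative sketch under those assumptions, not the patent's implementation.

```python
# Sketch of Algorithm 2 (render the predicted view): composite each object's
# canonical-view appearance model into the background map at its 2D pose.
import numpy as np

def render_predicted_view(background, instances, cell_size):
    predicted_mean = background.mean.copy()
    predicted_var = background.variance()
    for inst in instances:
        app = inst.model.configurations[inst.configuration_index].appearance
        x0, y0, theta = inst.pose
        c, s = np.cos(theta), np.sin(theta)
        rows, cols = app.mean.shape[:2]
        cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
        for i in range(rows):
            for j in range(cols):
                # Transform the object-frame cell (i, j) into scene coordinates.
                dx, dy = (j - cx) * cell_size, (i - cy) * cell_size
                sx, sy = x0 + c * dx - s * dy, y0 + s * dx + c * dy
                l, w = int(round(sx / cell_size)), int(round(sy / cell_size))
                if 0 <= l < predicted_mean.shape[0] and 0 <= w < predicted_mean.shape[1]:
                    predicted_mean[l, w] = app.mean[i, j]
                    predicted_var[l, w] = app.variance[i, j]
    return predicted_mean, predicted_var
```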

[0042] To maximize the product over scenes o^0 ... o^N, we can compute the discrepancy between the observed view and predicted view, shown in FIG. 4. Finding the configuration of objects that minimizes this discrepancy corresponds to maximizing the log-likelihood of the scene under the observed images. Additionally, the robot can use this discrepancy as a mask for object segmentation by mapping a region, adding an object to the scene, and then remapping the region; discrepant regions correspond to cells associated with the new object. Algorithm 2 gives a description of how to compute it given a scene defined as object poses and appearance models.
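
The text does not fix the exact discrepancy measure, so the sketch below assumes a variance-normalized squared difference per cell, which is proportional to a per-cell Gaussian negative log-likelihood; thresholding it gives the segmentation mask described above.

```python
# Hypothetical per-cell discrepancy between observed and predicted views, plus
# a simple threshold that turns it into an object-segmentation mask.
import numpy as np

def discrepancy_map(observed_mean, observed_var, predicted_mean, predicted_var, eps=1e-6):
    var = observed_var + predicted_var + eps
    per_channel = (observed_mean - predicted_mean) ** 2 / var
    return per_channel.sum(axis=-1)              # L x W discrepancy per cell

def segmentation_mask(discrepancy, threshold=9.0):
    # Cells whose discrepancy exceeds the threshold are attributed to an object
    # that is not yet (or is incorrectly) in the predicted scene.
    return discrepancy > threshold
```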

[0043] To infer a scene, the system computes the observed map and then incrementally adds objects to the scene until the discrepancy is less than a predefined threshold. At this point, all of the discrepancy has been accounted for, and the robot can use this information to ground natural language referring expressions such as "next to the bowl," to find empty space in the scene where objects can be placed, and to infer object pose for grasping.
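
A hedged sketch of this loop, reusing the helper sketches above, is shown below; propose_best_object stands in for the search over object types, configurations, and 2D poses, which the text describes only at a high level.

```python
# Sketch of Algorithm 3 (infer scene): compute the observed view, then greedily
# add objects to the scene until the remaining discrepancy falls below a
# predefined threshold.
def infer_scene(observed, background, models, propose_best_object,
                cell_size, threshold, max_objects=10):
    instances = []
    while len(instances) < max_objects:
        pred_mean, pred_var = render_predicted_view(background, instances, cell_size)
        disc = discrepancy_map(observed.mean, observed.variance(), pred_mean, pred_var)
        if disc.sum() < threshold:
            break                          # all discrepancy accounted for
        best = propose_best_object(observed, pred_mean, pred_var, models)
        if best is None:
            break                          # nothing explains the remaining discrepancy
        instances.append(best)             # an ObjectInstance: type, configuration, pose
    return instances
```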

[0044] Learning Probabilistic Object Models

[0045] Modeling objects requires estimating O^k for each object k, including the appearance models for each stable configuration. We have created a system that enables a Baxter robot to autonomously map objects for hours at a time. We first divide the workspace for one arm of the robot into three regions: an input pile, a mapping space, and an output pile. The robot creates a model for the input pile and mapping space. Then a person adds objects to the input pile, and autonomous mapping begins. The robot picks an object from the input pile, moves it to the mapping space, maps the object, then moves it to the output pile.

[0046] To pick from the input pile before an object map has been acquired, the robot tries to pick repeatedly with generic grasp detectors. To propose grasps, the robot looks for patterns of discrepancy between the background map and the observed map. Once successful, it places the object in the mapping workspace and creates a model for the mapping workspace with the object. Regions in this model that are discrepant with the background are used to create an appearance model A for the object. After modeling has been completed, the robot clears the mapping workspace and moves on to the next object. Note that there is a trade-off between object throughput and information acquired about each object; to increase throughput we can reduce the number of pick attempts during mapping. In contrast, to truly master an object, the robot might try 100 or more picks as well as other exploratory actions before moving on.

[0047] If the robot loses the object during mapping (for example because it rolls out of the workspace), the robot returns to the input pile to take the next object. If the robot is unable to grasp the current object to move it out of the mapping workspace, it uses nudging behaviors to push the object out. If its nudging behaviors fail to clear the mapping workspace, it simply updates its background model and then continues mapping objects from the input pile.
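
The overall mapping session in the two paragraphs above can be summarized as a simple control loop. In the sketch below every robot call (pick_with_generic_grasps, map_object, nudge_out_of_workspace, and so on) is a placeholder for a behavior the text names but does not detail; none of these names come from the patent.

```python
# Hypothetical control loop for an autonomous mapping session: pick from the
# input pile with generic grasps, model the object in the mapping space, then
# move it to the output pile, falling back to nudging or remapping on failure.
def autonomous_mapping_session(robot, input_pile, mapping_space, output_pile,
                               pick_attempts=10):
    background = robot.map_region(mapping_space)             # model of the empty workspace
    while robot.detect_discrepancy(input_pile):              # objects remain in the input pile
        if not robot.pick_with_generic_grasps(input_pile, attempts=pick_attempts):
            continue                                          # retry the pile later
        robot.place(mapping_space)
        pom = robot.map_object(mapping_space, background)     # build the appearance model
        if pom is None:
            continue                       # object lost (e.g. rolled away); take the next one
        if robot.pick_known_object(pom, attempts=pick_attempts):
            robot.place(output_pile)                          # clear the mapping workspace
        elif not robot.nudge_out_of_workspace(mapping_space):
            background = robot.map_region(mapping_space)      # absorb the stuck object
```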

[0048] Inferring a New Configuration

[0049] After a new object has been placed in the workspace, the object model, O^k, is known, but the configuration, c, is unknown. The robot needs to decide when it has encountered a new object configuration given the collection of maps it has already made. We use an approximation of a maximum likelihood estimate with a geometric prior to decide when to make a new configuration for the object.
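
One plausible reading of this decision rule, sketched below under assumed names and an assumed log-prior value, is to score the observed view under each known configuration and spawn a new configuration only when no existing one explains the view better than the prior cost of adding one.

```python
# Hedged sketch of the new-configuration decision: compare the best
# log-likelihood over known configurations against a (geometric) prior cost of
# introducing a new configuration. The scoring function and prior value are
# assumptions for illustration, not taken from the text.
def needs_new_configuration(observed, model, score_fn, new_config_log_prior=-20.0):
    if not model.configurations:
        return True
    best_log_like = max(score_fn(observed, cfg) for cfg in model.configurations)
    return best_log_like < new_config_log_prior
```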

[0050] Learning Object Categories

[0051] Once POMs have been acquired, they can be merged to form models for object categories. Many standard computer vision approaches can be applied to light field views rather than images; for example deformable parts models for classification in light field space. These techniques may perform better on the light field view because they have access to metric information, variance in appearance models, as well as information about different configurations and transition information. As a proof of concept, we created models for several object categories. First our robot mapped a number of object instances. Then for each group of instances, we successively applied the optimization in Algorithm 4, with a bias to force the localization to overlap with the model as much as possible. This process results in a synthetic photograph for each category that is strongly biased by object shape and contains noisy information about color and internal structure. Examples of learned object categories appear in FIG. 3. We used our learned models to successfully detect novel instances of each category from a scene with distractor objects. We aim to train on larger data sets and apply more sophisticated modeling techniques to learn object categories and parts from large datasets of POMs.

[0052] Evaluation

[0053] We evaluate our model's ability to detect, localize, and manipulate objects using the Baxter robot. We selected a subset of YCB objects that were rigid and pickable by Baxter with the grippers in the 6 cm position, as well as a standard ICRA duckie. In our implementation the grid cell size is 0.25 cm, and the total size of the synthetic photograph is approximately 30 cm x 30 cm. We initialize the background variance to a higher value to account for changes in lighting and shadows.

[0054] Localization

[0055] To evaluate our model's ability to localize objects, we find the error of its position estimates by servoing repeatedly to the same location. For each trial, we moved the arm directly above the object, then moved to a random position and orientation within 10 cm of the true location. Next we estimated the object's position by servoing: first we created a light field model at the arm's current location; then we used Algorithm 3 to estimate the object's position; then we moved the arm to the estimated position and repeated. We performed five trials in each location, then moved the object to a new location, for a total of 25 trials per object. We take the mean location estimated over the five trials as the object's true location, and report the mean distance from this location as well as 95% confidence intervals. This test records the repeatability of the servoing and pose estimation; if we are performing accurate pose estimation, then the system should find the object at the same place each time. Results appear in Table II. Our results show that using POMs, the system can localize objects to within 2 mm. We observe more error on taller objects such as the mustard and the taller duplo structure, due to our assumption that all cells are at table height. Note that even on these objects, localization is accurate to within a centimeter, enough to pick reliably with many grippers; similarly, detection accuracy is also quite high. To assess the effect of correcting for z, we computed new models for the tall yellow square duplo using the two different approaches. We found that the error reduced to 0.0013 m ± 2.0×10⁻⁵ m using the maximum likelihood estimate and to 0.0019 m ± 1.9×10⁻⁵ m using the marginal estimate. Both methods demonstrate a significant improvement. The maximum likelihood estimate performs slightly better, perhaps because the sharper edges lead to more consistent performance. Computing z corrections takes significant time, so we do not use it for the rest of the evaluation.
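
The trial procedure just described can be sketched as a short loop; the robot motion and mapping calls below are placeholders, and infer_scene refers to the earlier sketch, so this is only an illustration of the structure (random perturbation followed by iterative localize-and-move), not the evaluation code.

```python
# Hypothetical servoing trial: perturb the arm within 10 cm of the object's true
# location, then repeatedly map, localize with the scene-inference sketch, and
# move over the estimated position; the returned value is the position error.
import numpy as np

def servo_to_object(robot, models, start_pose, iterations=3):
    pose = start_pose
    for _ in range(iterations):
        observed = robot.map_from(pose)                    # light field at the current arm pose
        instances = infer_scene(observed, robot.background, models,
                                robot.propose_best_object, robot.cell_size,
                                robot.discrepancy_threshold)
        if not instances:
            break
        pose = robot.pose_above(instances[0].pose)         # move over the estimated position
    return pose

def localization_trial(robot, models, true_pose, rng, offset_m=0.10):
    dx, dy = rng.uniform(-offset_m, offset_m, size=2)
    start = robot.pose_above(true_pose + np.array([dx, dy, 0.0]))
    final = servo_to_object(robot, models, start)
    return float(np.linalg.norm(final[:2] - true_pose[:2]))   # position error in metres
```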

[0056] Autonomous Classification and Grasp Model Acquisition

[0057] After using our autonomous process to map our test objects, we evaluated object classification and picking performance. Due to the processing time to infer z, we used z = table for this evaluation. The robot had to identify the object type, localize the object, and then grasp it. After each grasp, it placed the object in a random position and orientation. We report accuracy at labeling the object with the correct type along with its pick success rate over the ten trials in Table II. The robot discovered one configuration for most objects, but for the yellow square duplo it discovered a second configuration, shown in FIG. 5. We explore more deliberate elicitation of new object configurations by rotating the hand before dropping the object or by employing bimanual manipulation. We report detection and pick accuracy for the 10 objects.

[0058] Our results show 98% detection accuracy for these objects. The duplo yellow square was confused with the standard ICRA duckie, which is similarly colored and sized. Other errors were due to taller objects. The robot mapped the tall duplo structure in a lying-down position. It was unable to pick it when it was standing up to move it to its scanning workspace because of the error introduced by the height. After a few attempts it knocked it down; the lower height introduced less error, and it was able to pick and localize perfectly. The padlock is challenging because it is both heavy and reflective. Also, its smallest dimension just barely fits into the robot's gripper, meaning that very small amounts of position error can cause the grasp to fail. Overall, our system is able to pick this data set 84% of the time.

[0059] Our automatic process successfully mapped all objects except for the mustard. The mustard is a particularly challenging object due to its height and weight; therefore we manually created a model and annotated a grasp. Our initial experiments with this model resulted in 1/10 picks due to noise from its height and its weight; however, we were still able to use it for localization and detection. Next we created a new model using marginal z corrections and also performed z corrections at inference time. Additionally, we changed to a different gripper configuration more appropriate to this very large object. After these changes, we were able to pick the mustard 10/10 times.

[0060] Picking Robustly

[0061] We can pick a spoon 100 times in a row.

[0062] In summary, our approach enables a robot to adapt to the specific objects it finds and robustly pick them many times in a row without failures. This robustness and repeatability outperforms existing approaches in precision by trading off recall. Our approach uses light fields to create a probabilistic generative model for objects, enabling the robot to use all information from the camera to achieve this high precision. Additionally, learned models can be combined to form models for object categories that are metrically grounded and can be used to perform localization and grasp prediction.

[0063] We can scale up this system so that the robot can come equipped with a large database of object models. This system enables the robot to automatically detect and localize novel objects. If the robot cannot pick the first time, it will automatically add a new model to its database, enabling it to increase its precision and robustness. Additionally this new model will augment the database, improving performance on novel objects.

[0064] Extending the model to full 3D perception enables it to fuse different stable configurations. We can track objects continuously over time, creating a Bayes filtering approach to object tracking. This model takes into account object affordances and actions on objects, creating a full object-oriented MDP. Ultimately, objects like doors, elevators, and drawers can be modeled in an instance-based way, and then generalized to novel instances. This model is ideal for connecting to language because it factors the world into objects, just as people do when they talk about them.

[0065] It would be appreciated by those skilled in the art that various changes and modifications can be made to the illustrated embodiments without departing from the spirit of the present invention. All such modifications and changes are intended to be within the scope of the present invention except as limited by the scope of the appended claims.