

Title:
INSTANCE SEGMENTATION
Document Type and Number:
WIPO Patent Application WO/2019/099428
Kind Code:
A1
Abstract:
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for instance segmentation. In one aspect, a system generates: (i) data identifying one or more regions of the image, wherein an object is depicted in each region, (ii) for each region, a predicted type of object that is depicted in the region, and (iii) feature channels comprising a plurality of semantic channels and one or more direction channels. The system generates a region descriptor for each of the one or more regions, and provides the region descriptor for each of the one or more regions to a segmentation neural network that processes a region descriptor for a region to generate a predicted segmentation of the predicted type of object depicted in the region.

Inventors:
CHEN, Liang-Chieh (1600 Amphitheatre Parkway, Mountain View, California, 94043, US)
HERMANS, Alexander (1600 Amphitheatre Parkway, Mountain View, California, 94043, US)
PAPANDREOU, Georgios (1600 Amphitheatre Parkway, Mountain View, California, 94043, US)
SCHROFF, Gerhard Florian (1600 Amphitheatre Parkway, Mountain View, California, 94043, US)
WANG, Peng (1600 Amphitheatre Parkway, Mountain View, California, 94043, US)
ADAM, Hartwig (1600 Amphitheatre Parkway, Mountain View, California, 94043, US)
Application Number:
US2018/060880
Publication Date:
May 23, 2019
Filing Date:
November 14, 2018
Assignee:
GOOGLE LLC (1600 Amphitheatre Parkway, Mountain View, California, 94043, US)
International Classes:
G06N3/04; G06N3/08
Other References:
HE KAIMING ET AL: "Mask R-CNN", 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), IEEE, 22 October 2017 (2017-10-22), pages 2980 - 2988, XP033283165, DOI: 10.1109/ICCV.2017.322
LI YI ET AL: "Fully Convolutional Instance-Aware Semantic Segmentation", IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION. PROCEEDINGS, IEEE COMPUTER SOCIETY, US, 21 July 2017 (2017-07-21), pages 4438 - 4446, XP033249798, ISSN: 1063-6919, [retrieved on 20171106], DOI: 10.1109/CVPR.2017.472
LIANG-CHIEH CHEN ET AL: "MaskLab: Instance Segmentation by Refining Object Detection with Semantic and Direction Features", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 13 December 2017 (2017-12-13), XP080843437
UHRIG JONAS ET AL: "Pixel-Level Encoding and Depth Layering for Instance-Level Semantic Labeling", 27 August 2016, INTERNATIONAL CONFERENCE ON SIMULATION, MODELING, AND PROGRAMMING FOR AUTONOMOUS ROBOTS,SIMPAR 2010; [LECTURE NOTES IN COMPUTER SCIENCE; LECT.NOTES COMPUTER], SPRINGER, BERLIN, HEIDELBERG, PAGE(S) 14 - 25, ISBN: 978-3-642-17318-9, XP047353643
S. REN; K. HE; R. GIRSHICK; J. SUN: "Faster R-CNN: towards real-time object detection with region proposal networks", ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NIPS, 2015
Attorney, Agent or Firm:
TREILHARD, John et al. (FISH & RICHARDSON P.C, P.O. Box 1022Minneapolis, Minnesota, 55440-1022, US)
Claims:
CLAIMS

1. A computer-implemented method comprising:

providing an image as input to a descriptor neural network that processes the image to generate outputs including: (i) data identifying one or more regions of the image, wherein an object is depicted in each region, (ii) for each region, a predicted type of object that is depicted in the region, and (iii) feature channels comprising a plurality of semantic channels and one or more direction channels, wherein:

each semantic channel is associated with a particular type of object and defines, for each pixel of the image, a respective likelihood that the pixel is included in an object of the particular type; and

the direction channels characterize a predicted direction from each pixel of the image to a center of an object depicted in the image which includes the pixel;

generating a region descriptor for each of the one or more regions, including, for each region:

for at least one of the feature channels, extracting feature data corresponding to the region of the image from the feature channel;

resizing the extracted feature data to a pre-determined dimensionality; and

concatenating the resized feature data; and

providing the region descriptor for each of the one or more regions to a segmentation neural network that processes a region descriptor for a region to generate an output comprising a predicted segmentation of the predicted type of object depicted in the region.

2. The computer-implemented method of claim 1, further comprising:

generating a segmentation descriptor for each of the one or more regions, including, for each region:

extracting feature data corresponding to the region of the image from one or more intermediate outputs of the descriptor neural network;

resizing the extracted feature data from the one or more intermediate outputs of the descriptor neural network to a pre-determined dimensionality; and

concatenating the resized feature data from the one or more intermediate outputs of the descriptor neural network and the predicted segmentation of the predicted type of object depicted in the region; and

providing the segmentation descriptor for each of the one or more regions to a refining neural network, wherein the refining neural network processes a segmentation descriptor to generate a refined predicted segmentation of the predicted type of object depicted in the region.

3. The computer-implemented method of any one of claims 1-2, wherein generating a region descriptor for a region comprises extracting feature data corresponding to the region of the image from the semantic feature channel associated with the predicted type of object depicted in the region.

4. The computer-implemented method of any one of claims 1-3, wherein generating a region descriptor for a region further comprises pooling feature data corresponding to the region from one or more of the direction channels.

5. The computer-implemented method of claim 4, wherein pooling feature data corresponding to the region from one or more of the direction channels comprises:

partitioning the region into one or more sub-regions;

associating each of the sub-regions with a different direction channel; and

for each of the sub-regions, extracting feature data corresponding to the sub-region from the direction channel associated with the sub-region.

6. The computer-implemented method of any one of claims 1-5, further comprising:

determining that a pixel of the image is included in a predicted segmentation of a predicted type of object depicted in a region; and

associating the pixel of the input image with the predicted type of object.

7. The computer-implemented method of any one of claims 1-6, wherein the descriptor neural network and the segmentation neural network are jointly trained.

8. The computer-implemented method of claim 7, wherein jointly training the descriptor neural network and the segmentation neural network comprises: backpropagating gradients of a loss function to jointly train the descriptor neural network and the segmentation neural network to generate more accurate predicted segmentations.

9. The computer-implemented method of claim 8, further comprising backpropagating gradients of the loss function to train the descriptor neural network to more accurately generate: (i) data identifying regions depicting objects, (ii) predicted types of objects depicted in the regions, and (iii) feature channels comprising semantic channels and direction channels.

10. The computer-implemented method of any one of claims 7-9, further comprising backpropagating gradients of the loss function to jointly train the descriptor neural network, the segmentation neural network, and the refining neural network to generate more accurate predicted segmentations.

11. The computer-implemented method of any one of claims 1-10, wherein generating a predicted type of object that is depicted in a region of the image comprises:

dividing the region into multiple sub-regions;

generating, using an offset neural network, an offset for each sub-region;

extracting feature data from an intermediate output of the descriptor neural network which corresponds to each of multiple offset sub-regions;

resizing the extracted feature data from each of the multiple offset sub-regions; and

processing the resized feature data using one or more neural network layers of the descriptor neural network to generate a predicted type of object that is depicted in the region of the image.

12. The computer-implemented method of any one of claims 1-11, wherein the descriptor neural network is pre-trained based on a set of training data.

13. One or more computer storage media storing instructions that when executed by one or more computers cause the one or more computers to perform the operations of the method of any one of claims 1-12.

14. A system comprising one or more computers and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to perform the operations of the method of any one of claims 1-12.

Description:
INSTANCE SEGMENTATION

BACKGROUND

[0001] This specification relates to processing data using machine learning models.

[0002] Machine learning models receive an input and generate an output, e.g., a predicted output, based on the received input. Some machine learning models are parametric models and generate the output based on the received input and on values of the parameters of the model.

[0003] Some machine learning models are deep models that employ multiple layers of models to generate an output for a received input. For example, a deep neural network is a deep machine learning model that includes an output layer and one or more hidden layers that each apply a non-linear transformation to a received input to generate an output.

SUMMARY

[0004] This specification describes an instance segmentation system implemented as computer programs on one or more computers in one or more locations that jointly performs object detection and semantic segmentation in images.

[0005] According to a first aspect there is provided a computer-implemented method comprising: providing an image as input to a descriptor neural network that processes the image to generate outputs including: (i) data identifying one or more regions of the image, wherein an object is depicted in each region, (ii) for each region, a predicted type of object that is depicted in the region, and (iii) feature channels comprising a plurality of semantic channels and one or more direction channels, wherein: each semantic channel is associated with a particular type of object and defines, for each pixel of the image, a respective likelihood that the pixel is included in an object of the particular type; and the direction channels characterize a predicted direction from each pixel of the image to a center of an object depicted in the image which includes the pixel; generating a region descriptor for each of the one or more regions, including, for each region: for at least one of the feature channels, extracting feature data corresponding to the region of the image from the feature channel; resizing the extracted feature data to a pre-determined dimensionality; and concatenating the resized feature data; and providing the region descriptor for each of the one or more regions to a segmentation neural network that processes a region descriptor for a region to generate an output comprising a predicted segmentation of the predicted type of object depicted in the region.
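The crop, resize, and concatenate steps of region descriptor generation can be sketched in a few lines of NumPy. This is an illustrative toy, not the patented implementation: the nearest-neighbor resize, the 4×4 output size, and the hand-built channel list are all assumptions standing in for whatever crop-and-resize operation and pre-determined dimensionality a real system would use.

```python
import numpy as np

def crop_and_resize(channel, box, out_size):
    """Crop a single feature channel to box = (y0, x0, y1, x1) and
    resize the crop to (out_size, out_size) by nearest-neighbor
    sampling (a stand-in for bilinear crop-and-resize operators)."""
    y0, x0, y1, x1 = box
    crop = channel[y0:y1, x0:x1]
    ys = np.linspace(0, crop.shape[0] - 1, out_size).round().astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, out_size).round().astype(int)
    return crop[np.ix_(ys, xs)]

def region_descriptor(channels, box, out_size=4):
    """Build a region descriptor: crop each feature channel to the
    region, resize to a fixed dimensionality, and concatenate."""
    resized = [crop_and_resize(c, box, out_size) for c in channels]
    return np.stack(resized, axis=-1)   # shape (out_size, out_size, n_channels)

# Toy example: two 8x8 feature channels and one region.
channels = [np.arange(64, dtype=float).reshape(8, 8),
            np.ones((8, 8))]
desc = region_descriptor(channels, box=(2, 2, 6, 6), out_size=4)
```

The resulting descriptor has a fixed shape regardless of the region's original size, which is what allows a single segmentation network to consume descriptors from regions of varying dimensions.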

[0006] The method may further comprise: generating a segmentation descriptor for each of the one or more regions, including, for each region: extracting feature data corresponding to the region of the image from one or more intermediate outputs of the descriptor neural network; resizing the extracted feature data from the one or more intermediate outputs of the descriptor neural network to a pre-determined dimensionality; and concatenating the resized feature data from the one or more intermediate outputs of the descriptor neural network and the predicted segmentation of the predicted type of object depicted in the region; and providing the segmentation descriptor for each of the one or more regions to a refining neural network, wherein the refining neural network processes a segmentation descriptor to generate a refined predicted segmentation of the predicted type of object depicted in the region.

[0007] Generating a region descriptor for a region may comprise extracting feature data corresponding to the region of the image from the semantic feature channel associated with the predicted type of object depicted in the region. Generating a region descriptor for a region may further comprise pooling feature data corresponding to the region from one or more of the direction channels. Pooling feature data corresponding to the region from one or more of the direction channels may comprise: partitioning the region into one or more sub-regions; associating each of the sub-regions with a different direction channel; and for each of the sub-regions, extracting feature data corresponding to the sub-region from the direction channel associated with the sub-region.
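The partition-and-pool procedure for direction channels can be illustrated with a small NumPy sketch. The 2×2 grid and the row-major assignment of sub-regions to channels are illustrative assumptions; the text only requires that each sub-region draw its feature data from a different direction channel.

```python
import numpy as np

def pool_direction_features(direction_channels, box, grid=2):
    """Assemble a pooled map for a region: split the region's bounding
    box into a grid of sub-regions and fill each sub-region with the
    feature data of a different direction channel. Row-major channel
    assignment over the grid is an illustrative choice."""
    y0, x0, y1, x1 = box
    h, w = y1 - y0, x1 - x0
    pooled = np.zeros((h, w))
    k = 0
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            channel = direction_channels[k % len(direction_channels)]
            pooled[ys, xs] = channel[y0:y1, x0:x1][ys, xs]
            k += 1
    return pooled

# Four constant channels make the per-sub-region assembly visible.
chans = [np.full((8, 8), v) for v in (1.0, 2.0, 3.0, 4.0)]
pooled = pool_direction_features(chans, box=(0, 0, 4, 4), grid=2)
```

Each quadrant of the pooled map comes from a different direction channel, so the pooled features encode where pixels sit relative to an instance center.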

[0008] The method may further comprise: determining that a pixel of the image is included in a predicted segmentation of a predicted type of object depicted in a region; and associating the pixel of the input image with the predicted type of object. The descriptor neural network and the segmentation neural network may be jointly trained. Jointly training the descriptor neural network and the segmentation neural network may comprise: backpropagating gradients of a loss function to jointly train the descriptor neural network and the segmentation neural network to generate more accurate predicted segmentations. The method may further comprise backpropagating gradients of the loss function to train the descriptor neural network to more accurately generate: (i) data identifying regions depicting objects, (ii) predicted types of objects depicted in the regions, and (iii) feature channels comprising semantic channels and direction channels. The method may further comprise backpropagating gradients of the loss function to jointly train the descriptor neural network, the segmentation neural network, and the refining neural network to generate more accurate predicted segmentations.

[0009] Generating a predicted type of object that is depicted in a region of the image may comprise: dividing the region into multiple sub-regions; generating, using an offset neural network, an offset for each sub-region; extracting feature data from an intermediate output of the descriptor neural network which corresponds to each of multiple offset sub-regions; resizing the extracted feature data from each of the multiple offset sub-regions; and processing the resized feature data using one or more neural network layers of the descriptor neural network to generate a predicted type of object that is depicted in the region of the image. The descriptor neural network may be pre-trained based on a set of training data.
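The offset-based feature extraction described above might look as follows in a toy form. Here `offsets` is a hand-written stand-in for the output of the offset neural network, and integer offsets with boundary clipping are an assumed simplification of whatever continuous offsets a real network would predict.

```python
import numpy as np

def offset_crops(feature_map, box, offsets, grid=2):
    """Extract feature data from the sub-regions of `box` after
    shifting each sub-region by its predicted (dy, dx) offset.
    `offsets` lists one offset per sub-region in row-major order."""
    y0, x0, y1, x1 = box
    h, w = (y1 - y0) // grid, (x1 - x0) // grid
    crops = []
    for idx, (dy, dx) in enumerate(offsets):
        i, j = divmod(idx, grid)
        # Clip the shifted sub-region so it stays inside the feature map.
        sy = np.clip(y0 + i * h + dy, 0, feature_map.shape[0] - h)
        sx = np.clip(x0 + j * w + dx, 0, feature_map.shape[1] - w)
        crops.append(feature_map[sy:sy + h, sx:sx + w])
    return crops

fmap = np.arange(100, dtype=float).reshape(10, 10)
# Zero offsets recover the plain sub-regions; nonzero offsets shift them.
crops = offset_crops(fmap, box=(2, 2, 6, 6),
                     offsets=[(0, 0), (0, 1), (1, 0), (0, 0)])
```

Allowing each sub-region to shift independently lets the classifier gather evidence from slightly displaced locations, which is the motivation for predicting per-sub-region offsets rather than cropping a rigid grid.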

[0010] According to a second aspect there is provided a computer-implemented method which includes providing an image as input to a descriptor neural network that processes the image to generate corresponding outputs. The outputs include data identifying one or more regions of the image, where an object is depicted in each region, and for each region, a predicted type of object that is depicted in the region. The outputs further include one or more feature channels, wherein each feature channel is an output of the descriptor neural network, and each feature channel has a same dimensionality as a dimensionality of the input image. A region descriptor is generated for each of the one or more regions, including, for each region: (i) for at least one of the feature channels, extracting feature data corresponding to the region of the feature channel, (ii) resizing the extracted feature data to a pre-determined dimensionality, and (iii) concatenating the resized feature data. The region descriptor for each of the one or more regions is provided to a segmentation neural network that processes a region descriptor for a region to generate an output including a predicted instance segmentation of the predicted type of object depicted in the region.

[0011] In some implementations, the method further includes generating a segmentation descriptor for each of the one or more regions. This may include, for each region, extracting feature data corresponding to the region from one or more intermediate outputs of the descriptor neural network that have the same dimensionality as the dimensionality of the input image. The extracted feature data from the one or more intermediate outputs of the descriptor neural network is re-sized to the pre-determined dimensionality. The resized features from the one or more intermediate outputs of the descriptor neural network and the predicted instance segmentation of the predicted type of object depicted in the region are concatenated. The segmentation descriptor for each of the one or more regions are provided to a refining neural network, where the refining neural network processes a segmentation descriptor to generate a refined predicted instance segmentation of the predicted type of object depicted in the region corresponding to the segmentation descriptor.

[0012] In some implementations, the feature channels include one or more semantic feature channels, where each semantic feature channel is associated with a particular type of object.

[0013] In some implementations, generating a region descriptor for a region further includes extracting feature data corresponding to the region of a semantic feature channel associated with the predicted type of object depicted in the region.

[0014] In some implementations, generating a region descriptor for a region further includes pooling feature data corresponding to the region of one or more feature channels.

[0015] In some implementations, pooling feature data corresponding to the region of one or more feature channels includes partitioning the region into one or more sub-regions. Each of the sub-regions is associated with a different feature channel. For each of the sub-regions, feature data corresponding to the sub-region of the feature channel associated with the sub-region is extracted.

[0016] In some implementations, the method further includes determining that a pixel of the input image is in a predicted instance segmentation of a predicted type of object depicted in a region. The pixel of the input image is associated with the predicted type of object.

[0017] In some implementations, the method further includes backpropagating gradients based on a loss function to jointly train the descriptor neural network and the segmentation neural network to generate more accurate predicted instance segmentations.

[0018] In some implementations, backpropagating gradients based on the loss function further includes training the descriptor neural network to more accurately generate data identifying regions depicting objects and predicted types of objects depicted in the regions.

[0019] In some implementations, the method further includes backpropagating gradients based on a loss function to jointly train the descriptor neural network, the segmentation neural network, and the refining neural network to generate more accurate predicted instance segmentations.

[0020] In some implementations, extracting feature data corresponding to a region of a feature channel further includes dividing the region into multiple sub-regions. An offset neural network is used to generate an offset for each sub-region. Feature data corresponding to each of multiple offset sub-regions of the feature channel is extracted. The extracted feature data from each of the multiple offset sub-regions is resized.

[0021] In some implementations, the descriptor neural network is pre-trained based on a set of training data.

[0022] It will be appreciated that aspects can be implemented in any convenient form. For example, aspects may be implemented by appropriate computer programs which may be carried on appropriate carrier media which may be tangible carrier media (e.g. disks) or intangible carrier media (e.g. communications signals). Aspects may also be implemented using suitable apparatus which may take the form of programmable computers running computer programs arranged to implement the invention. Aspects can be combined such that features described in the context of one aspect may be implemented in another aspect.

[0023] Particular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages.

[0024] The instance segmentation system described in this specification can process an image to generate object detection data which defines: (i) regions in the image (e.g., specified by bounding boxes) that each depict an object instance, and (ii) a respective object type of the object instance depicted in each region. Thereafter, the system can generate a respective segmentation of the object instance depicted in each of the regions.

[0025] In general, a region of an image which the system determines to depict an object instance of a particular object type may also depict: (i) parts of one or more additional object instances of the same object type, (ii) parts or all of one or more additional object instances of different object types, or (iii) both. For example, a bounding box around a given person in an image may additionally contain the arm and shoulder of another person standing next to the given person in the image, a coffee cup (or other object) being held in the hand of the given person, or both.

[0026] The system described in this specification can generate a segmentation of an object instance depicted in a region which excludes: (i) any parts of additional object instances of the same type, and (ii) any additional object instances of different types, which are also depicted in the region. In particular, the system can process the image to generate semantic channels and direction channels. For each pixel of the image, the semantic channels characterize a respective likelihood that the pixel is included in an object of each of a predetermined number of possible object types, and the direction channels characterize a direction from the pixel to the center of an object which includes the pixel. The system can distinguish the object instance depicted in the region from any parts of additional object instances of the same type which are also depicted in the region using the direction channels. Moreover, the system can distinguish the object instance depicted in the region from any additional object instances of different types which are also depicted in the region using the semantic channels. Therefore, by generating object instance segmentations using semantic channels and direction channels, the system described in this specification can generate object instance segmentations which may be more accurate than object instance segmentations generated by conventional systems. This is a technical improvement in the field of image processing.
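The complementary roles of the two channel types can be illustrated on a toy one-dimensional example. The `toward_left_center` scores below are assumed to be precomputed from the direction channels; real systems derive such consistency evidence per pixel rather than from a hand-written array.

```python
import numpy as np

# Toy row of six pixels containing two adjacent "person" instances;
# only the left person should be segmented in the current region.
# semantic: likelihood that each pixel belongs to *some* person.
semantic = np.array([0.9, 0.9, 0.9, 0.9, 0.1, 0.1])
# toward_left_center: likelihood that each pixel's predicted direction
# points at the left person's center (assumed precomputed from the
# direction channels for this illustration).
toward_left_center = np.array([0.9, 0.8, 0.2, 0.1, 0.5, 0.5])

# Semantic evidence alone would merge both people; adding direction
# evidence isolates the left instance from its same-type neighbor.
left_person_mask = (semantic > 0.5) & (toward_left_center > 0.5)
```

Pixels 0-1 belong to the left person (person-like and pointing at its center), pixels 2-3 are a same-type neighbor excluded by the direction evidence, and pixels 4-5 are excluded by the semantic evidence.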

[0027] The details of one or more embodiments of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0028] FIG. 1 is a block diagram of an example instance segmentation system.

[0029] FIG. 2 is a flow diagram of an example process for processing an image to jointly perform object detection and semantic segmentation.

[0030] FIG. 3 is a flow diagram of an example process for generating a region descriptor for a region of the image defined by the object detection data.

[0031] FIG. 4 is a flow diagram of an example process for jointly training the segmentation neural network, the descriptor neural network, and optionally, the refining neural network.

[0032] Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

[0033] This specification describes an instance segmentation system implemented as computer programs on one or more computers in one or more locations. The system described in this specification is configured to jointly perform object detection and semantic segmentation by processing an image to generate an output which defines: (i) regions in the image that each depict an instance of a respective object, (ii) a respective object type of the object instance (e.g., vehicle, cat, person, and the like) depicted in each region, and (iii) a respective segmentation of the object instance depicted in each region. A segmentation of an object instance depicted in a region defines which pixels in the region are included in the object instance. In this specification, a region of an image refers to a contiguous subset of the image, for example, as defined by a bounding box in the image.

[0034] The system processes an image using a descriptor neural network to generate object detection data and feature channels corresponding to the image. The object detection data defines: (i) regions of the image that each depict an object instance, and (ii) a respective object type of the object instance depicted in each region. The feature channels include: (i) semantic channels, (ii) direction channels, or (iii) both. For each pixel of the image, the semantic channels characterize a respective likelihood that the pixel is included in an object of each of a predetermined number of possible object types, and the direction channels characterize a direction from the pixel to the center of an object which includes the pixel.

[0035] The system processes the object detection data and the feature channels to generate a respective region descriptor for each of the regions defined by the object detection data. For example, the system can generate the region descriptor for a region by extracting (e.g., cropping) feature data corresponding to the region from one or more of the feature channels (i.e., the semantic channels, the direction channels, or both). The system processes each region descriptor using a segmentation neural network to generate a segmentation of the object instance depicted in the region.

[0036] In general, a region of the image which the system determines to depict an object instance of a particular object type may also depict: (i) parts of one or more additional object instances of the same object type, (ii) parts or all of one or more additional object instances of different object types, or (iii) both. For example, a bounding box around a given person in an image may additionally contain the arm and shoulder of another person standing next to the given person in the image, a coffee cup (or other object) being held in the hand of the given person, or both. The segmentation neural network can process a region descriptor to generate a segmentation of the object instance depicted in the region which excludes: (i) any parts of additional object instances of the same type, and (ii) any additional object instances of different types, which are also depicted in the region. To distinguish the object instance depicted in the region from any parts of additional object instances of the same type which are also depicted in the region, the segmentation neural network can use the components of the region descriptor extracted from the direction channels. To distinguish the object instance depicted in the region from any additional object instances of different types which are also depicted in the region, the segmentation neural network can use the components of the region descriptor extracted from the semantic channels.

[0037] These features and other features are described in more detail below.

[0038] FIG. 1 is a block diagram of an example instance segmentation system 100. The instance segmentation system 100 is an example of a system implemented as computer programs on one or more computers in one or more locations in which the systems, components, and techniques described below are implemented.

[0039] The instance segmentation system 100 is configured to process an image 102 to generate object detection data 106 and object segmentation data 116. The object detection data 106 defines: (i) regions in the image 102 that each depict a respective object instance, and (ii) a respective object type of the object instance depicted in each region. The object segmentation data 116 defines a respective segmentation of the object instance depicted in each region. A segmentation of the object instance depicted in a region defines whether each pixel in the region is included in the object instance. The system 100 generates the object detection data 106 and the object segmentation data 116 using a descriptor neural network 104 and a segmentation neural network 114 that can be trained using end-to-end machine learning training techniques, as will be described in more detail below.

[0040] The image 102 can be represented as a two-dimensional (2D) array of pixels, where each pixel is represented as a vector of one or more values. For example, if the image 102 is a black-and-white image, then each pixel can be represented as an integer or floating point number (i.e., a vector with one component) representing the brightness of the pixel. As another example, if the image 102 is a red-green-blue (RGB) image, then each pixel can be represented as a vector with three integer or floating point components, which respectively represent the intensity of the red, green, and blue color of the pixel.

[0041] The system 100 processes the image 102 using the descriptor neural network 104 to generate the object detection data 106 and feature channels 108 (as will be described in more detail below). The descriptor neural network 104 is a convolutional neural network (i.e., includes one or more convolutional neural network layers), and can be implemented to embody any appropriate convolutional neural network architecture. In a particular example, the descriptor neural network 104 may include an input layer followed by a sequence of “shared” convolutional neural network layers. The output of the final shared convolutional neural network layer may be provided to a sequence of one or more additional neural network layers that are configured to generate the object detection data 106.
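The pixel representations described in paragraph [0040] correspond to the following array shapes; this is a minimal NumPy illustration with arbitrary example values.

```python
import numpy as np

# Black-and-white: one brightness value per pixel, shape (height, width).
gray = np.array([[0.0, 0.5],
                 [1.0, 0.25]])

# RGB: a three-component vector per pixel, shape (height, width, 3).
rgb = np.zeros((2, 2, 3))
rgb[0, 0] = [1.0, 0.0, 0.0]   # a pure-red pixel
```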

[0042] The additional neural network layers that are configured to generate the object detection data 106 may have an architecture derived from the object detection neural network that is described with reference to: S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks”, Advances in Neural Information Processing Systems (NIPS), 2015 (other appropriate neural network architectures may also be used, however). The output of the final shared convolutional neural network layers may be provided to a different sequence of one or more additional neural network layers to generate the feature channels 108.

[0043] The object detection data 106 can define regions in the image 102 that depict object instances by, for example, the coordinates of bounding boxes around the object instances depicted in the image 102. The object detection data 106 can define the object type of an object instance depicted in an image region by, for example, a vector of object type probabilities for the image region. The vector of object type probabilities can include a respective probability value for each of the predetermined number of possible object types. The system 100 may determine that the image region depicts an object instance of the object type which corresponds to the highest probability value in the vector of object type probabilities for the image region.

Examples of object types include vehicle, cat, person, and the like, as well as a “background” object type (i.e., for pixels included in the background of an image rather than in a specific object depicted in the image).
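The selection of the predicted object type from a vector of object type probabilities can be illustrated as follows; the object type names and probability values are assumptions for the sake of the example:

```python
import numpy as np

# Hypothetical object detection output for one region: a bounding box
# (x_min, y_min, x_max, y_max) and a vector of object type
# probabilities over a predetermined set of possible object types.
object_types = ["background", "vehicle", "cat", "person"]
bounding_box = (12, 30, 88, 120)
type_probabilities = np.array([0.05, 0.10, 0.05, 0.80])

# The region is determined to depict an object instance of the type
# corresponding to the highest probability value.
predicted_type = object_types[int(np.argmax(type_probabilities))]
print(predicted_type)  # person
```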

[0044] The feature channels 108 can include: (i) semantic channels, (ii) direction channels, or (iii) both. The semantic channels and the direction channels can be represented in any appropriate format. For each pixel of the image 102, the semantic channels characterize a respective likelihood that the pixel is included in an object of each of a predetermined number of possible object types, and the direction channels characterize a direction from the pixel to the center of an object which includes the pixel. The “center” of an object can refer to, for example, the centroid of all the pixels which are included in the object. As another example, the center of an object can refer to the center of a bounding box around the object. A few example representations of semantic channels and direction channels follow.

[0045] In some implementations, the feature channels 108 may include a respective semantic channel corresponding to each object type of the predetermined number of object types. Each semantic channel can be represented as a 2D array of likelihood values (e.g., probability values between 0 and 1) which each characterize a likelihood that a corresponding pixel (or pixels) of the image 102 is included in an object of the object type corresponding to the semantic channel.

[0046] In some implementations, the feature channels 108 may include a respective direction channel corresponding to each of a predetermined number of angle ranges (e.g., {[0, π/2), [π/2, π), [π, 3π/2), [3π/2, 2π)} radians). Each direction channel can be represented as a 2D array of likelihood values (e.g., probability values between 0 and 1). Each likelihood value characterizes a likelihood that the direction from a corresponding pixel (or pixels) of the image 102 to the center of a respective object instance which includes the pixel is in the angle range corresponding to the direction channel. Likelihood values corresponding to pixels of the image which are not included in an object instance can have any appropriate value.
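A minimal sketch of what target direction channels could look like for a single object, assuming four quadrant angle ranges and taking the object's center to be the centroid of its pixels (the mask shape and binary target values are illustrative assumptions):

```python
import numpy as np

# Four assumed angle ranges: [0, pi/2), [pi/2, pi), [pi, 3*pi/2), [3*pi/2, 2*pi).
num_bins = 4
height, width = 8, 8

# Binary mask of one object instance and its centroid ("center").
mask = np.zeros((height, width), dtype=bool)
mask[2:6, 3:7] = True
ys, xs = np.nonzero(mask)
center_y, center_x = ys.mean(), xs.mean()

# Angle from each object pixel to the center, mapped into [0, 2*pi),
# then binned into one of the four angle ranges.
angles = np.arctan2(center_y - ys, center_x - xs) % (2 * np.pi)
bins = (angles / (2 * np.pi / num_bins)).astype(int) % num_bins

# One channel per angle range: 1 where the pixel's direction to the
# center falls in that range, 0 elsewhere (values at non-object
# pixels are arbitrary, as noted above).
channels = np.zeros((num_bins, height, width))
channels[bins, ys, xs] = 1.0
print(channels.sum(axis=0)[mask].min())  # 1.0: each object pixel is in exactly one bin
```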

[0047] In some cases, the feature channels 108 may have the same dimensionality as the image 102, while in other cases, the feature channels 108 may have a different (e.g., smaller) dimensionality than the image 102. For example, the semantic channels and the direction channels included in the feature channels 108 may have a smaller dimensionality than the image 102, in which case each likelihood value in the semantic channels and direction channels may correspond to multiple pixels in the image 102.

[0048] The system 100 processes the object detection data 106 and the feature channels 108 using a region descriptor engine 110 to generate a respective region descriptor 112 for each region of the image 102 defined by the object detection data 106. The region descriptor engine 110 generates a region descriptor 112 for a region of the image 102 by extracting (e.g., cropping) feature data corresponding to the region from one or more of the feature channels 108 (i.e., one or more of the semantic channels and direction channels). The region descriptors 112 may each be represented as multi-dimensional arrays of numerical values. Generating a region descriptor for a region of the image 102 defined by the object detection data 106 is described further with reference to FIG. 3.

[0049] The system 100 processes each of the region descriptors 112 of the regions in the image 102 defined by the object detection data 106 using a segmentation neural network 114 to generate respective object segmentation data 116 for each of the regions. The object segmentation data 116 for a region defines a segmentation of the object instance depicted in the region (i.e., that has the object type specified for the region by the object detection data 106). For example, the object segmentation data 116 for a region may define a respective probability value (i.e., a number between 0 and 1) that each pixel in the region is included in the object instance depicted in the region. In this example, the system 100 may determine that those pixels in the region which have a probability value that satisfies (e.g., exceeds) a threshold are included in the object instance depicted in the region.
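The thresholding step can be sketched as follows; the probability values and the 0.5 threshold are assumptions for illustration:

```python
import numpy as np

# Hypothetical object segmentation data for a 3x3 region: a per-pixel
# probability that the pixel is included in the object instance.
probabilities = np.array([
    [0.1, 0.7, 0.9],
    [0.2, 0.8, 0.6],
    [0.0, 0.3, 0.5],
])

# Pixels whose probability satisfies (e.g., exceeds) the threshold
# are determined to be included in the object instance.
threshold = 0.5
mask = probabilities > threshold
print(int(mask.sum()))  # 4 pixels assigned to the object instance
```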

[0050] Optionally, the system 100 can generate refined object segmentation data 124 for each region of the image 102 defined by the object detection data 106. The refined object segmentation data 124 for a region of the image 102 defines a segmentation of the object instance depicted in the region which is potentially more accurate than the segmentation of the object instance defined by the object segmentation data 116.

[0051] To generate refined object segmentation data 124 for a region of the image 102 defined by the object detection data 106, the system 100 can use a segmentation descriptor engine 118 to generate a segmentation descriptor 120 for the region. The segmentation descriptor engine 118 can generate a segmentation descriptor 120 for a region by extracting (e.g., cropping) feature data corresponding to the region from one or more intermediate outputs of the descriptor neural network 104, and concatenating the extracted feature data with the object segmentation data 116 for the region. An intermediate output of the descriptor neural network refers to outputs generated by one or more hidden layers of the descriptor neural network, for example, the shared convolutional neural network layers of the descriptor neural network (as described earlier). The system 100 can process the segmentation descriptor 120 for a region of the image 102 using a refining neural network 122 to generate the refined object segmentation data 124 for the region of the image 102.

[0052] The segmentation neural network 114 and the refining neural network 122 can be implemented to embody any appropriate neural network architecture. For example, the segmentation neural network 114 and the refining neural network 122 may be convolutional neural networks which each include one or more respective convolutional neural network layers.

[0053] The system 100 can jointly train the segmentation neural network 114 and the descriptor neural network 104 based on a set of training data which includes multiple training examples. Each training example can include: (i) a training image, (ii) target object detection data, (iii) target feature channels, and (iv) target object segmentation data. The target object detection data, the target feature channels, and the target object segmentation data represent outputs that should be generated by the system 100 by processing the training image, and can be obtained, for example, by manual human annotation.

[0054] To train the segmentation neural network 114 and the descriptor neural network 104, the system 100 processes the training images to generate object detection data 106, feature channels 108, and object segmentation data 116 for the training images. The system 100 determines gradients of a loss function (e.g., using backpropagation) which compares: (i) the object detection data 106, (ii) the feature channels 108, and (iii) the object segmentation data 116 generated by the system 100 by processing the training images to the corresponding target data included in the training examples. The system 100 can adjust the parameter values of the segmentation neural network 114 and the descriptor neural network 104 using the gradients.

Over multiple training iterations, the system 100 can iteratively adjust the parameter values of the segmentation neural network 114 and the descriptor neural network 104 in the described manner until a training termination criterion is satisfied. In some cases, the system 100 can “pre-train” the descriptor neural network 104 by training the descriptor neural network 104 alone prior to jointly training the descriptor neural network 104 along with the segmentation neural network 114 and the refining neural network 122. An example process for jointly training the segmentation neural network 114, the descriptor neural network 104, and optionally, the refining neural network 122, is described with reference to FIG. 4.

[0055] In some implementations, the system 100 associates each pixel of the input image 102 with a label, where the possible labels include each of the predetermined number of object types (including the “background” type). For example, if the object segmentation data 116 defines that a given pixel of the image 102 is included in an object of a particular object type, then the system 100 may associate the pixel of the image 102 with the label of the particular object type. Associating a pixel of the image with a label may refer to, for example, storing data which links the pixel of the image with the label in a data store.

[0056] FIG. 2 is a flow diagram of an example process for processing an image to jointly perform object detection and semantic segmentation. For convenience, the process 200 will be described as being performed by a system of one or more computers located in one or more locations. For example, an instance segmentation system, e.g., the instance segmentation system 100 of FIG. 1, appropriately programmed in accordance with this specification, can perform the process 200.

[0057] The system receives an image (202). As described earlier, the image can be represented as a two-dimensional array of pixels, where each pixel is represented as a vector of one or more values. The image can be a black-and-white image, a color image (e.g., an RGB image), or any other kind of image.

[0058] The system processes the image using a descriptor neural network to generate object detection data and feature channels corresponding to the image (204). As described with reference to FIG. 1, the descriptor neural network can be implemented to embody any appropriate convolutional neural network architecture.

[0059] The object detection data defines: (i) regions in the image that each depict a respective object instance, and (ii) a respective object type of the object instance depicted in each region. The regions in the image that depict respective object instances can be represented, for example, by bounding boxes in the image around the respective objects. The object type of the object instance is an object type from a predetermined set of possible object types. Examples of possible object types include vehicle, cat, person, and the like (as well as a background type).

[0060] The feature channels can include: (i) semantic channels, (ii) direction channels, or (iii) both. The semantic channels and the direction channels can be represented in any appropriate format. For each pixel of the image, the semantic channels characterize a respective likelihood that the pixel is included in an object of each of a predetermined number of possible object types, and the direction channels characterize a direction from the pixel to the center of an object which includes the pixel. Examples of possible representations of the semantic channels and the direction channels are described with reference to FIG. 1.

[0061] In some cases, the system can implement a deformable cropping operation to generate data identifying the object type of an object instance depicted in a region of the image. In particular, the system can determine a partition (i.e., division) of the region of the image into multiple sub-regions, and crop portions of an intermediate output of the descriptor neural network corresponding to each of the sub-regions. The system can process the respective cropped data from the intermediate output of the descriptor neural network for each sub-region using an offset neural network to generate an off-set value for each sub-region. The off-set value for a sub-region may be represented as a pair of values representing an offset in the “x”-direction and an offset in the “y”-direction. The off-set values for the sub-regions define corresponding “off-set” sub-regions (i.e., by offsetting each sub-region by the off-set value). The system can subsequently crop and resize portions of the intermediate output of the descriptor neural network corresponding to each off-set sub-region, and process the cropped data using one or more neural network layers to generate data identifying the object type of the object instance depicted in the region.
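The deformable cropping operation can be sketched as below. The 2x2 partition, the feature map, and the fixed offset values (which stand in for the output of the offset neural network) are all illustrative assumptions:

```python
import numpy as np

# Stand-in for an intermediate output of the descriptor neural network.
feature_map = np.arange(100.0).reshape(10, 10)
region = (2, 2, 8, 8)  # (y0, x0, y1, x1), to be partitioned into a 2x2 grid

def split_into_subregions(y0, x0, y1, x1):
    # Partition the region into four equal sub-regions at its midpoints.
    my, mx = (y0 + y1) // 2, (x0 + x1) // 2
    return [(y0, x0, my, mx), (y0, mx, my, x1),
            (my, x0, y1, mx), (my, mx, y1, x1)]

# Hypothetical per-sub-region (dy, dx) off-set values; in the system
# these would be generated by the offset neural network.
offsets = [(0, 1), (1, 0), (-1, 0), (0, -1)]

crops = []
for (y0, x0, y1, x1), (dy, dx) in zip(split_into_subregions(*region), offsets):
    # Offset the sub-region, clamping to the feature map bounds, and
    # crop the corresponding portion of the intermediate output.
    y0o, y1o = np.clip([y0 + dy, y1 + dy], 0, 10)
    x0o, x1o = np.clip([x0 + dx, x1 + dx], 0, 10)
    crops.append(feature_map[y0o:y1o, x0o:x1o])

print([c.shape for c in crops])  # each off-set sub-region is 3x3
```

In the full system, the cropped data would then be resized and processed by further neural network layers to identify the object type.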

[0062] The system generates a region descriptor for each of the one or more regions defined by the object detection data (206). The system generates a region descriptor for a region of the image by extracting (e.g., cropping) feature data corresponding to the region from one or more of the feature channels (i.e., one or more of the semantic channels and direction channels). The region descriptors may each be represented as multi-dimensional arrays of numerical values. An example process for generating a region descriptor for a region of the image defined by the object detection data is described further with reference to FIG. 3.

[0063] For each region in the image defined by the object detection data, the system processes the region descriptor of the region using a segmentation neural network to generate respective object segmentation data for the region (208). The object segmentation data for a region defines a segmentation of the object instance depicted in the region. For example, the object segmentation data for a region may define a respective probability value (i.e., a number between 0 and 1) that each pixel in the region is included in the object instance depicted in the region. In this example, the system may determine that those pixels in the region which have a probability value that satisfies (e.g., exceeds) a threshold are included in the object instance depicted in the region. As described with reference to FIG. 1, the segmentation neural network can be implemented to embody any appropriate neural network architecture (e.g., a convolutional neural network architecture).

[0064] Optionally, the system generates a segmentation descriptor for each of the one or more regions defined by the object detection data (210). The system can generate a segmentation descriptor for a region by extracting (e.g., cropping) feature data corresponding to the region from one or more intermediate outputs of the descriptor neural network, and concatenating the extracted feature data with the object segmentation data for the region.

[0065] Optionally, for each region in the image defined by the object detection data, the system processes the segmentation descriptor of the region using a refining neural network to generate respective refined object segmentation data for the region (212). The refined object segmentation data for a region defines a “refined” (e.g., more accurate) segmentation of the object instance depicted in the region. For example, the refined object segmentation data for a region may define a respective probability value (i.e., a number between 0 and 1) that each pixel in the region is included in the object instance depicted in the region. The refining neural network may achieve a higher object segmentation accuracy than the segmentation neural network by exploiting additional information from the intermediate outputs of the descriptor neural network.

[0066] FIG. 3 is a flow diagram of an example process 300 for generating a region descriptor for a region of the image defined by the object detection data. For convenience, the process 300 will be described as being performed by a system of one or more computers located in one or more locations. For example, an instance segmentation system, e.g., the instance segmentation system 100 of FIG. 1, appropriately programmed in accordance with this specification, can perform the process 300.

[0067] The system extracts (e.g., crops) feature data corresponding to the region of the image from the semantic channels (302). As described earlier, the semantic channels may include a respective semantic channel corresponding to each object type of the predetermined number of object types. The system may crop feature data from the semantic channel corresponding to the object type of the object instance depicted in the region of the image (i.e., as defined by the object detection data), while refraining from cropping feature data from the other semantic channels. In a particular example, if the object detection data defines the object type of the object instance depicted in the region as “person”, the system may crop feature data corresponding to the region of the image from the semantic channel corresponding to the “person” object type.

[0068] As described earlier, each semantic channel can be represented as a 2D array of likelihood values. Each likelihood value in a semantic channel corresponds to one or more pixels of the image. For example, the 2D array of likelihood values representing a semantic channel may have the same dimensionality (i.e., same number of rows and columns) as the image, in which case each likelihood value in the semantic channel may correspond to exactly one pixel in the image. As another example, the 2D array of likelihood values representing the semantic channel may have a smaller dimensionality than the image (e.g., due to convolution operations performed in the descriptor neural network), in which case each likelihood value in the semantic channel may correspond to multiple pixels in the image. Irrespective of the dimensionality of the semantic channel relative to the image, cropping feature data corresponding to a region of the image from the semantic channel refers to extracting likelihood values from the semantic channel which correspond to the pixels included in the region of the image.

[0069] The system extracts (e.g., crops) feature data corresponding to the region of the image from the direction channels (304). For example, as described earlier, the direction channels may include a respective direction channel corresponding to each of a predetermined number of angle ranges (e.g., {[0, π/2), [π/2, π), [π, 3π/2), [3π/2, 2π)} radians). To extract feature data corresponding to the region of the image from the direction channels, the system may define a partition of the region of the image into a predetermined set of sub-regions and associate each of the sub-regions with a different direction channel. In a particular example, the system may define a partition of a region of the image defined by a bounding box into four sub-boxes, and associate each sub-box with a direction channel corresponding to a different angle range. For each direction channel, the system may extract feature data corresponding to the sub-region of the image associated with the direction channel. The described operations to extract feature data from the direction channels may be referred to as “pooling” operations.
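The direction-channel “pooling” operation can be sketched as below; the channel values, bounding box, and the particular channel-to-sub-box association are illustrative assumptions:

```python
import numpy as np

# Four direction channels (one per angle range), as random stand-ins.
rng = np.random.default_rng(1)
direction_channels = rng.random((4, 12, 12))

# Bounding box for one region, partitioned into four sub-boxes at the
# midpoints of its sides.
y0, x0, y1, x1 = 2, 2, 10, 10
my, mx = (y0 + y1) // 2, (x0 + x1) // 2
sub_boxes = [(y0, x0, my, mx), (y0, mx, my, x1),
             (my, x0, y1, mx), (my, mx, y1, x1)]

# Assemble the extracted feature data: sub-box k is cropped from
# direction channel k (the association of each sub-box with a
# particular angle range is assumed for illustration).
pooled = np.zeros((y1 - y0, x1 - x0))
for k, (a, b, c, d) in enumerate(sub_boxes):
    pooled[a - y0:c - y0, b - x0:d - x0] = direction_channels[k, a:c, b:d]

print(pooled.shape)  # (8, 8)
```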

[0070] The system resizes the feature data extracted from the semantic channels and the direction channels to a predetermined dimensionality and concatenates the resized feature data (306). Resizing a set of data to a predetermined dimensionality refers to determining a representation of the data that has the predetermined dimensionality. For example, the system can resize the extracted feature data from a feature channel to the predetermined dimensionality using average- or max-pooling operations. In this example, the system can partition the extracted feature data from the feature channel into a grid of a predetermined number of sub-windows, and then average- or max-pool the values in each sub-window into a corresponding position in the resized representation of the extracted feature data. By resizing the extracted feature data from each feature channel to the predetermined dimensionality and concatenating the resized feature data from each feature channel, the system can generate region descriptors which have a consistent dimensionality regardless of the size and shape of the corresponding image regions. The segmentation neural network is configured to process region descriptors of the consistent dimensionality to generate the object segmentation data corresponding to the regions of the image.
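The resizing step can be sketched with average pooling over a grid of sub-windows. The sketch below assumes, for simplicity, that the extracted feature data tiles exactly into the sub-window grid; a real system would also handle non-divisible sizes (e.g., by interpolation):

```python
import numpy as np

def resize_by_average_pooling(features, out_h, out_w):
    # Partition the 2D feature data into an out_h x out_w grid of
    # sub-windows and average-pool each sub-window into one value.
    h, w = features.shape
    assert h % out_h == 0 and w % out_w == 0  # simplifying assumption
    return features.reshape(out_h, h // out_h,
                            out_w, w // out_w).mean(axis=(1, 3))

features = np.arange(16.0).reshape(4, 4)
resized = resize_by_average_pooling(features, 2, 2)
print(resized)  # [[ 2.5  4.5]
                #  [10.5 12.5]]
```

Max-pooling would replace `.mean(axis=(1, 3))` with `.max(axis=(1, 3))`.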

[0071] FIG. 4 is a flow diagram of an example process for jointly training the segmentation neural network, the descriptor neural network, and optionally, the refining neural network. For convenience, the process 400 will be described as being performed by a system of one or more computers located in one or more locations. For example, an instance segmentation system, e.g., the instance segmentation system 100 of FIG. 1, appropriately programmed in accordance with this specification, can perform the process 400.

[0072] The system obtains training examples from a set of training data which includes multiple training examples (402). Each training example can include: (i) a training image, (ii) target object detection data, (iii) target feature channels (i.e., semantic and direction channels), and (iv) target object segmentation data. The target object detection data, the target feature channels, and the target object segmentation data represent outputs that should be generated by the system by processing the training image, and can be obtained, for example, by manual human annotation. The system can obtain the training examples by randomly sampling a predetermined number of training examples from the training data.

[0073] The system processes the training images using the descriptor neural network and the segmentation neural network, in accordance with the current values of the descriptor neural network parameters and the segmentation neural network parameters, to generate respective object detection data, feature channels, and object segmentation data for the training images (404). Optionally, the system can process the object segmentation data generated for the training images using the refining neural network, in accordance with the current values of the refining neural network parameters, to generate refined object segmentation data for the training images.

[0074] The system backpropagates gradients of a loss function to adjust the parameters of the descriptor neural network, the segmentation neural network, and optionally, the refining neural network, to cause the system to generate more accurate object detection data, feature channels, object segmentation data, and optionally, refined object segmentation data (406).

[0075] The loss function may include terms which compare the object detection data generated by the system by processing the training images to the target object detection data specified by the training examples. For example, the loss function may include a smooth L1 loss term or squared-difference loss term which compares the coordinates of object instance bounding boxes defined by the object detection data and target coordinates of object instance bounding boxes defined by the target object detection data. As another example, the loss function may include a cross-entropy loss term which compares the object types defined by the object detection data and target object types defined by the target object detection data.

[0076] The loss function may include terms which compare the feature channels generated by the system by processing the training images to the target feature channels specified by the training examples. For example, the loss function may include a pixel-wise cross-entropy loss term which compares the semantic channels and the direction channels generated by the system to target semantic channels and target direction channels.

[0077] The loss function may include terms which compare the object segmentation data (and optionally, refined object segmentation data) generated by the system by processing the training images to the target object segmentation data specified by the training examples. For example, the loss function may include a pixel-wise cross-entropy loss term which compares the object segmentation data (and optionally, the refined object segmentation data) to the target object segmentation data.
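Minimal sketches of the two kinds of loss term mentioned above (a smooth L1, i.e. Huber-style, term for bounding-box coordinates and a cross-entropy term for class probabilities); the coordinate and probability values are assumptions for illustration:

```python
import numpy as np

def smooth_l1(pred, target):
    # Smooth L1: quadratic for small errors, linear for large ones.
    d = np.abs(pred - target)
    return float(np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum())

def cross_entropy(probs, target_index):
    # Negative log-likelihood of the target class.
    return float(-np.log(probs[target_index]))

# Hypothetical predicted vs. target bounding-box coordinates.
box_loss = smooth_l1(np.array([10.0, 20.0, 50.0, 80.0]),
                     np.array([10.5, 20.0, 52.0, 80.0]))
# Hypothetical predicted class probabilities; target class index 2.
class_loss = cross_entropy(np.array([0.1, 0.2, 0.7]), target_index=2)

print(round(box_loss, 3), round(class_loss, 3))  # 1.625 0.357
```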

[0078] The system can determine the gradient of the loss function with respect to the current parameter values of the descriptor neural network, the segmentation neural network, and optionally, the refining neural network, using any appropriate technique (e.g., backpropagation). After computing the gradient of the loss function, the training system can adjust the current parameter values of the descriptor neural network, the segmentation neural network, and optionally, the refining neural network using any appropriate gradient descent optimization algorithm update rule. Examples of gradient descent optimization algorithms include Adam, RMSprop, Adagrad, Adadelta, and AdaMax, amongst others.

[0079] After adjusting the current values of the neural network parameters based on the training examples, the system can determine if a training termination criterion is satisfied. For example, the system may determine the training termination criterion is satisfied if a predetermined number of iterations of the process 400 have been completed or if the magnitude of the gradient is below a predetermined threshold. In response to determining that the training termination criterion is met, the system can output the trained parameter values of the descriptor neural network, the segmentation neural network, and optionally, the refining neural network. In response to determining that the training termination criterion is not met, the system can return to step 402 and repeat the preceding steps.

[0080] This specification uses the term “configured” in connection with systems and computer program components. For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions. For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions.

[0081] Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.

[0082] The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

[0083] A computer program, which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages; and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network.

[0084] In this specification the term “engine” is used broadly to refer to a software-based system, subsystem, or process that is programmed to perform one or more specific functions. Generally, an engine will be implemented as one or more software modules or components, installed on one or more computers in one or more locations. In some cases, one or more computers will be dedicated to a particular engine; in other cases, multiple engines can be installed and running on the same computer or computers.

[0085] The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers.

[0086] Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.

[0087] Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

[0088] To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user’s device in response to requests received from the web browser. Also, a computer can interact with a user by sending text messages or other forms of message to a personal device, e.g., a smartphone that is running a messaging application, and receiving responsive messages from the user in return.

[0089] Data processing apparatus for implementing machine learning models can also include, for example, special-purpose hardware accelerator units for processing common and compute intensive parts of machine learning training or production, i.e., inference, workloads.

[0090] Machine learning models can be implemented and deployed using a machine learning framework, e.g., a TensorFlow framework, a Microsoft Cognitive Toolkit framework, an Apache Singa framework, or an Apache MXNet framework.
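Paragraph [0090] names machine learning frameworks only by example. As a purely illustrative, framework-independent sketch (the model shape, weight values, and function names below are assumptions for illustration, not part of this specification), the basic forward-pass computation that such frameworks automate can be expressed as:

```python
# Illustrative sketch only: forward pass of a one-layer linear classifier,
# y = softmax(W x + b). Frameworks such as TensorFlow or Apache MXNet provide
# this computation (plus gradients, accelerators, and deployment) out of the box.
import math

def forward(weights, bias, x):
    """Compute class scores for input vector x, then softmax to probabilities."""
    logits = [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
              for row, b in zip(weights, bias)]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Tiny 2-class model over 3 input features (all values are made up).
W = [[0.5, -0.2, 0.1],
     [-0.3, 0.8, 0.0]]
b = [0.0, 0.1]
probs = forward(W, b, [1.0, 2.0, 3.0])
print(probs)  # two class probabilities summing to 1
```

A framework replaces this hand-written arithmetic with optimized, differentiable operations that run on the accelerator hardware described in paragraph [0089].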

[0091] Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface, a web browser, or an app through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.

[0092] The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data, e.g., an HTML page, to a user device, e.g., for purposes of displaying data to and receiving user input from a user interacting with the device, which acts as a client. Data generated at the user device, e.g., a result of the user interaction, can be received at the server from the device.

[0093] While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment.

Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially be claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

[0094] Similarly, while operations are depicted in the drawings and recited in the claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

[0095] Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.