

Title:
METHOD, APPARATUS AND SYSTEM FOR IDENTIFYING TARGET OBJECTS
Document Type and Number:
WIPO Patent Application WO/2022/029478
Kind Code:
A1
Abstract:
The present disclosure provides a method, apparatus and system for identifying target objects. The method includes: clipping out a target image from an acquired image, where the target image involves a plurality of target objects to be identified that are stacked; adjusting a height of the target image to a preset height, where a height direction of the target image corresponds to a direction in which the plurality of target objects are stacked; extracting a feature map of the adjusted target image; segmenting the feature map in a dimension corresponding to the height direction of the target image to obtain a preset number of segment features; and identifying the target objects based on each of the preset number of segment features.

Inventors:
WU JIN (SG)
CHEN KAIGE (SG)
YI SHUAI (SG)
Application Number:
PCT/IB2020/060203
Publication Date:
February 10, 2022
Filing Date:
October 30, 2020
Assignee:
SENSETIME INT PTE LTD (SG)
International Classes:
G06K9/62; G06N3/02; G06N3/08; G06N20/00; G06T7/10
Foreign References:
CN111062237A2020-04-24
CN111310751A2020-06-19
CN109344832A2019-02-15
US20160358337A12016-12-08
Claims:
CLAIMS

1. A method for identifying target objects, comprising: clipping out a target image from an acquired image, wherein the target image involves a plurality of target objects to be identified that are stacked; adjusting a height of the target image to a preset height, wherein a height direction of the target image corresponds to a direction in which the plurality of target objects to be identified are stacked; extracting a feature map of the adjusted target image; segmenting the feature map in a dimension corresponding to the height direction of the target image to obtain a preset number of segment features; and identifying the target objects based on each of the preset number of segment features.

2. The method according to claim 1, wherein adjusting the height of the target image to the preset height comprises: scaling the height and a width of the target image in equal proportions until a width of the scaled target image reaches a preset width; and in a case that a height of the scaled target image is greater than the preset height, reducing the height and width of the scaled target image by equal proportions until a height of the reduced target image is equal to the preset height.

3. The method according to claim 1, wherein adjusting the height of the target image to the preset height comprises: scaling the height and a width of the target image in equal proportions until a width of the scaled target image reaches a preset width; and in a case that a height of the scaled target image is less than the preset height, filling the scaled target image with first pixels, so that a height of the filled target image is equal to the preset height.

4. The method according to claim 1, wherein each of the target objects to be identified in the target image is sheetlike and has an equal thickness, and the plurality of target objects to be identified are stacked in a direction of the thickness; and the preset height is an integer multiple of the thickness.

5. The method according to any one of claims 1 to 4, wherein both extracting the feature map and identifying the target objects are performed by a neural network which is trained from a sample image and labeled information thereof.

6. The method according to claim 5, wherein the labeled information of the sample image comprises a labeled category of each target object in the sample image; and the neural network is trained by: extracting features from a resized sample image to obtain a feature map of the resized sample image; identifying target objects in the sample image based on each segment feature that is obtained by segmenting the feature map, to obtain a predicted category of each of the target objects in the sample image; and adjusting a value of a parameter for the neural network based on the predicted category of each target object in the sample image and the labeled category of each target object in the sample image.

7. The method according to claim 6, wherein the labeled information of the sample image further comprises a number of target objects of each labeled category; and adjusting the value of the parameter for the neural network comprises: adjusting the value of the parameter for the neural network based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, the number of target objects of each labeled category in the sample image, and a number of target objects of each predicted category in the sample image.

8. The method according to claim 6, wherein the labeled information of the sample image further comprises a total number of target objects in the sample image; and adjusting the value of the parameter for the neural network comprises: adjusting the value of the parameter for the neural network based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, a sum of numbers of target objects of respective predicted categories in the sample image, and the total number of target objects in the sample image.

9. The method according to claim 5, further comprising: testing the trained neural network; ranking an identification accuracy for each category of target objects identified by the neural network based on a result of the testing, to obtain an identification accuracy rank result; ranking an identification error rate for each category of target objects identified by the neural network based on the result of the testing, to obtain an identification error rate rank result; and further training the neural network based on the identification accuracy rank result and the identification error rate rank result.

10. An apparatus for identifying target objects, comprising: an obtaining unit, configured to clip out a target image from an acquired image, wherein the target image involves a plurality of target objects to be identified that are stacked; an adjusting unit, configured to adjust a height of the target image to a preset height, wherein a height direction of the target image corresponds to a direction in which the plurality of target objects to be identified are stacked; an extracting unit, configured to extract a feature map of the adjusted target image; a segmenting unit, configured to segment the feature map in a dimension corresponding to the height direction of the target image to obtain a preset number of segment features; and an identifying unit, configured to identify the target objects based on each of the preset number of segment features.

11. The apparatus according to claim 10, wherein the adjusting unit is configured to: scale the height and a width of the target image in equal proportions until a width of the scaled target image reaches a preset width; and in a case that a height of the scaled target image is greater than the preset height, reduce the height and width of the scaled target image by equal proportions until a height of the reduced target image is equal to the preset height.

12. The apparatus according to claim 10, wherein the adjusting unit is configured to: scale the height and a width of the target image in equal proportions until a width of the scaled target image reaches a preset width; and in a case that a height of the scaled target image is less than the preset height, fill the scaled target image with first pixels, so that a height of the filled target image is equal to the preset height.

13. The apparatus according to claim 10, wherein each of the target objects to be identified in the target image is sheetlike and has an equal thickness, and the plurality of target objects to be identified are stacked in a direction of the thickness; and the preset height is an integer multiple of the thickness.

14. The apparatus according to any one of claims 10 to 13, wherein both extracting the feature map and identifying the target objects are performed by a neural network which is trained from a sample image and labeled information thereof.

15. The apparatus according to claim 14, wherein the labeled information of the sample image comprises a labeled category of each target object in the sample image; and the apparatus further comprises a training unit configured to train the neural network by: extracting features from a resized sample image to obtain a feature map of the resized sample image; identifying target objects in the sample image based on each segment feature that is obtained by segmenting the feature map, to obtain a predicted category of each of the target objects in the sample image; and adjusting a value of a parameter for the neural network based on the predicted category of each target object in the sample image and the labeled category of each target object in the sample image.

16. The apparatus according to claim 15, wherein the labeled information of the sample image further comprises a number of target objects of each labeled category; and the training unit is configured to: adjust the value of the parameter for the neural network based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, the number of target objects of each labeled category in the sample image, and a number of target objects of each predicted category in the sample image.

17. The apparatus according to claim 15, wherein the labeled information of the sample image further comprises a total number of target objects in the sample image; and the training unit is configured to: adjust the value of the parameter for the neural network based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, a sum of numbers of target objects of respective predicted categories in the sample image, and the total number of target objects in the sample image.

18. The apparatus according to claim 14, further comprising a testing unit, configured to: test the trained neural network; rank an identification accuracy for each category of target objects identified by the neural network based on a result of the testing, to obtain an identification accuracy rank result; rank an identification error rate for each category of target objects identified by the neural network based on the result of the testing, to obtain an identification error rate rank result; and further train the neural network based on the identification accuracy rank result and the identification error rate rank result.

19. An electronic device, comprising: a processor; and a memory storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the method according to any one of claims 1 to 9.

20. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, cause the processor to implement the method according to any one of claims 1 to 9.

Description:
METHOD, APPARATUS AND SYSTEM FOR IDENTIFYING TARGET OBJECTS CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] The present application claims priority to the Singaporean patent application No. 10202007347V filed on August 1, 2020, which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

[0002] The present disclosure relates to the field of computer vision technology, and in particular, to a method, apparatus and system for identifying target objects.

BACKGROUND

[0003] In daily production and life, it is often necessary to identify certain target objects. Taking an entertainment scene of board games as an example, in some board games, game coins on a tabletop need to be identified so as to obtain category and number information of the game coins. However, conventional identification methods have low identification accuracy.

SUMMARY

[0004] According to an aspect of the present disclosure, provided is a method for identifying target objects, including: clipping out a target image from an acquired image, where the target image involves a plurality of target objects to be identified that are stacked; adjusting a height of the target image to a preset height, where a height direction of the target image corresponds to a direction in which the plurality of target objects to be identified are stacked; extracting a feature map of the adjusted target image; segmenting the feature map in a dimension corresponding to the height direction of the target image to obtain a preset number of segment features; and identifying the target objects based on each of the preset number of segment features.

[0005] In combination with any of the implementations provided by the present disclosure, adjusting the height of the target image to the preset height includes: scaling the height and a width of the target image in equal proportions until a width of the scaled target image reaches a preset width; and in a case that a height of the scaled target image is greater than the preset height, reducing the height and width of the scaled target image by equal proportions until a height of the reduced target image is equal to the preset height.

[0006] In combination with any of the implementations provided by the present disclosure, adjusting the height of the target image to the preset height includes: scaling the height and a width of the target image in equal proportions until a width of the scaled target image reaches a preset width; and in a case that a height of the scaled target image is less than the preset height, filling the scaled target image with first pixels, so that a height of the filled target image is equal to the preset height.

[0007] In combination with any of the implementations provided by the present disclosure, each of the target objects to be identified in the target image is sheetlike and has an equal thickness, and the plurality of target objects to be identified are stacked in a direction of the thickness; and the preset height is an integer multiple of the thickness.

[0008] In combination with any of the implementations provided by the present disclosure, both extracting the feature map and identifying the target objects are performed by a neural network which is trained from a sample image and labeled information thereof.

[0009] In combination with any of the implementations provided by the present disclosure, the labeled information of the sample image comprises a labeled category of each target object in the sample image; and the neural network is trained by: extracting features from a resized sample image to obtain a feature map of the resized sample image; identifying target objects in the sample image based on each segment feature that is obtained by segmenting the feature map, to obtain a predicted category of each of the target objects in the sample image; and adjusting a value of a parameter for the neural network based on the predicted category of each target object in the sample image and the labeled category of each target object in the sample image.

[0010] In combination with any of the implementations provided by the present disclosure, the labeled information of the sample image further comprises a number of target objects of each labeled category; and adjusting the value of the parameter for the neural network comprises: adjusting the value of the parameter for the neural network based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, the number of target objects of each labeled category in the sample image, and a number of target objects of each predicted category in the sample image.

[0011] In combination with any of the implementations provided by the present disclosure, the labeled information of the sample image further comprises a total number of target objects in the sample image; and adjusting the value of the parameter for the neural network comprises: adjusting the value of the parameter for the neural network based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, a sum of numbers of target objects of respective predicted categories in the sample image, and the total number of target objects in the sample image.

[0012] In combination with any of the implementations provided by the present disclosure, the method further includes: testing the trained neural network; ranking an identification accuracy for each category of target objects identified by the neural network based on a result of the testing, to obtain an identification accuracy rank result; ranking an identification error rate for each category of target objects identified by the neural network based on the result of the testing, to obtain an identification error rate rank result; and further training the neural network based on the identification accuracy rank result and the identification error rate rank result.

[0013] According to an aspect of the present disclosure, provided is an apparatus for identifying target objects, including: an obtaining unit, configured to clip out a target image from an acquired image, where the target image involves a plurality of target objects to be identified that are stacked; an adjusting unit, configured to adjust a height of the target image to a preset height, wherein a height direction of the target image corresponds to a direction in which the plurality of target objects to be identified are stacked; an extracting unit, configured to extract a feature map of the adjusted target image; a segmenting unit, configured to segment the feature map in a dimension corresponding to the height direction of the target image to obtain a preset number of segment features; and an identifying unit, configured to identify the target objects based on each of the preset number of segment features.

[0014] In combination with any of the implementations provided by the present disclosure, the adjusting unit is configured to: scale the height and a width of the target image in equal proportions until a width of the scaled target image reaches a preset width; and in a case that a height of the scaled target image is greater than the preset height, reduce the height and width of the scaled target image by equal proportions until a height of the reduced target image is equal to the preset height.

[0015] In combination with any of the implementations provided by the present disclosure, the adjusting unit is configured to: scale the height and a width of the target image in equal proportions until a width of the scaled target image reaches a preset width; and in a case that a height of the scaled target image is less than the preset height, fill the scaled target image with first pixels, so that a height of the filled target image is equal to the preset height.

[0016] In combination with any of the implementations provided by the present disclosure, each of the target objects to be identified in the target image is sheetlike and has an equal thickness, and the plurality of target objects to be identified are stacked in a direction of the thickness; and the preset height is an integer multiple of the thickness.

[0017] In combination with any of the implementations provided by the present disclosure, both extracting the feature map and identifying the target objects are performed by a neural network which is trained from a sample image and labeled information thereof.

[0018] In combination with any of the implementations provided by the present disclosure, the labeled information of the sample image comprises a labeled category of each target object in the sample image; and the apparatus further comprises a training unit configured to train the neural network by: extracting features from a resized sample image to obtain a feature map of the resized sample image; identifying target objects in the sample image based on each segment feature that is obtained by segmenting the feature map, to obtain a predicted category of each of the target objects in the sample image; and adjusting a value of a parameter for the neural network based on the predicted category of each target object in the sample image and the labeled category of each target object in the sample image.

[0019] In combination with any of the implementations provided by the present disclosure, the labeled information of the sample image further comprises a number of target objects of each labeled category; and the training unit is configured to: adjust the value of the parameter for the neural network based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, the number of target objects of each labeled category in the sample image, and a number of target objects of each predicted category in the sample image.

[0020] In combination with any of the implementations provided by the present disclosure, the labeled information of the sample image further comprises a total number of target objects in the sample image; and the training unit is specifically configured to: adjust the value of the parameter for the neural network based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, a sum of numbers of target objects of respective predicted categories in the sample image, and the total number of target objects in the sample image.

[0021] In combination with any of the implementations provided by the present disclosure, the apparatus further includes a testing unit, configured to: test the trained neural network; rank an identification accuracy for each category of target objects identified by the neural network based on a result of the testing, to obtain an identification accuracy rank result; rank an identification error rate for each category of target objects identified by the neural network based on the result of the testing, to obtain an identification error rate rank result; and further train the neural network based on the identification accuracy rank result and the identification error rate rank result.

[0022] According to an aspect of the present disclosure, provided is an electronic device, including a processor; and a memory storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the method according to any of the implementations of the present disclosure.

[0023] According to an aspect of the present disclosure, provided is a computer-readable storage medium having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, cause the processor to implement the method according to any of the implementations of the present disclosure.

[0024] According to the method and apparatus for identifying target objects, the electronic device, and the storage medium provided by one or more embodiments of the present disclosure, the height of the target image clipped out from the acquired image is adjusted to the preset height; a feature map of the adjusted target image is extracted; the feature map is segmented in the dimension corresponding to the height direction of the target image to obtain a preset number of segment features; and the target objects are identified based on each of the preset number of segment features. Because each segment feature obtained by the segmenting corresponds to the feature map of one target object, the number of target objects does not affect the accuracy of identifying the target objects based on the segment features, thereby improving the identification accuracy for the target objects.

[0025] It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and are not intended to limit the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0026] The accompanying drawings here, which are incorporated in and constitute a part of the specification, illustrate embodiments of the present disclosure and are used to explain technical solutions of the present disclosure together with the specification.

[0027] FIG. 1 is a flowchart of a method for identifying target objects according to at least one embodiment of the present disclosure;

[0028] FIG. 2A is a schematic diagram of a plurality of target objects which are stacked in a stand mode in the method for identifying target objects according to at least one embodiment of the present disclosure;

[0029] FIG. 2B is a schematic diagram of a plurality of target objects which are stacked in a float mode in the method for identifying target objects according to at least one embodiment of the present disclosure;

[0030] FIG. 3 is a schematic block diagram of an apparatus for identifying target objects according to at least one embodiment of the present disclosure; and

[0031] FIG. 4 is a schematic block diagram of an electronic device according to at least one embodiment of the present disclosure.

DETAILED DESCRIPTION OF THE EMBODIMENTS

[0032] To make persons skilled in the art better understand the present disclosure, some embodiments thereof will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some of the possible embodiments of the present disclosure. All other embodiments, which may be obtained by a person of ordinary skill in the art based on one or more embodiments of the present disclosure without any inventive effort, fall within the scope of the present disclosure.

[0033] The terms used in the present disclosure are for the purpose of describing particular embodiments only and are not intended to limit the present disclosure. The singular forms "a/an", "said", and "the" used in the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates other meanings. It should also be understood that the term "and/or" used herein refers to and includes any or all possible combinations of one or more associated listed items. In addition, the term "at least one" herein indicates any one of a plurality or any combination of at least two of the plurality.

[0034] It should be understood that although the terms such as first, second, and third may be used in the present disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish the same type of information from one another. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, the second information may also be referred to as the first information. Depending on the context, the word "if" as used herein may be interpreted as "at the time of", "when", or "in response to determining".

[0035] FIG. 1 is a flowchart of a method for identifying target objects according to at least one embodiment of the present disclosure. As shown in FIG. 1, the method may include steps 101-105.

[0036] At step 101, a target image is clipped out from an acquired image, where the target image involves a plurality of target objects to be identified that are stacked.

[0037] In some common situations, each of the target objects to be identified is sheetlike, may take various shapes (for example, game coins), and usually has the same thickness (height). The plurality of target objects to be identified are usually stacked in a direction of the thickness. As shown in FIG. 2A, a plurality of game coins are stacked in a vertical direction (stacked in a stand mode); the height direction (H) of the target image is the vertical direction, and the width direction (W) of the target image is a direction perpendicular to the height direction (H). As shown in FIG. 2B, the plurality of game coins are stacked in a horizontal direction (stacked in a float mode); the height direction (H) of the target image is the horizontal direction, and the width direction (W) of the target image is a direction perpendicular to the height direction (H).

[0038] The target objects to be identified may be target objects placed in a target region. The target region may be a plane (for example, a tabletop), a container (for example, a box), and the like. An image of the target region may be acquired by an image acquisition apparatus near the target region, such as a camera or a webcam.

[0039] In the embodiments of the present disclosure, a deep learning network, such as an RCNN (Region-based Convolutional Neural Network), may be used to detect the acquired image to obtain a detection result of the target objects, where the detection result may be a bounding box. Based on the bounding box, the target image, which involves the plurality of target objects to be identified that are stacked, may be clipped out from the acquired image. It should be understood by a person skilled in the art that the RCNN is an example only, and other deep learning networks may also be used for target detection, which is not limited in the present disclosure.
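
As an illustrative aid (not part of the disclosure), the clipping step can be sketched as follows, assuming a detector that returns an axis-aligned bounding box as (x1, y1, x2, y2) pixel coordinates; the detection network itself is interchangeable:

```python
# A minimal sketch of clipping the target image from the acquired image.
import numpy as np

def clip_target_image(image: np.ndarray, box: tuple[int, int, int, int]) -> np.ndarray:
    """Crop the stacked-objects region (H x W x C) out of the acquired image."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]
```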

[0040] At step 102, a height of the target image is adjusted to a preset height.

[0041] A height direction of the target image corresponds to a direction in which the plurality of target objects to be identified are stacked, and the preset height may be an integer multiple of the thickness. Taking the stacked game coins shown in FIGS. 2A and 2B as an example, the direction of stacking the game coins displayed in FIGS. 2A and 2B may be determined to be the height direction of the target image, and accordingly, the radial direction of the game coins is determined to be the width direction of the target image.

[0042] At step 103, a feature map of the adjusted target image is extracted.

[0043] For the adjusted target image, a pre-trained feature extraction network may be used to obtain the feature map of the adjusted target image, where the feature extraction network may include a plurality of convolutional layers, or a plurality of convolutional layers and pooling layers, etc. After multi-layer feature extraction is completed, a low-level feature may be gradually converted into a middle-level or high-level feature, so as to improve the representation of the target image and facilitate subsequent processing.
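
For illustration only, such a feature extraction network might be sketched as below; the layer counts and channel widths are assumptions, not values from the disclosure:

```python
# A minimal sketch of a convolutional feature extraction backbone.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    def __init__(self, in_channels: int = 3, out_channels: int = 64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),   # low-level features
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                        # halves H and W
            nn.Conv2d(32, out_channels, kernel_size=3, padding=1),  # higher-level features
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [B, C, H, W] image -> feature map [B, out_channels, H/2, W/2]
        return self.layers(x)
```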

[0044] At step 104, the feature map is segmented in a dimension corresponding to the height direction of the target image to obtain a preset number of segment features.

[0045] The preset number of segment features may be obtained by segmenting the feature map in the height direction of the target image, where each segment feature may be considered to correspond to one target object, and the preset number is equal to a maximum number of target objects to be identified.

[0046] In one example, the feature map may include a plurality of dimensions, for example, a channel dimension, a height dimension, a width dimension, a batch dimension, etc. The format of the feature map may be expressed as, for example, [B C H W], where B indicates the batch dimension, C indicates the channel dimension, H indicates the height dimension, and W indicates the width dimension. The direction indicated by the height dimension of the feature map may be determined based on the height direction of the target image, and the direction indicated by the width dimension of the feature map may be determined based on the width direction of the target image.
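
A minimal sketch of the segmentation in step 104, assuming a [B, C, H, W] feature map whose height is an integer multiple of the preset number (the preset height can be chosen to guarantee this):

```python
import torch

def segment_feature_map(feature_map: torch.Tensor, num_segments: int):
    # Split along dim=2 (the H dimension) into exactly `num_segments` pieces.
    b, c, h, w = feature_map.shape
    assert h % num_segments == 0, "choose the preset height so H divides evenly"
    return feature_map.split(h // num_segments, dim=2)

fm = torch.randn(1, 64, 72 * 4, 28)     # [B, C, H, W] with H = 72 * 4
segments = segment_feature_map(fm, 72)  # 72 segment features, each [1, 64, 4, 28]
```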

[0047] At step 105, the target objects are identified based on each of the preset number of segment features.

[0048] Because each segment feature corresponds to one target object, compared with performing object identification directly on the feature map of the whole target image, identifying target objects by using each segment feature eliminates the impact of the number of target objects, and improves the identification accuracy for the target objects in the target image.

[0049] In some embodiments, an image acquisition apparatus disposed on a side of the target region may photograph a target image involving a plurality of target objects that are in a stand mode (called a side view image), or an image acquisition apparatus disposed above the target region may photograph a target image involving a plurality of target objects that are in a float mode (called an overhead view image).

[0050] In some embodiments, the height of the target image may be adjusted by the following method.

[0051] A preset height and a preset width corresponding to the target image are first obtained and used for resizing the target image, where the preset width may be set according to an average width of the target objects, and the preset height may be set according to an average height of the target objects and a maximum number of the target objects to be identified.

[0052] In one example, the height and width of the target image may be scaled in equal proportions until the width of the target image reaches a preset width, where scaling in equal proportions refers to enlarging or reducing the target image while keeping the proportion of the height to the width of the target image unchanged. The preset width and the preset height may be expressed in pixels or in other units, which is not limited in the present disclosure.

[0053] In a case that the width of the scaled target image reaches the preset width while a height of the scaled target image is greater than the preset height, the height and width of the scaled target image are reduced by equal proportions until a height of the reduced target image is equal to the preset height.

[0054] For example, assuming that the target objects are game coins, the preset width may be set to 224 pixels according to an average width of the game coins, and the preset height may be set to 1344 pixels based on an average height of the game coins and a maximum number of the game coins to be identified (for example, 72). First, the width of the target image may be scaled to 224 pixels, with the height scaled in equal proportions. In a case that the height of the scaled target image is greater than 1344 pixels, the height may be adjusted again to 1344 pixels, with the width adjusted in equal proportions, thereby adjusting the height of the target image to the preset height of 1344 pixels. In a case that the height of the scaled target image is already equal to 1344 pixels, no further adjustment is needed, and the height of the target image has been adjusted to the preset height of 1344 pixels.

[0055] In one example, the height and width of the target image are scaled in equal proportions until the width of the target image reaches the preset width. In a case that the width of the scaled target image reaches the preset width while the height of the scaled target image is less than the preset height, the scaled target image is filled with first pixels, so that the height of the filled target image is equal to the preset height.

[0056] The first pixel may be a pixel whose pixel value is zero, i.e., a black pixel. The first pixel may also be set to other pixel values, and the specific pixel value does not affect the effect of the embodiments of the present disclosure.

[0057] Still taking the case where the target objects are game coins, the preset width is 224 pixels, the preset height is 1344 pixels, and the maximum number is 72 as an example, the width of the target image may first be scaled to 224 pixels, with the height scaled in equal proportions. In a case that the height of the scaled target image is less than 1344 pixels, the region between the height of the scaled target image and 1344 pixels is filled with black pixels, so that the height of the filled target image is 1344 pixels. In a case that the height of the scaled target image is already equal to 1344 pixels, no filling is needed, and the height of the target image has been adjusted to the preset height of 1344 pixels.
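
The two branches above can be sketched together as one resizing routine; this assumes OpenCV-style H x W x C arrays and uses the 224/1344 values from the game-coin example:

```python
import cv2
import numpy as np

def adjust_height(image: np.ndarray, preset_w: int = 224, preset_h: int = 1344) -> np.ndarray:
    h, w = image.shape[:2]
    # Scale height and width in equal proportions until the width is preset_w.
    scale = preset_w / w
    image = cv2.resize(image, (preset_w, int(round(h * scale))))
    h = image.shape[0]
    if h > preset_h:
        # Reduce height and width in equal proportions until the height is preset_h.
        shrink = preset_h / h
        image = cv2.resize(image, (int(round(image.shape[1] * shrink)), preset_h))
    elif h < preset_h:
        # Fill the remaining region with first (zero-valued, black) pixels.
        pad = np.zeros((preset_h - h,) + image.shape[1:], dtype=image.dtype)
        image = np.concatenate([image, pad], axis=0)
    return image
```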

[0058] After the height of the target image is adjusted to the preset height, the feature map of the adjusted target image may be segmented in a dimension corresponding to the height direction of the target image to obtain a preset number of segment features.

[0059] Taking the feature map [B C H W] as an example, the feature map is segmented in the H dimension (the height dimension) according to the preset number, i.e., the maximum number of the target objects to be identified, for example, 72. In a case that the height of the scaled target image is less than the preset height, the scaled target image is filled so that the height of the filled target image reaches the preset height; in a case that the height of the scaled target image is greater than the preset height, the height is reduced by equal proportions to the preset height. In either case, the feature map is obtained from a target image with the preset height. Moreover, because the preset height is set according to the maximum number of the target objects to be identified, segmenting the feature map according to the maximum number yields the preset number of segment features, each segment feature corresponds to one target object, and the target objects are identified based on each segment feature. Therefore, the impact of the number of target objects can be reduced, and the identification accuracy for the target objects can be improved.

[0060] In some embodiments, for the filled target image, the preset number of segment features are obtained by segmenting the feature map of the filled target image. When the segment features are classified, a segment feature corresponding to a region filled with the first pixels yields a null classification result. For example, for segment features corresponding to a region filled with black pixels, the corresponding classification results may be determined to be null. The number of non-null classification results may be determined from the difference between the maximum number of target objects to be identified and the number of null classification results, or by directly identifying the segment features corresponding to the target objects. Thus, based on the number of non-null classification results, the number of target objects included in the target image may be determined.

[0061] Assuming that the maximum number of target objects to be identified is 72, the feature map of the target image is segmented into 72 segments, the target objects are identified according to each segment feature, and 72 classification results may be obtained. In a case that the target image includes a region filled with black pixels, the classification results corresponding to the segment features of the filled region are null. For example, in a case that 16 null classification results are obtained, 56 non-null classification results are obtained, so that the target image may be determined to include 56 target objects.
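
A minimal sketch of this counting logic, assuming the classifier reserves a dedicated "null" class index for segments that fall in the filled region (an implementation choice, not fixed by the disclosure):

```python
NULL_CLASS = 0  # hypothetical index reserved for the filled region

def count_objects(predicted_classes: list[int]) -> int:
    """Number of target objects = number of non-null classification results."""
    return sum(1 for c in predicted_classes if c != NULL_CLASS)

# 72 segment predictions, 16 of which are null -> 56 target objects.
preds = [NULL_CLASS] * 16 + [3] * 30 + [7] * 26
assert count_objects(preds) == 56
```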

[0062] A person skilled in the art should understand that the above preset width, preset height, and maximum number of target objects to be identified are examples, and the specific values of these parameters may be set based on actual needs. No limitation is made thereto in the embodiments of the present disclosure.

[0063] In some embodiments, both extracting the feature map and identifying the target objects are performed by a neural network, and the neural network is trained from a sample image and labeled information thereof. The neural network may include a feature extraction network and a classification network, where the feature extraction network is used for extracting the feature map of a resized target image, and the classification network is used for identifying target objects based on each of a preset number of segment features. The sample image includes a plurality of target objects.

[0064] In one example, the labeled information of the sample image includes a labeled category of each target object in the sample image; and the neural network is trained by: extracting features from a resized sample image to obtain a feature map of the resized sample image; identifying target objects in the sample image based on each segment feature that is obtained by segmenting the feature map, to obtain a predicted category of each of the target objects in the sample image; and adjusting a value of a parameter for the neural network based on the predicted category of each target object in the sample image and the labeled category of each target object in the sample image.

[0065] Taking game coins as an example, the category of each game coin is related to its denomination, and game coins having the same denomination belong to the same category. For a sample image involving a plurality of game coins that are stacked in a stand mode, the denomination of each game coin is labeled in the sample image. The neural network for identifying target objects is trained based on the sample image in which the denominations are labeled. The denomination of each game coin is predicted by the neural network based on the sample image; by using a difference between the predicted category and the labeled category, a value of a parameter for the neural network, for example, a value of a parameter for the feature extraction network and a value of a parameter for the classification network, is adjusted. When the difference between the predicted category and the labeled category is less than a set threshold, or the number of iterations reaches a set value, the training is completed.
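
One training step might look as follows, reusing the FeatureExtractor and segment_feature_map sketches above; the per-segment linear classifier, the mean pooling, and the optimizer settings are assumptions for illustration:

```python
import torch
import torch.nn as nn

num_classes = 10                        # hypothetical number of categories
backbone = FeatureExtractor()           # from the earlier sketch
head = nn.Linear(64, num_classes)       # per-segment classifier
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    list(backbone.parameters()) + list(head.parameters()), lr=1e-3)

def train_step(sample: torch.Tensor, labels: torch.Tensor) -> float:
    # sample: [B, 3, H, W] resized sample image; labels: [B, S] per-segment categories
    fm = backbone(sample)                                     # [B, 64, H', W']
    segs = torch.stack(segment_feature_map(fm, labels.shape[1]), dim=1)
    feats = segs.mean(dim=(3, 4))                             # pool each segment: [B, S, 64]
    logits = head(feats)                                      # [B, S, num_classes]
    loss = criterion(logits.flatten(0, 1), labels.flatten())  # predicted vs labeled category
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```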

[0066] In one example, the labeled information of the sample image further includes the number of target objects of each labeled category; and in this case, the value of the parameter for the neural network is adjusted based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, the number of target objects of each labeled category in the sample image, and the number of target objects of each predicted category in the sample image.

[0067] Still taking the plurality of game coins that are stacked in a stand mode as an example, the denomination information of each game coin and the number information of game coins of each denomination are annotated in the sample image. The neural network for identifying target objects is trained based on the sample image in which the above information is labeled. The denomination of each game coin and the number of game coins of the same denomination are predicted by the neural network based on the sample image. The value of the parameter for the neural network is adjusted based on the difference between the predicted result and the labeled information.

[0068] In one example, the labeled information of the sample image further includes the total number of target objects in the sample image; and in this case, the value of the parameter for the neural network is adjusted based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, a sum of numbers of target objects of respective predicted categories in the sample image, and the total number of target objects in the sample image.

[0069] Still taking the plurality of game coins that are stacked in a stand mode as an example, the denomination information of each game coin and the total number information of the game coins are annotated in the sample image. The neural network for identifying target objects is trained based on the sample image in which the above information is labeled. The denomination of each game coin and the total number of the game coins (i.e. the predicted results) are predicted by the neural network based on the sample image. The value of the parameter for the neural network is adjusted based on the difference between the predicted result and the labeled information.

[0070] In one example, the labeled information of the sample image includes the labeled category of each target object in the sample image, the number of target objects of each labeled category in the sample image, and the total number of target objects in the sample image; and in this case, the value of the parameter for the neural network is adjusted based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, the number of target objects of each labeled category in the sample image and the number of target objects of each predicted category in the sample image, a sum of numbers of target objects of respective predicted categories in the sample image, and the total number of target objects in the sample image.

[0071] Still taking the plurality of game coins that are stacked in a stand mode as an example, the denomination information of each game coin, the number information of each denomination of game coins, and the total number information of the game coins are labeled in the sample image. The neural network for identifying target objects is trained based on the sample image in which the above information is labeled. The denomination of each game coin, the number of game coins of each denomination, and the total number of the game coins are predicted by the neural network based on the sample image. The value of the parameter for the neural network is adjusted based on the difference between the predicted result and the labeled information.

[0072] In the embodiments of the present disclosure, the loss function used for training the neural network includes at least one of the following: a cross-entropy loss, a number loss of each category of target objects, or a total number loss of the target objects. That is, in addition to the cross-entropy loss, the loss function may also include the number loss of each category of target objects, and the total number loss of the target objects, thereby improving the ability to identify the number of target objects.
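
A minimal sketch of such a combined loss; the soft-count formulation, the L1 form of the count terms, and the weights are assumptions (the disclosure only names the loss terms):

```python
import torch
import torch.nn.functional as F

def combined_loss(logits: torch.Tensor, labels: torch.Tensor,
                  num_classes: int, w_cat: float = 0.1, w_total: float = 0.1):
    # logits: [N, num_classes] over all segments; labels: [N]; class 0 = null (filled region)
    ce = F.cross_entropy(logits, labels)                 # cross-entropy loss
    probs = logits.softmax(dim=1)
    # Soft per-category counts keep the count losses differentiable.
    pred_counts = probs.sum(dim=0)
    true_counts = F.one_hot(labels, num_classes).float().sum(dim=0)
    cat_loss = (pred_counts - true_counts).abs().mean()  # number loss per category
    # Total objects = segments not assigned to the null class.
    total_loss = (probs[:, 1:].sum() - (labels != 0).float().sum()).abs()
    return ce + w_cat * cat_loss + w_total * total_loss
```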

[0073] In some embodiments, when training the neural network, the training data may be augmented, so that the neural network for identifying the category and number of the target objects provided by the embodiments of the present disclosure may be better applied to actual scenes. For example, data augmentation may be performed by using any one or more of the following: flipping the sample image horizontally, rotating the sample image at a set angle, performing color transformation on the sample image, performing brightness transformation on the sample image, and the like.
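
The listed augmentations map directly onto standard torchvision transforms; the probabilities and ranges below are illustrative, not values from the disclosure:

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),        # flip the sample image horizontally
    transforms.RandomRotation(degrees=5),          # rotate at a set angle
    transforms.ColorJitter(brightness=0.2,         # brightness transformation
                           saturation=0.2,         # color transformation
                           hue=0.1),
])
```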

[0074] The method for identifying target objects provided by multiple embodiments of the present disclosure may be used for identifying a plurality of categories of target objects by using the segmented feature maps, and as the number of categories increases, the identification precision for each category of target objects does not decrease.

[0075] In some embodiments, the trained neural network may also be tested; an identification accuracy for each category of target objects identified by the neural network is ranked based on a result of the testing, to obtain an identification accuracy rank result; an identification error rate for each category of target objects identified by the neural network is ranked based on the result of the testing, to obtain an identification error rate rank result; and the neural network is further trained based on the identification accuracy rank result and the identification error rate rank result.

[0076] The identification accuracy rank result and the identification error rate rank result of the respective categories of target objects may be stored in a two-dimensional table. For example, the identification accuracy rank results are stored in the table in order from top to bottom, and the identification error rate rank results are stored in order from left to right; for categories within a set range in the table, for example, the categories within the third row and first three columns of the table, the training is continued so as to improve the identification precision and accuracy of the neural network for these categories.
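
A minimal sketch of picking the categories to keep training, assuming per-category accuracy and error-rate values obtained from the testing; the cutoff (top_k) standing in for the "set range" is an assumption:

```python
def categories_to_retrain(accuracy: dict[str, float],
                          error_rate: dict[str, float],
                          top_k: int = 3) -> set[str]:
    worst_acc = sorted(accuracy, key=accuracy.get)[:top_k]                    # lowest accuracy first
    worst_err = sorted(error_rate, key=error_rate.get, reverse=True)[:top_k]  # highest error first
    return set(worst_acc) & set(worst_err)  # categories poor under both rank results
```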

[0077] FIG. 3 is a schematic block diagram of an apparatus for identifying target objects according to at least one embodiment of the present disclosure. As shown in FIG. 3, the apparatus includes: an obtaining unit 301 configured to clip out a target image from an acquired image, where the target image involves a plurality of target objects to be identified that are stacked; an adjusting unit 302 configured to adjust a height of the target image to a preset height, where a height direction of the target image corresponds to a direction in which the plurality of target objects to be identified are stacked; an extracting unit 303 configured to extract a feature map of the adjusted target image; a segmenting unit 304 configured to segment the feature map in a dimension corresponding to the height direction of the target image to obtain a preset number of segment features; and an identifying unit 305 configured to identify the target objects based on each of the preset number of segment features.

[0078] In some embodiments, the adjusting unit 302 is configured to: scale the height and a width of the target image in equal proportions until a width of the scaled target image reaches a preset width; and in a case that the width of the scaled target image reaches the preset width while a height of the scaled target image is greater than the preset height, reduce the height and width of the scaled target image by equal proportions until a height of the reduced target image is equal to the preset height.

[0079] In some embodiments, the adjusting unit 302 is configured to: scale the height and a width of the target image in equal proportions until a width of the scaled target image reaches a preset width; and in a case that the width of the scaled target image reaches the preset width while a height of the scaled target image is less than the preset height, fill the scaled target image with first pixels, so that a height of the filled target image is equal to the preset height.

[0080] In some embodiments, each of the target objects to be identified in the target image is sheetlike and has an equal thickness, and the plurality of target objects to be identified are stacked in a direction of the thickness; and the preset height is an integer multiple of the thickness.

[0081] In some embodiments, both extracting the feature map and identifying the target objects are performed by a neural network which is trained from a sample image and labeled information thereof.

[0082] In some embodiments, the labeled information of the sample image includes a labeled category of each target object in the sample image; and the apparatus further includes a training unit configured to train the neural network by: extracting features from a resized sample image to obtain a feature map of the resized sample image; identifying target objects in the sample image based on each segment feature that is obtained by segmenting the feature map, to obtain a predicted category of each of the target objects in the sample image; and adjusting a value of a parameter for the neural network based on the predicted category of each target object in the sample image and the labeled category of each target object in the sample image.

[0083] In some embodiments, the labeled information of the sample image further includes a number of target objects of each labeled category; and when adjusting the value of the parameter for the neural network based on the predicted category of each target object in the sample image and the labeled category of each target object in the sample image, the training unit is configured to: adjust the value of the parameter for the neural network based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, the number of target objects of each labeled category in the sample image, and a number of target objects of each predicted category in the sample image.

[0084] In some embodiments, the labeled information of the sample image further includes a total number of target objects in the sample image; and when adjusting the value of the parameter for the neural network based on the predicted category of each target object in the sample image and the labeled category of each target object in the sample image, the training unit is configured to: adjust the value of the parameter for the neural network based on the predicted category of each target object in the sample image, the labeled category of each target object in the sample image, a sum of numbers of target objects of respective predicted categories in the sample image, and the total number of target objects in the sample image.

[0085] In some embodiments, the apparatus further includes a testing unit configured to: test the trained neural network; rank an identification accuracy for each category of target objects identified by the neural network based on a result of the testing, to obtain an identification accuracy rank result; rank an identification error rate for each category of target objects identified by the neural network based on the result of the testing, to obtain an identification error rate rank result; and further train the neural network based on the identification accuracy rank result and the identification error rate rank result.

[0086] FIG. 4 is a schematic block diagram of an electronic device according to at least one embodiment of the present disclosure. As shown in FIG. 4, the electronic device may include a processor and a memory storing instructions executable by the processor, where the processor is configured to execute the instructions to implement the method for identifying target objects according to any of the implementations of the present disclosure.

[0087] At least one embodiment of the present disclosure may further provide a computer readable storage medium having computer program instructions stored thereon. When executed by a processor, the computer program instructions may cause the processor to implement the method for identifying target objects according to any of the implementations of the present disclosure.

[0088] A person skilled in the art should understand that one or more embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of the present disclosure may take the form of a full hardware embodiment, a full software embodiment, or an embodiment combining software and hardware. Moreover, one or more embodiments of the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, an optical memory, etc.) having computer-usable program codes stored therein.

[0089] " And/or" in the present disclosure indicates at least one of the two, for example, "A and/or B" includes three solutions: A, B, and "A and B".

[0090] The embodiments in the present disclosure are described in a progressive manner; for the same or similar parts among the various embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the data processing device embodiments are substantially similar to the method embodiments and therefore are described only briefly; for the relevant parts, reference may be made to the descriptions of the method embodiments.

[0091] The foregoing describes specific embodiments of the present disclosure. There may be other embodiments within the scope of the appended claims. In some cases, actions or steps recited in the claims may be performed in a different order than in the embodiments and may still achieve a desired result. In addition, the processes depicted in the accompanying drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired result. In some implementations, multi-task processing and parallel processing are possible or may be advantageous.

[0092] Embodiments of the subject matter and functional operations described in the specification may be implemented in the following: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware including the structures disclosed in the specification and structural equivalents thereof, or a combination of one or more thereof. Embodiments of the subject matter described in the specification may be implemented as one or more computer programs, that is, one or more modules of computer program instructions encoded on a tangible non-transitory program carrier to be executed by a data processing apparatus or to control operations of the data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, such as a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode and transmit information to a suitable receiver apparatus to be executed by the data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more thereof.

[0093] The processes and logic flows described in the specification may be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating an output. The processes and logic flows may also be performed by a special logic circuit, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), and the apparatus may also be implemented as a special logic circuit.

[0094] Computers suitable for executing a computer program include, for example, a general-purpose microprocessor and/or a special-purpose microprocessor, or any other type of central processing unit. Generally, the central processing unit receives instructions and data from a read-only memory (ROM) and/or a random access memory (RAM). Basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, the computer further includes one or more mass storage devices for storing data, such as a magnetic disk, a magneto-optical disk, or an optical disk, or the computer is operatively coupled to the mass storage device to receive data from it, transmit data to it, or both. However, the computer does not necessarily have such a device. Moreover, the computer may be embedded in another device, such as a mobile phone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, for example, a Universal Serial Bus (USB) flash drive, to name a few.

[0095] A computer-readable medium suitable for storing computer program instructions and data includes all forms of non-volatile memory, media, and memory devices, including, for example, a semiconductor memory device (such as an EPROM, an EEPROM, and a flash device), a magnetic disk (such as an internal hard disk or a removable disk), a magneto-optical disk, and a CD-ROM and DVD-ROM disk. The processor and the memory may be supplemented by, or incorporated in, a special logic circuit.

[0096] Although the present disclosure includes many specific implementation details, these details should not be interpreted as limiting the present disclosure, but rather are mainly used to describe the features of specific embodiments of the present disclosure. Some features that are described in multiple embodiments in the present disclosure may also be implemented in combination in a single embodiment. On the other hand, various features described in a single embodiment may be separately implemented in multiple embodiments or in any suitable sub-combination. Moreover, although the features may function in some combinations as described above and even initially claimed, one or more features from a claimed combination may be removed from the combination in some cases, and the claimed combination may point to a sub-combination or a variant of the sub-combination.

[0097] Similarly, although operations are depicted in the accompanying drawings in a particular order, this should not be construed as requiring these operations to be performed in the particular order shown or sequentially, or that all illustrated operations be performed to achieve a desired result. In some cases, multi-task and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the foregoing embodiments should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products.

[0098] Therefore, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the appended claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve a desired result. Moreover, the processes depicted in the accompanying drawings are not necessarily required to be in the particular order or sequentially shown to achieve a desired result. In certain implementations, multi-task and parallel processing may be advantageous.

[0099] Some embodiments of the present disclosure are described above, and they shall not be interpreted as limiting the present disclosure. Any modification, alteration, or equivalent substitution made based on the spirit and principle of the present disclosure shall fall within the scope of the present disclosure.