


Title:
LEARNING ORDINAL REPRESENTATIONS FOR DEEP REINFORCEMENT LEARNING BASED OBJECT LOCALIZATION
Document Type and Number:
WIPO Patent Application WO/2022/217122
Kind Code:
A1
Abstract:
A reinforcement learning based approach to the problem of query object localization, where an agent is trained to localize objects of interest specified by a small exemplary set. We learn a transferable reward signal formulated using the exemplary set by ordinal metric learning. It enables test-time policy adaptation to new environments where the reward signals are not readily available, and thus outperforms fine-tuning approaches that are limited to annotated images. In addition, the transferable reward allows repurposing of the trained agent for new tasks, such as annotation refinement, or selective localization from multiple common objects across a set of images. Experiments on corrupted MNIST dataset and CU-Birds dataset demonstrate the effectiveness of our approach.

Inventors:
HAN SHAOBO (US)
MIN RENQIANG (US)
LI TINGFENG (US)
Application Number:
PCT/US2022/024118
Publication Date:
October 13, 2022
Filing Date:
April 08, 2022
Assignee:
NEC LAB AMERICA INC (US)
International Classes:
G06V10/82; G06N3/04; G06N3/08; G06V10/25
Foreign References:
US20210027098A1 (2021-01-28)
US20200202171A1 (2020-06-25)
US20200218888A1 (2020-07-09)
US20210073986A1 (2021-03-11)
US20160098633A1 (2016-04-07)
Attorney, Agent or Firm:
KOLODKA, Joseph (US)
Claims:
Claims:

1. A deep reinforcement learning (RL) method for object localization comprising:
acquiring a seed dataset including a set of seed images, each with a ground truth bounding box annotation;
pretraining an ordinal embedding by randomly perturbing the ground truth bounding box at different levels denoted by a parameter p, said ordinal embedding satisfying an ordinal constraint locally for each pair of perturbed data augmented from the same image, wherein the pretraining is performed through the effect of a backbone network, a region of interest (RoI) head, and a triplet loss;
using an embedding function, configuring RL agents to start from a whole image and recursively sample actions from a discrete action space such that rewards are produced, the reward of a sampled action being determined from embedding distances, and updating a policy network based on the rewards so determined; and
outputting an annotation policy and an embedding function.

2. The method of claim 1 wherein the seed image bounding box annotation is initially provided by a human action.
Description:
LEARNING ORDINAL REPRESENTATIONS FOR DEEP REINFORCEMENT LEARNING BASED OBJECT LOCALIZATION

TECHNICAL FIELD

[0001] This disclosure relates generally to image processing and recognition. More particularly, it describes systems and methods for learning ordinal representations for deep reinforcement learning based object localization.

BACKGROUND

[0002] As those skilled in the art will readily appreciate, in many fields of endeavor it is often of interest to automatically discover one or more types of common objects within an image or a set of images. Notably, fully supervised object detection or localization methods require a large amount of human annotation (i.e., bounding boxes around target objects) in training, which is expensive and impractical in cost-sensitive applications. For example, in distributed fiber optic sensing or digital pathology, high-quality annotations from experienced human experts are very limited, while weakly supervised object detection or localization (WSOD or WSOL) approaches require only image-level annotations (classes). However, such learned annotations are often partial, covering only the most discriminative region of the target object instead of its integral regions. Finally, existing approaches for co-localization are unsupervised, and may provide unwanted common objects as output if the image dataset contains more than one type of common object.

SUMMARY

[0003] An advance in the art is made according to aspects of the present disclosure directed to systems and methods that address the issues noted above. Advantageously, our inventive approach requires only a “seed dataset” with accurate bounding box annotations.

[0004] In sharp contrast to traditional fully-supervised object detection/localization approaches, our algorithm requires a much smaller seed dataset. Starting from the seed dataset, a large number of perturbed boxes are sampled as the reinforcement learning agent explores the image environment. The preference among these perturbed boxes is naturally determined by the Intersection over Union (IoU) with the ground truth bounding box of the image. We encode this information into an ordinal representation jointly trained with a reinforcement learning annotation agent. Existing deep reinforcement learning based object localization methods fail to encode this information and therefore exhibit much worse sample efficiency.

[0005] In further contrast to WSOD/WSOL methods, our approach focuses explicitly on the similarity between common objects across different images within the same image class, instead of the discrimination across different classes. Image-level class labels can be incorporated but are not mandated.

[0006] More specifically, any ambiguity on the class of the target object in co-localization is avoided by designating the target object explicitly in the seed dataset. The algorithm works in a human-in-the-loop manner. In particular, when given an image dataset, a human starts by annotating a few images, and the reinforcement learning agent automatically labels the rest of them following the human's guidance.

[0007] Our inventive framework is motivated by common challenges of image data annotation in fiber sensing tasks, which is very time-consuming and laborious. However, our method can be applied to other data modalities/applications as well, such as images in digital pathology, object tracking in video, and temporal localization for sound event detection.
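To make the IoU-based preference in paragraph [0004] concrete, the following is a minimal Python sketch (not the patented implementation) of sampling a perturbed box around a ground-truth box at a perturbation level p and ranking two perturbed boxes by their IoU with the ground truth. The (x1, y1, x2, y2) box convention and all function names are illustrative assumptions.

import random

def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def perturb_box(gt, p, img_w, img_h):
    """Randomly shift and rescale a ground-truth box; larger p means a stronger perturbation."""
    x1, y1, x2, y2 = gt
    w, h = x2 - x1, y2 - y1
    dx, dy = random.uniform(-p, p) * w, random.uniform(-p, p) * h
    scale = 1.0 + random.uniform(-p, p)
    cx, cy = (x1 + x2) / 2 + dx, (y1 + y2) / 2 + dy
    nw, nh = w * scale, h * scale
    box = (cx - nw / 2, cy - nh / 2, cx + nw / 2, cy + nh / 2)
    # clip to the image bounds
    return (max(0.0, box[0]), max(0.0, box[1]), min(img_w, box[2]), min(img_h, box[3]))

# The preference between two perturbed boxes is given by their IoU with the ground truth:
gt = (20.0, 20.0, 48.0, 48.0)
b1 = perturb_box(gt, p=0.1, img_w=84, img_h=84)
b2 = perturb_box(gt, p=0.4, img_w=84, img_h=84)
prefer_b1 = iou(b1, gt) > iou(b2, gt)

Pairs of such boxes, ordered by IoU, are the raw material for the ordinal representation described below.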
[0008] Operationally, we view each image as an environment that an annotation agent can interact with by moving the bounding box. The learned localizing strategy shall be generalizable to new environments (images). To facilitate information sharing from multiple learning stages and across different images, the reward is not given directly via IoU, but indirectly via distances in the learned latent representation.

[0009] With our inventive approach, ordinal representation learning and deep reinforcement learning (RL) are jointly trained with mutual benefits. The representation learning model is trained not only on precisely annotated data, but also on augmented data with perturbations. Existing representation learning methods do not directly yield more compact clusters on the correctly annotated data; therefore, the reward can only be defined on the original data, not on its latent embedding. In our approach, a latent embedding function is trained to preserve the ordinal relationship between a pair of imperfect annotations on the same image. In other words, the embedding of a bounding box with higher IoU will be closer to the embedding of the ground truth bounding box than that of a box with lower IoU. As a result, the RL reward can be defined based on embedding distance.

[0010] If the ordinal embedding were trained separately from the deep RL agent, with the perturbed samples generated randomly, the majority of samples would not lie on the search path of the RL agent and would therefore be redundant and inefficient. In the proposed joint training scheme, the box pairs are sampled while the RL agent is exploring the embedding space, so the ordinal embedding can be trained more efficiently. At different stages of learning, the supervision is customized: the model will learn to assign preference to a pair of better-annotated boxes at a later stage of training.

[0011] As a byproduct, the embedding distance also provides a metric for assessing the quality of annotation. Given a set of images with both high- and low-quality annotations, the well-annotated data falls into compact clusters in our ordinal embedding space and can therefore be selected. The quality of annotations can be ranked according to the distance to the cluster centroids of the filtered data.

[0012] Finally, our recurrent neural network (RNN) based methods allow explorations starting from the whole image. This makes our approach applicable to large-scale single-image co-localization problems that contain multiple common objects of the same class, even if the targeted objects are of different sizes and the images are high-resolution. The interactive process between the human and the RL annotator works as follows. A human initiates the annotation process by labeling one or two target objects of interest. The annotation agent starts by looking at the whole image at a coarse resolution, and follows a top-down scheme to localize the objects in the rest of the images by taking a sequence of recursive actions. The human can accept or reject the selected objects, and/or run the annotator again, until no new objects are found.

BRIEF DESCRIPTION OF THE DRAWING

[0013] A more complete understanding of the present disclosure may be realized by reference to the accompanying drawing in which:

[0014] FIG. 1 is a schematic diagram illustrating the joint training framework of annotation agent and data representation according to aspects of the present disclosure;
[0015] FIG. 2 is a schematic flow diagram illustrating a model training process according to aspects of the present disclosure;

[0016] FIG. 3 is a schematic diagram illustrating application 1 - human guided automatic annotation of fiber sensing dataset(s), wherein well-annotated data can benefit downstream training of an event classifier, according to aspects of the present disclosure;

[0017] FIG. 4 is a schematic diagram illustrating application 2 - worker quality assessment and improvement for a crowdsourcing based image annotation platform, wherein high quality annotations can be identified and low quality data can be corrected by the trained agent, according to aspects of the present disclosure;

[0018] FIG. 5 is a schematic diagram illustrating ordinal representation learning of the embedding net and triplet loss according to aspects of the present disclosure;

[0019] FIG. 6 is a schematic diagram illustrating the ordinal embedding based reward and action space according to aspects of the present disclosure;

[0020] FIG. 7 is a schematic diagram illustrating a complete recurrent neural network (RNN) based architecture of the RL agent and ordinal representation learning according to aspects of the present disclosure;

[0021] FIG. 8(A), FIG. 8(B), and FIG. 8(C) illustrate the action sequence of the RL agent and the convergence of learning, with plots of co-localization of digit 4 from a cluttered background and the convergence of embedding distance to ground truth, according to aspects of the present disclosure;

[0022] FIG. 9 is a dataset comparing fixed embedding vs. training embedding during RL updates according to aspects of the present disclosure;

[0023] FIG. 10 is a dataset showing an agent trained and tested on digit 4, as well as on other new digits 0-9, according to aspects of the present disclosure;

[0024] FIG. 11 is a schematic diagram showing RL-based query object localization having a reward signal defined on an exemplary set rather than bounding boxes according to aspects of the present disclosure;

[0025] FIG. 12 is a schematic diagram showing an illustrative RoI encoder and projection head according to aspects of the present disclosure;

[0026] FIG. 13(A) and FIG. 13(B) are datasets that illustrate: FIG. 13(A) random sampling and anchor sampling on OrdAcc (%); and FIG. 13(B) a comparison with and without sign for the IoU reward on CorLoc (%), according to aspects of the present disclosure;

[0027] FIG. 14(A) and FIG. 14(B) are plots that illustrate a comparison under different train set sizes according to aspects of the present disclosure;

[0028] FIG. 15(A) and FIG. 15(B) are datasets that illustrate: FIG. 15(A) CorLoc (%); and FIG. 15(B) a comparison of four training strategies according to the anchor used, according to aspects of the present disclosure;

[0029] FIG. 16 is a dataset that illustrates performance on different digits according to the anchor used according to aspects of the present disclosure;

[0030] FIG. 17 is a plot showing adaptation before, after, and with finetuning, according to the anchor used, according to aspects of the present disclosure;

[0031] FIG. 18(A) and FIG. 18(B) are datasets that illustrate: FIG. 18(A) performance from loosely to tightly annotated bounding boxes; and FIG. 18(B) performance when transferring to other backgrounds, according to aspects of the present disclosure; and

[0032] FIG. 19 is a listing of Algorithm 1 for training the reward and localization agent according to aspects of the present disclosure.

DESCRIPTION

[0033] The following merely illustrates the principles of the disclosure.
It will thus be appreciated that those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its spirit and scope.

[0034] Furthermore, all examples and conditional language recited herein are intended to be only for pedagogical purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.

[0035] Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

[0036] Thus, for example, it will be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure.

[0037] FIG. 1 is a schematic diagram illustrating the joint training framework of annotation agent and data representation according to aspects of the present disclosure.

[0038] FIG. 2 is a schematic flow diagram illustrating a model training process according to aspects of the present disclosure.

[0039] FIG. 3 is a schematic diagram illustrating application 1 - human guided automatic annotation of fiber sensing dataset(s), wherein well-annotated data can benefit downstream training of an event classifier, according to aspects of the present disclosure.

[0040] FIG. 4 is a schematic diagram illustrating application 2 - worker quality assessment and improvement for a crowdsourcing based image annotation platform, wherein high quality annotations can be identified and low quality data can be corrected by the trained agent, according to aspects of the present disclosure.

[0041] As we shall now describe, our inventive method/algorithm involves three steps in training.

[0042] Step 1: Identify a set of seed images. This can be acquired from human experts, from a pre-selection heuristic, or from a third-party dataset.

[0043] Step 2: Pretrain the ordinal embedding. Given a seed dataset, pretrain by randomly perturbing the ground truth bounding box at different levels. The levels of perturbation are denoted by the parameter p. The ordinal embedding needs to satisfy the ordinal constraint locally for each pair of perturbed data augmented from the same image. FIG. 5 is a schematic diagram illustrating ordinal representation learning of the embedding net and triplet loss according to aspects of the present disclosure.

[0044] Step 3: Reinforcement learning. Given an embedding function, the RL agents start from the whole image and recursively sample actions from a discrete action space. FIG. 6 is a schematic diagram illustrating the ordinal embedding based reward and action space according to aspects of the present disclosure. The rewards of actions are calculated from the embedding distances. The policy network (action head) is jointly updated with the embedding network. The neural network architecture is detailed in FIG. 7, which is a schematic diagram illustrating a complete recurrent neural network (RNN) based architecture of the RL agent and ordinal representation learning according to aspects of the present disclosure.
[0045] The effectiveness of the proposed approach is evaluated on the Cluttered MNIST benchmark dataset. FIG. 8(A), FIG. 8(B), and FIG. 8(C) illustrate the action sequence of the RL agent and the convergence of learning, with plots of co-localization of digit 4 from a cluttered background and the convergence of embedding distance to ground truth, according to aspects of the present disclosure. The figures demonstrate the advantages of joint training in terms of final localization performance, and show that an agent trained on a co-localization task for one digit can adapt to find new classes of common objects (digits 0-3 and 5-9) that are unseen in the training phase.

[0046] Our inventive system and method jointly conduct ordinal representation learning and deep reinforcement learning to overcome the shortage of high-quality annotated data. Our system and method can be applied broadly to fully supervised, weakly-supervised, and co-localization tasks.

[0047] Our system and method employ the human-in-the-loop paradigm, which effectively utilizes a limited amount of high-quality, high-confidence human annotated data to identify and improve the quality of low-quality annotated data.

[0048] As those skilled in the art will readily understand and appreciate, our inventive system and method may benefit a number of applications, namely: 1) as a tool to automatically annotate unlabeled datasets in cost-sensitive applications including but not limited to fiber sensing; 2) as a tool to enhance the interpretability of deep neural networks, such as the class activation map (CAM) methods; 3) as a tool to assess the quality of annotation and improve low-quality annotations on crowdsourcing platforms; and 4) as a tool to localize multiple common target objects within the same image, such as crops in satellite images for intelligent agriculture or cells in whole-slide images for digital pathology.

[0049] The illustrative embodiments are described more fully by the Figures and detailed description. Embodiments according to this disclosure may, however, be embodied in various forms and are not limited to the specific or illustrative embodiments described in the drawing and detailed description.

[0050] FIG. 9 is a dataset comparing fixed embedding vs. training embedding during RL updates according to aspects of the present disclosure.

[0051] FIG. 10 is a dataset showing an agent trained and tested on digit 4, as well as on other new digits 0-9, according to aspects of the present disclosure.

[0052] At this point we describe a reinforcement learning based approach to the problem of query object localization, where an agent is trained to localize objects of interest specified by a small exemplary set. We learn a transferable reward signal formulated using the exemplary set by ordinal metric learning. It enables test-time policy adaptation to new environments where the reward signals are not readily available, and thus outperforms fine-tuning approaches that are limited to annotated images. In addition, the transferable reward allows repurposing of the trained agent for new tasks, such as annotation refinement, or selective localization from multiple common objects across a set of images. Experiments on the corrupted MNIST dataset and the CU-Birds dataset demonstrate the effectiveness of our approach.
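Before turning to the detailed formulation, the following is a minimal PyTorch sketch of the ordinal-embedding pretraining described in Step 2 above: crops perturbed from the same image are embedded, and a triplet loss pushes the crop with higher IoU (the positive) closer to the anchor crop than the crop with lower IoU (the negative). The tiny encoder, the 28x28 crop size, the margin value, and all names are illustrative assumptions rather than the disclosed backbone and RoI head.

import torch
import torch.nn as nn
import torch.nn.functional as F

class OrdinalEmbedder(nn.Module):
    """Toy stand-in for the backbone + RoI head: maps a fixed-size crop to an embedding."""
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def triplet_step(model, opt, anchor, pos, neg, margin=0.2):
    """One pretraining step: the crop with higher IoU (pos) must embed closer to the anchor
    crop (e.g., the ground-truth crop) than the crop with lower IoU (neg)."""
    f_a, f_p, f_n = model(anchor), model(pos), model(neg)
    loss = F.triplet_margin_loss(f_a, f_p, f_n, margin=margin)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

model = OrdinalEmbedder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# anchor: ground-truth crop; pos/neg: perturbed crops with higher/lower IoU, resized to 28x28
anchor, pos, neg = (torch.randn(8, 1, 28, 28) for _ in range(3))
triplet_step(model, opt, anchor, pos, neg)

In the joint scheme of the disclosure, the pairs fed to this loss are sampled along the agent's search path rather than purely at random, which is what makes the pretraining and the policy learning mutually beneficial.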
[0053] In this disclosure, we focus on the reinforcement learning (RL) formulation of the problem of query object localization, where an agent is trained to localize the target object specified by a small set of exemplary images. The vision-based agent can be viewed as a proactive information gatherer that actively interacts with the image environment, following a class-specific localization policy, and is thus more suitable for robotic manipulation or embodied AI tasks.

[0054] During test time, the queried object to localize may be novel, or the background environment may undergo substantial change, hindering the applicability of class-agnostic agents with a fixed policy. When a reward signal is available, fine-tuning methods can effectively adapt agents to the new environment and yield improved performance. In contrast to standard RL settings, the reward signal is not available in our application during test time, as the bounding box annotations are to be found by the localization agent on the test images.

[0055] To address this problem, we describe an ordinal metric learning based framework for learning an implicitly transferable reward signal defined with a small exemplary set. An ordinal embedding network is pre-trained with data augmentation under a loss function designed to be relevant to the RL task. The reward signal allows explicit updates of the controller in the policy network with continual training during test time. Compared to fine-tuning approaches, the agent can be exposed to the new environment more extensively, with unlimited usage of test images. Because it is informed precisely by the exemplary set, the agent is versatile to changes of the localization target.

[0056] FIG. 11 is a schematic diagram showing RL-based query object localization having a reward signal defined on an exemplary set rather than bounding boxes according to aspects of the present disclosure.

[0057] As compared to bounding-box regression approaches, off-policy RL based object localization approaches have the advantage of being region-proposal free, with customized search paths for each image environment. The specificity of the agent depends purely on the classes of the bounding boxes used in the reward. They can be made class-specific, but an agent for each class would then need to be trained separately.

[0058] Despite the rise of crowdsourcing platforms, obtaining an ample amount of bounding-box annotations remains costly and error-prone. Furthermore, the quality of annotation often varies, and precise annotations for certain object classes may require special expertise from annotators. The emergence of weakly supervised object localization (WSOL) methods alleviates the situation; these methods utilize image class labels in deriving bounding box annotations. It is known, however, that WSOL methods have the drawbacks of relying overly on inter-class discriminative features and of not being able to generalize to classes unseen during the training phase.

[0059] We note that intra-class similarity is a more natural objective for the problem of localizing objects belonging to the target class. A similar problem is image co-localization, where the task is to identify the common objects within a set of images. Co-localization approaches exploit the common characteristics across images to localize objects. Being unsupervised, co-localization approaches can suffer from ambiguity if there exist multiple common objects or parts, e.g., bird head and body, and may provide unwanted common objects as output.
[0060] There exists an apparent contradiction between the goals of training an agent with high task-specificity and, at the same time, better generalization performance to new situations. The key to reconciling these two goals lies in the usage of a small set of examples. There has been a paradigm shift from training static models defined with parameters to models defined together with a support set, which have proven very effective in few-shot training.

[0061] Besides the effort of meta-learning implicitly adjustable models, fine-tuning a pretrained model has also been used to transfer knowledge from data-abundant to data-scarce tasks. When a reward signal is not available, a policy adaptation approach may be employed in which the intermediate representation is fine-tuned by optimizing a self-supervised auxiliary loss while the controller is kept fixed. Our disclosure shares the same motivation of test-time training, but we focus instead on settings where the controller needs to be adapted or even repurposed for new tasks.

[0062] In query object localization, we are given a set of images I and a small set of exemplary images E. The image annotation is available in the form of a bounding box g. Our goal is to find the location of the bounding box containing the queried object in each image without candidate boxes.

[0063] Considering each image I_i as an environment, existing RL approaches for object localization use its ground-truth object bounding box g_i as the reward signal,

R = sign(IoU(b_t, g_i) - IoU(b_{t-1}, g_i)),   (1)

[0064] where IoU(b_t, g_i) denotes the Intersection-over-Union (IoU) between the current window b_t and the corresponding ground-truth box g_i, and IoU(b, g) = area(b ∩ g) / area(b ∪ g). Similar to bounding box regression approaches, which learn a mapping f : I → g, the image and box must be paired. However, annotated image-box pairs (I, g) may be scarce in both the training and testing phases. The reward signal in (1) is not transferable across training images, not to mention test images with potential domain shifts.

[0065] To address this problem, a natural idea is to define the reward signal based on the distance between the image cropped by the current window b_t and the one cropped by the ground-truth window g. Given their M-dimensional representations b_t and g, produced by an embedding function f : R^D → R^M from D-dimensional image feature vectors, a distance function d : R^M × R^M → R returns the embedding distance d(b_t, g). However, this distance may not decrease monotonically as the agent approaches the ground-truth box g. As a result, an embedding distance based reward signal may be less effective than (1).

[0066] To address this, we propose an ordinal embedding based reward signal. For any two boxes b_j, b_k perturbed from g in a constraint set C, embeddings b_j, b_k, g are learned such that the relative preference between any pair of boxes is preserved in the Euclidean space,

p_j > p_k  ⇔  ||b_j - g|| < ||b_k - g||,  for all j, k ∈ C,   (2)

[0067] where p_j and p_k denote the preferences (derived from the IoU to the ground-truth box or from ordinal feedback from a user). This problem was originally posed as non-metric multidimensional scaling. Although we apply a very simple pairwise-based approach, other extensions exist, such as listwise-based, quadruplet-based, and landmark-based approaches.
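As a reference point for Eqs. (1) and (2) above, here is a small sketch contrasting the IoU-based sign reward with an embedding-distance based reward, together with a check of the ordinal constraint. The helper names and NumPy vectors are illustrative assumptions; the same embedding-distance idea reappears later with the prototype of the exemplary set as the anchor.

import numpy as np

def sign(x):
    """Return +1.0 for an improvement, -1.0 otherwise (the {+1, -1} reward convention)."""
    return 1.0 if x > 0 else -1.0

def iou_reward(b_t, b_prev, g, iou):
    """Eq. (1): the sign of the IoU improvement of the current window w.r.t. the ground-truth box g."""
    return sign(iou(b_t, g) - iou(b_prev, g))

def embedding_reward(z_t, z_prev, z_anchor):
    """Embedding-space analogue: positive if the new window's embedding moved closer to the anchor."""
    return sign(np.linalg.norm(z_prev - z_anchor) - np.linalg.norm(z_t - z_anchor))

def ordinal_constraint_holds(z_j, z_k, z_anchor, p_j, p_k):
    """Eq. (2): the box with the higher preference (IoU) must embed closer to the anchor."""
    return (p_j > p_k) == (np.linalg.norm(z_j - z_anchor) < np.linalg.norm(z_k - z_anchor))

When the constraint in Eq. (2) holds over the agent's search path, the embedding-based reward agrees with the IoU-based reward while requiring no ground-truth box at test time.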
[0068] The anchor g in (2) is not restricted to an embedding coming from the same image. For example, it could be replaced by the prototype embedding of the exemplary set E, c = (1/|E|) Σ_{i∈E} b_i, where b_i is the embedding of the image I_i cropped by its ground-truth box g_i. If images from multiple classes are available, the prototype can further be made class-dependent, or clustering-based. We find that the prototype-based embedding as the anchor may have better generalization performance than g in some experiments. This choice also makes our approach amenable to few-shot training, when only a small subset of training images per class is annotated. The ordinal reward can be viewed as meta information. Moreover, even if the exemplary set during test time contains only the cropped object, test-time policy adaptation is still feasible without image-box pairs.

[0069] We assume that during training, the exemplary set E contains both images I and boxes g. We adopt a tailored data augmentation scheme - box perturbation - in which C is constructed by sampling box pairs around g. We have found that using an IoU-based partition scheme is more effective than random sampling. This can be viewed as a procedure to enhance the robustness of the neural network against box perturbations and to protect the special purpose of its usage: distinguishing reward increases from decreases. Pre-training with data augmentation can also make the downstream task of policy network training more efficient.

[0070] In this disclosure, we define p as the IoU of box b with the ground-truth box g, i.e., p = IoU(b, g). We learn an embedding space consistent with the local ordinal constraints specified on the image pairs obtained via data augmentation.

[0071] We choose to optimize the triplet loss for learning the desired embedding, where f_a is the "anchor" embedding and f_p, f_n are the "positive" and "negative" embeddings with larger and smaller IoUs with the ground truth box g, respectively. Note that a good representation for defining the reward may not necessarily be a good state representation at the same time - it may not contain enough information to guide the agent toward taking the right actions. Prior work suggests that adding a projection head between the representation and the contrastive loss substantially improves the quality of the learned representation.

[0072] We find the use of a projection head is crucial in balancing the two objectives in our task. The network architecture is shown in FIG. 12, in which an MLP projection head is attached after a Region of Interest (RoI) encoder. Given the image and the RoI, the RoI encoder extracts the RoI feature s that will be used as the state representation for localization. The projection head learns the ordinal embedding b for computing the reward. The RoI alignment module handles boxes of different sizes. Under a joint loss function combining the reconstruction loss and the triplet loss, the state representation s can indirectly benefit from the ordinal supervision on b, while it still must render satisfactory image reconstruction results. Besides the autoencoder scheme, the RoI encoder can use a pre-trained network as well.

[0073] The localization is formulated as a Markov Decision Process (MDP) with the raw pixels of each image as the Environment. As discussed herein, we use the ordinal embedding rather than the bounding box coordinates to compute the improvement that the agent makes, and the Reward for an agent moving from state s to s' takes the form of the sign of the decrease in embedding distance to the prototype,

R(s, s') = sign(d(b, a) - d(b', a)),   (4)

where a is the prototype embedding and b, b' are the ordinal embeddings of the boxes at states s and s'.
Ordinal embeddings are extracted from the image regions surrounded by the ground-truth boxes in E by the pre-trained RoI encoder and projection head, and the prototype is computed as their mean vector. Furthermore, we use policy gradient with a recurrent neural network (RNN) (Mnih et al., 2014), rather than a Deep Q-Network with a vector of history actions and states. Starting from the whole image pixels as input, the agent is trained to select the actions that transform the current bounding box at each step, by maximizing the total discounted reward. The agent takes the pooled feature from the current box as the State, while it also maintains an internal state within the RNN, which encodes information from history observations. The Action set is defined with discrete actions facilitating a top-down search, including five scaling and eight translation transformations as in prior work, plus one stay action.

[0074] Test-time Adaptation. During test time, the agent has the option of further updating the policy network using the reward received from (4), with a as the prototype of the test exemplary set E_test. To match test conditions, the training batch is split into two groups, and a is computed on a small subset that does not overlap with the training images to be localized, while during test adaptation, a becomes the prototype of the exemplary set. The full algorithm is outlined in Algorithm 1, which is illustratively shown in FIG. 19.
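In the spirit of paragraphs [0073]-[0074] and Algorithm 1 (FIG. 19), the following is a minimal, self-contained PyTorch sketch of a REINFORCE-style update in which a GRU-based controller selects discrete box transformations and is rewarded +1/-1 according to whether the ordinal embedding of the box moved closer to the prototype. This is a sketch under stated assumptions, not the patented Algorithm 1: the environment callback, network sizes, and the sign form of the reward are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

N_ACTIONS = 14  # assumed: five scaling + eight translation transformations + one stay action

class PolicyNet(nn.Module):
    """Toy controller: a GRU over state features with an action head returning log-probabilities."""
    def __init__(self, state_dim=64, hidden=128):
        super().__init__()
        self.rnn = nn.GRUCell(state_dim, hidden)  # internal state summarizing the action history
        self.head = nn.Linear(hidden, N_ACTIONS)

    def forward(self, state, h):
        h = self.rnn(state, h)
        return F.log_softmax(self.head(h), dim=-1), h

def run_episode(policy, opt, env_step, init_state, init_box_emb, prototype,
                steps=10, gamma=0.9):
    """One REINFORCE update on a single image environment.

    env_step(action) is assumed to apply the chosen box transformation and return
    (next_state_features, next_box_embedding). The reward is +1/-1 depending on whether
    the box embedding moved closer to the exemplary-set prototype (cf. Eq. (4))."""
    h = torch.zeros(1, policy.rnn.hidden_size)
    state, box_emb = init_state, init_box_emb
    log_probs, rewards = [], []
    for _ in range(steps):
        logp, h = policy(state, h)
        action = torch.multinomial(logp.exp(), 1).item()  # sample an action from the policy
        log_probs.append(logp[0, action])
        next_state, next_box_emb = env_step(action)
        closer = torch.norm(next_box_emb - prototype) < torch.norm(box_emb - prototype)
        rewards.append(1.0 if closer else -1.0)
        state, box_emb = next_state, next_box_emb
    # discounted returns, then the standard REINFORCE objective
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

During test-time adaptation, the same update can be run on unlabeled test images with the prototype recomputed from the test exemplary set E_test, since this reward requires no ground-truth boxes on the images being localized.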
[0075] The transferability of our reward signal from training to testing crucially relies on the generalization ability of the learned ordinal representation. If the ordinal preference does not hold in the test domain, the proposed test-time policy adaptation scheme will not work. By adapting the representation with self-supervised objectives, this issue might be remedied. Although our approach does not directly handle the special cases of multiple queried objects or no queried object within the image environment, it can easily be modified to accomplish these tasks.

[0076] We evaluate our approach with several tasks on the MNIST and CUB birds datasets. For MNIST, we use three convolutional layers, each followed by a ReLU activation, as the image encoder, and the same but mirrored structure as the decoder, to learn an autoencoder. We then attach an RoI align layer followed by two fully connected layers as the projection head for ordinal reward learning. For the CUB dataset, we adopt the layers before conv5_3 of a VGG16 pretrained on ImageNet as the encoder. The projection head has the same structure as before but with more units in each fully connected layer. To evaluate the learned ordinal structure, we use OrdAcc, defined as the percentage of images where the order of a pair of perturbed boxes is correctly predicted. We use the Correct Localization (CorLoc) metric, defined as the percentage of images correctly localized according to the criterion area(b_p ∩ g)/area(b_p ∪ g) ≥ 0.5, where b_p is the predicted box and g is the ground-truth box.

[0077] We analyze the effectiveness of using the ordinal embedding in terms of representation and reward on Cluttered MNIST. Each 28 × 28 digit is randomly placed on an 84 × 84 cluttered background. We compare embeddings trained with only the autoencoder and embeddings jointly trained with the ordinal projection head. In addition, we compare the IoU based reward with our embedding based reward. The agent is trained on a specific number of digit 4 images and tested on all images in the test set. The results under different train set sizes are shown in FIG. 13(A) and FIG. 13(B), which are datasets that illustrate: FIG. 13(A) random sampling and anchor sampling on OrdAcc (%); and FIG. 13(B) a comparison with and without sign for the IoU reward on CorLoc (%), according to aspects of the present disclosure. With the ordinal embedding present in both representation and reward ("AE+Ord+Embed"), the model performance is consistently better than in the other settings, especially when the train set size is small.

[0078] FIG. 14(A) and FIG. 14(B) are plots that illustrate a comparison under different train set sizes according to aspects of the present disclosure.

[0079] To learn the ordinal reward efficiently, we conduct experiments to compare sampling strategies for generating augmented bounding box pairs. The first strategy is random sampling, where the pair of boxes is generated completely at random. The other is sampling by anchor, where we first generate dense anchors with varying scales, then divide them into 10 groups according to their IoU with the ground-truth box, each group covering an interval of 0.1. Sampling is performed first at the group level, i.e., two groups are sampled; then one box is sampled from each of the two groups. Thus, the sampled boxes can cover more cases than random sampling. The resulting OrdAcc of the two strategies is shown in FIG. 13(A). With anchor sampling, we can learn a better ordinal embedding.

[0080] Reward {+1, -1}, sign or no sign. For comparison, Eq. 1 is used as the reward to train the agent. However, from FIG. 14(A) and FIG. 14(B) it can be seen that there is a large gap between this IoU reward and our Embed reward, especially when the train set size is small. This is somewhat counter-intuitive, as the ordinal reward only approximates the property of IoU in the embedding space and thus should be less accurate than IoU as a reward. To analyze this, we remove the sign operation in Eq. 1 and train the models on digit 4 images. As shown in FIG. 13(B), with the sign operation the localization accuracies increase by 3.4% on digit 4 and 6.2% on the other-digits test set.

[0081] FIG. 15(A) and FIG. 15(B) are datasets that illustrate: FIG. 15(A) CorLoc (%); and FIG. 15(B) a comparison of four training strategies according to the anchor used, according to aspects of the present disclosure.

[0082] As opposed to using a Deep Q-Network to train the agent, we apply policy gradient to optimize it. In addition, we adopt a top-down search strategy through an RNN, whereas prior works used a vector of history actions to encode memory. We evaluate these design choices with models trained and tested on digit 4, or tested on the other digits, as FIG. 15(A) shows. As can be seen, the agent achieves the best performance with "PG+RNN", while with history action vectors the accuracy decreases when the agent is trained by DQN.

[0083] We conducted experiments to evaluate the effects of different training strategies on ordinal reward learning and localization, using a subset of the CUB dataset where the train and test sets contain 15 and 5 different fine-grained classes respectively, resulting in 896 images for training and 294 for testing. FIG. 15(B) shows the OrdAcc and CorLoc of four settings: "Self", where both embedding pretraining and agent training use the ground truth from the instance itself as the anchor; "Proto", where both use the prototype of a subgroup containing this instance within a batch; "Shuffle self", where both use the ground truth from another instance; and "Shuffle proto", where both use the prototype of a subgroup excluding this instance within a batch. The RoI encoder is trained with only the triplet loss; thus, the whole train set can be seen as a single class.
From the results, while the OrdAcc for "Shuffle proto" is lower than for the others, its CorLoc is the best by a large margin. This phenomenon suggests that this training strategy brings compactness to the train set, constructing an ordinal structure around the cluster. Note that the OrdAcc is computed using the instance as the anchor.

[0084] As will now be appreciated by those skilled in the art, we disclose an ordinal representation learning based reward for training a localization agent to search for a queried object of interest in potentially new environments. In particular, we use a small exemplary set as a guidance signal for delivering learning objectives, which can avoid learning ambiguity. Meanwhile, we use the test image environments to inform the agent about domain shifts without requiring image-box pairs during test time. Our algorithm takes raw image pixels as input, with no need to propose candidate boxes.

[0085] Our approach is based on feature similarity with the exemplary set, which is fundamentally different from bounding-box regression and bounding-box RL approaches. In order to generalize to various object classes and background scenarios, previous approaches have to be trained as class-agnostic models on large datasets covering foreground and background variations. In contrast, we allow specialized agents to be trained, with policy adaptation ability during test time.

[0086] Instead of jointly training the localization model with the classification model, we explore learning box annotations from image class labels, in a similar spirit to weakly-supervised learning. Given an image label from a classification model, our localization model can identify the box region with enhanced interpretability. Empirically, we show that our approach works in the transfer learning setting from a single data-abundant source task to data-scarce test tasks. In addition, our approach also applies to the few-shot learning setting, where limited annotations across a number of tasks are available during training. Future work includes cross-modality query or zero-shot query based on attributes, and curriculum learning with a designed sequence of targets in the exemplary set.

[0087] Annotation collection plays an important role in building machine learning systems. It is one task that could benefit greatly from automation, especially in cost-sensitive applications. We aim to reduce human labeling efforts in terms of the number of annotated samples per class, the number of annotated classes, and the level of accuracy required. Our approach enables objective evaluation and iterative refinement of data quality.

[0088] FIG. 16 is a dataset that illustrates performance on different digits according to the anchor used according to aspects of the present disclosure.

[0089] FIG. 17 is a plot showing adaptation before, after, and with finetuning, according to the anchor used, according to aspects of the present disclosure.

[0090] FIG. 18(A) and FIG. 18(B) are datasets that illustrate: FIG. 18(A) performance from loosely to tightly annotated bounding boxes; and FIG. 18(B) performance when transferring to other backgrounds, according to aspects of the present disclosure.

[0091] FIG. 19 is a listing of Algorithm 1 for training the reward and localization agent according to aspects of the present disclosure.

[0092] At this point, while we have presented this disclosure using some specific examples, those skilled in the art will recognize that our teachings are not so limited.
Accordingly, this disclosure should be only limited by the scope of the claims attached hereto.