


Title:
IMAGE RETRIEVAL SYSTEM
Document Type and Number:
WIPO Patent Application WO/2021/044146
Kind Code:
A1
Abstract:
In some examples, it is disclosed a method for generating an image retrieval system configured to rank a plurality of images of cargo from a dataset of images, in response to a query corresponding to an image of cargo of interest generated using penetrating radiation. The method may involve obtaining a plurality of annotated training images comprising cargo, each of the training images being associated with an annotation indicating a type of the cargo in the training image, and training the image retrieval system by applying a deep learning algorithm to the obtained annotated training images. The training may involve applying, to the annotated training images, a feature extraction convolutional neural network, and applying an aggregated generalized mean pooling layer associated with image spatial information.

Inventors:
MARFOQ OTHMANE (FR)
MOSHAYEDI ALIREZA (FR)
GADI NAJIB (FR)
Application Number:
PCT/GB2020/052107
Publication Date:
March 11, 2021
Filing Date:
September 03, 2020
Assignee:
SMITHS HEIMANN SAS (FR)
VIENNE AYMERIC (GB)
International Classes:
G06F16/538; G01V5/00; G06K9/00; G06K9/46; G06K9/62; G06N3/04; G06N3/08
Foreign References:
US20130101172A12013-04-25
Other References:
HEEJAE JUN ET AL: "Combination of Multiple Global Descriptors for Image Retrieval", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 26 March 2019 (2019-03-26), XP081450115
HANG ZHANG ET AL: "Deep TEN: Texture Encoding Network", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 8 December 2016 (2016-12-08), XP080737935, DOI: 10.1109/CVPR.2017.309
Attorney, Agent or Firm:
MATHYS & SQUIRE (GB)
Claims:
CLAIMS

1. A method for generating an image retrieval system configured to rank a plurality of images of cargo from a dataset of images, in response to a query corresponding to an image of cargo of interest generated using penetrating radiation, the method comprising:
obtaining a plurality of annotated training images comprising cargo, each of the training images being associated with an annotation indicating a type of the cargo in the training image; and
training the image retrieval system by applying a deep learning algorithm to the obtained annotated training images, the training comprising:
applying, to the annotated training images, a feature extraction convolutional neural network, CNN, comprising a plurality of convolutional layers to generate a tensor X of image features, and
applying, to the generated tensor X, an aggregated generalized mean, AgGeM, pooling layer associated with image spatial information, the applying comprising:
applying a generalized mean pooling layer, to generate a plurality of embedding vectors f^(p) such that:

f^(p) = [f_1^(p), ..., f_K^(p)], with f_k^(p) = ((1/|X_k|) Σ_{x ∈ X_k} x^p)^(1/p), for each p ∈ P,

with:
P is a set of positive integers p representing pooling parameters of the generalized mean pooling layer,
the tensor X = (X_k)_{k ∈ {1, ..., K}} having H × W activations for a feature map k ∈ {1, ..., K}, the feature map resulting from the application of the feature extraction CNN on a training image, with H and W being respectively a height and a width of each of the feature maps,
K is a number of feature maps in a last convolutional layer of the feature extraction CNN,
x is a feature from the generated tensor X, and
|X_k| is a cardinality of X_k of the tensor X, and
aggregating the generated plurality of embedding vectors by applying weights α of a scoring layer associated with an attention mechanism to the plurality of embedding vectors for each pooling parameter p belonging to P, the weights α and a parameter θ of the scoring layer being learnable by the image retrieval system to minimize an associated loss function,
wherein the aggregating is configured to generate a feature vector f such that:

f = {f_1, ..., f_K}, with f_k = Σ_{p ∈ P} α^(p) f_k^(p).

2. The method of claim 1, wherein training the image retrieval system further comprises:
applying, to the generated tensor X, an orderless feature pooling layer associated with image texture information,
optionally wherein applying the orderless feature pooling layer comprises using a Gaussian mixture model, GMM, to generate orderless image descriptors of the image features, the applying comprising:
mapping the image features x_i, i ∈ {1, ..., d}, of the tensor X = (X_k)_{k ∈ {1, ..., K}} to a group of clusters of a Gaussian Mixture Model, with diagonal variances Σ_k such that:

Σ_k = (1/α_k) I_d,

with I_d being the d × d identity matrix, d = H × W being a dimension of the cluster k of (X_k)_{k ∈ {1, ..., K}}, and α_k being a smoothing factor that represents the inverse of the variance Σ_k in the k-th cluster, α_k being learnable by the image retrieval system to minimize the loss function,
applying a soft assignment algorithm by assigning weights ā_k associated with the feature x_i, i ∈ {1, ..., d}, to the cluster k of centre c_k, such that:

ā_k(x_i) = exp(−α_k ||x_i − c_k||²) / Σ_{k'=1}^{M} exp(−α_{k'} ||x_i − c_{k'}||²),

with c_k being a vector representing the centre of the k-th cluster, c_k being learnable by the image retrieval system to minimize the loss function, c_{k'} being the same as c_k for index k = k', k' ranging from 1 to M, M being a hyper-parameter representing a number of clusters to include in the group of the plurality of clusters of the Gaussian Mixture Model, and which can be chosen by an operator training the system, and
generating a matrix V of descriptors, such that:

V(j, k) = Σ_{i=1}^{d} ā_k(x_i) (x_i(j) − c_k(j)).

3. The method of claim 2, wherein applying the aggregated generalized mean pooling layer and applying the orderless feature pooling layer are performed in parallel.

4. The method of claim 3, wherein the training further comprises applying a bilinear model layer to a combined output associated with the aggregated generalized mean pooling layer and the orderless feature pooling layer, wherein the bilinear model layer is associated with a bilinear function Y^{ts} such that:

Y^{ts} = Σ_{i=1}^{I} Σ_{j=1}^{J} ω_{ij} a_i^t b_j^s,

with a^t being a vector with a dimension I and associated with an output of the orderless feature pooling layer, b^s being a vector with a dimension J and associated with an output of the aggregated generalized mean pooling layer, and ω_{ij} being a weight configured to balance interaction between a^t and b^s, ω_{ij} being learnable by the image retrieval system to minimize the loss function.

5. The method of claim 4, wherein the vector a^t is obtained by applying an ℓ2 normalization layer and/or a fully connected layer to the matrix V of descriptors.

6. The method of claim 4 or claim 5, wherein the vector b^s is obtained by applying a normalization layer, such as an ℓ2 normalization layer and/or a batch normalization layer, and/or a fully connected layer to the feature vector f.

7. The method of any of claims 3 to 6, further comprising applying, to a combined output associated with the aggregated generalized mean pooling layer and the orderless feature pooling layer, and in order to obtain a final image representation of an image, at least one of: at least one normalization layer, such as an ℓ2 normalization layer, and/or a fully connected layer, optionally wherein generating the image retrieval system further comprises obtaining the final image representation of the image.

8. The method of any of the preceding claims, wherein each of the training images is further associated with a code of the Harmonised Commodity Description and Coding System, HS, the HS comprising hierarchical sections and chapters corresponding to the type of the cargo in the training image, and wherein training the image retrieval system further comprises taking into account, in the loss function associated with the image retrieval system, the hierarchical sections and chapters of the HS.

9. The method of claim 8, wherein the loss function L of the image retrieval system is such that:

L = (1/N) Σ_{n=1}^{N} [ max(0, ||ψ_θ(I_n^anchor) − ψ_θ(I_n^similar)||₂² − ||ψ_θ(I_n^anchor) − ψ_θ(I_n^different)||₂² + β) + λ · max(0, ||γ_h(HS_n^anchor) − γ_h(HS_n^similar)||₂² − ||γ_h(HS_n^anchor) − γ_h(HS_n^different)||₂² + δ) ],

with I^anchor, I^similar and I^different being three images, I^anchor, I^similar and I^different being such that I^similar comprises cargo similar to the cargo of an anchor image I^anchor, and I^different comprises cargo which is different from the cargo of the anchor image I^anchor,
||·||₂ being the Euclidean ℓ2 norm in ℝ^d, and d is a dimension of an image vectorial representation, and d may be chosen by an operator training the system,
N being the number of images in the dataset of images,
β being a hyper-parameter that controls the margin between similar images and different images, and which can be chosen by an operator training the system,
HS^anchor being an HS-code of a training image corresponding to a query,
HS^similar being an HS-code of a training image sharing a same hierarchical section and/or hierarchical chapter with the training image corresponding to the query,
HS^different being an HS-code of a training image having a different hierarchical section and/or hierarchical chapter from the training image corresponding to the query,
γ_h being a parametric function associated with the image retrieval system, h being a parameter learnable by the image retrieval system to minimize the loss function L,
λ being a parameter controlling an importance given to the hierarchical structure of the HS-codes during the training, and
δ being a hyper-parameter that controls a margin between similar and different HS-codes, and which can be chosen by the operator training the system.

10. The method of any of the preceding claims, wherein training the image retrieval system further comprises applying a Hardness-aware Deep Metric Learning, HDML, algorithm, or wherein the feature extraction CNN comprises at least one of a CNN named AlexNet, VGG and ResNet, or wherein the feature extraction CNN is fully convolutional.

11. The method of any of the preceding claims, wherein the scoring layer comprises two convolutions and a softplus activation, optionally wherein a size of a last of the two convolutions is 1x1, or wherein the weights α have values in the range [0,1].

12. The method of any of the preceding claims, wherein each of the training images is further associated with textual information corresponding to the type of cargo in the training image, optionally wherein the textual information comprises at least one of: a report describing the cargo and a report describing parameters of an inspection of the cargo.

13. The method of any of the preceding claims, wherein obtaining the annotated training images comprises: retrieving the annotated training images from an existing database of images; and/or generating the annotated training images, comprising: irradiating, using penetrating radiation, one or more containers comprising cargo, and detecting radiation from the irradiated one or more containers, optionally wherein the irradiating and/or the detecting are performed using one or more devices configured to inspect containers.

14. The method of any of the preceding claims, wherein the image retrieval system is configured to detect at least one image in the dataset of images comprising a cargo most similar to the cargo of interest in the inspection image, a similarity between cargos being based on a Euclidean distance between the cargos, the Euclidean distance being taken into account in a loss function of the image retrieval system applied to the training images.

15. The method according to any of the preceding claims, wherein the method is performed at a computer system separate, optionally remote, from a device configured to inspect containers.

16. A method comprising: obtaining an inspection image of cargo of interest generated using penetrating radiation, the inspection image corresponding to a query; applying, to the inspection image, an image retrieval system generated by the method according to any of the preceding claims; and ranking a plurality of images of cargo from a dataset of images, based on the applying.

17. The method of the preceding claim, wherein ranking the plurality of images comprises outputting a ranked list of images comprising cargo corresponding to the cargo of interest in the inspection image, optionally wherein the ranked list is a subset of the dataset of images, optionally wherein the dataset comprises at least one of: one or more training images and a plurality of inspection images.

18. The method of claim 16 or 17, wherein ranking the plurality of images further comprises outputting an at least partial code of the Harmonised Commodity Description and Coding System, HS, the HS comprising hierarchical sections and chapters corresponding to a type of cargo in each of the plurality of ranked images.

19. The method of any of claims 16 to 18, wherein ranking the plurality of images further comprises outputting at least partial textual information corresponding to the type of cargo in each of the plurality of ranked images, optionally wherein the textual information comprises at least one of: a report describing the cargo and a report describing parameters of an inspection of the cargo.

20. A method of producing a device configured to rank a plurality of images of cargo from a dataset of images generated using penetrating radiation, the method comprising: obtaining an image retrieval system generated by the method according to any one of claims 1 to 15; and storing the obtained image retrieval system in a memory of the device.

21. The method according to the preceding claim, wherein the storing comprises transmitting the generated image retrieval system to the device via a network, the device receiving and storing the image retrieval system, or wherein the image retrieval system is generated, stored and/or transmitted in the form of one or more of: a data representation of the image retrieval system; executable code for applying the image retrieval system to one or more inspection images.

22. The method according to any of the preceding claims, wherein the cargo of interest comprises at least one of: a threat, such as a weapon and/or an explosive material and/or a radioactive material; and/or a contraband product, such as drugs and/or cigarettes.

23. A device configured to rank a plurality of images of cargo from a dataset of images generated using penetrating radiation, the device comprising a memory storing an image retrieval system generated by the method according to any one of claims 1 to 15.

24. The device of the preceding claim, further comprising a processor, and wherein the memory of the device further comprises instructions which, when executed by the processor, enable the processor to perform the method of any one of claims 16 to 19.

25. A computer program or a computer program product comprising instructions which, when executed by a processor, enable the processor to perform the method according to any one of claims 1 to 22 or to provide the device according to claim 23 or claim 24.

Description:
Image retrieval system

Field of the invention

The invention relates but is not limited to generating an image retrieval system configured to rank a plurality of images of cargo from a dataset of images, in response to a query corresponding to an image of cargo of interest generated using penetrating radiation. The invention also relates but is not limited to ranking a plurality of images of cargo from a dataset of images, based on an inspection image corresponding to a query. The invention also relates but is not limited to producing a device configured to rank a plurality of images of cargo from a dataset of images generated using penetrating radiation. The invention also relates but is not limited to corresponding devices and computer programs or computer program products.

Background

Inspection images of containers containing cargo may be generated using penetrating radiation. In some examples, a user may want to detect objects corresponding to a cargo of interest on the inspection images. Detection of such objects may be difficult. In some cases, the object may not be detected at all. In cases where the detection is not clear from the inspection images, the user may inspect the container manually, which may be time consuming for the user.

Summary of the Invention

Aspects and embodiments of the invention are set out in the appended claims. These and other aspects and embodiments of the invention are also described herein.

Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. In particular, method aspects may be applied to device and computer program aspects, and vice versa.

Furthermore, features implemented in hardware may generally be implemented in software, and vice versa. Any reference to software and hardware features herein should be construed accordingly.

Brief Description of Drawings

Embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:

Figure 1 shows a flow chart illustrating an example method according to the disclosure;

Figure 2 schematically illustrates an example system and an example device configured to implement the example method of Figure 1;

Figure 3 illustrates an example inspection image according to the disclosure;

Figure 4 shows a flow chart illustrating a detail of the example method of Figure 1;

Figure 5 shows a flow chart illustrating a detail of the example method of Figure 1;

Figure 6 schematically illustrates an example image retrieval system configured to implement e.g. the example method of Figure 1;

Figure 7 shows a flow chart illustrating a detail of the example method of Figure 1;

Figure 8 shows a flow chart illustrating a detail of the example method of Figure 1;

Figure 9 shows a flow chart illustrating another example method according to the disclosure; and

Figure 10 shows a flow chart illustrating another example method according to the disclosure.

In the figures, similar elements bear identical numerical references.

Detailed Description of Example Embodiments

The disclosure discloses an example method for generating an image retrieval system configured to rank a plurality of images of cargo from a dataset of images. The ranking is performed in response to a query corresponding to an image of cargo of interest generated using penetrating radiation (e.g. X-rays, but other penetrating radiation is envisaged). The cargo of interest may be any type of cargo, such as food, industrial products, drugs or cigarettes, as non-limiting examples.

The disclosure also discloses an example method for ranking a plurality of images of cargo from a dataset of images, based on an inspection image corresponding to a query. The disclosure also discloses an example method for producing a device configured to rank a plurality of images of cargo from a dataset of images generated using penetrating radiation.

The disclosure also discloses corresponding devices and computer programs or computer program products.

The image retrieval system may enable an operator of an inspection system to benefit from an existing dataset of images and/or existing textual information (such as expert reports) and/or codes associated with the ranked images. The image retrieval system may enable enhanced inspection of the cargo of interest.

The image retrieval system may enable the operator of the inspection system to benefit from automatic outputting of textual information (such as cargo description reports or scanning process reports) and/or codes associated with the cargo of interest.

Figure 1 shows a flow chart illustrating an example method 100 according to the disclosure for generating an image retrieval system 1 illustrated in Figure 6. Figure 2 shows a device 15 configurable by the method 100 to rank a plurality of images of cargo from a dataset 20 of images generated using penetrating radiation, in response to a query corresponding to an inspection image 1000 (shown in Figures 3 and 6) comprising cargo 11 of interest generated using penetrating radiation. The inspection image 1000 may be generated using penetrating radiation, e.g. by the device 15. The method 100 of Figure 1 comprises in overview: obtaining, at S1, a plurality of annotated training images 101 (shown in Figures 3 and 6) comprising cargo 110, each of the training images 101 being associated with an annotation indicating a type of the cargo 110 in the training image 101; and training, at S2, the image retrieval system 1 by applying a deep learning algorithm to the obtained annotated training images 101.

As described in more detail later in reference to Figure 9 showing a method 200, configuration of the device 15 involves storing, e.g. at S32, the image retrieval system 1 at the device 15. In some examples, the image retrieval system 1 may be obtained at S31 (e.g. by generating the image retrieval system 1 as in the method 100 of Figure 1). In some examples, obtaining the image retrieval system 1 at S31 may comprise receiving the image retrieval system 1 from another data source.

As described above, the image retrieval system 1 is derived from the training images 101 using the deep learning algorithm, and is arranged to produce an output corresponding to the cargo 11 of interest in the inspection image 1000. In some examples and as described in more detail below, the output may correspond to ranking a plurality of images of cargo from the dataset 20 of images. The dataset 20 may comprise at least one of: one or more training images 101 and a plurality of inspection images 1000.

The image retrieval system 1 is arranged to produce the output more easily, after it is stored in a memory 151 of the device 15 (as shown in Figure 2), even though the process 100 for deriving the image retrieval system 1 from the training images 101 may be computationally intensive.

After it is configured, the device 15 may provide an accurate output corresponding to the cargo 11 by applying the image retrieval system 1 to the inspection image 1000. The ranking process is illustrated (as process 300) in Figure 10 (described later).

Computer system and detection device

Figure 2 schematically illustrates an example computer system 10 and the device 15 configured to implement, at least partly, the example method 100 of Figure 1. In particular, in a preferred embodiment, the computer system 10 executes the deep learning algorithm to generate the image retrieval system 1 to be stored on the device 15. Although a single device 15 is shown for clarity, the computer system 10 may communicate and interact with multiple such devices. The training images 101 may themselves be obtained using images acquired using the device 15 and/or using other, similar devices and/or using other sensors and data sources.

In some examples, as illustrated in Figure 4, obtaining at S1 the training images 101 may comprise retrieving at S11 the annotated training images from an existing database of images (such as the dataset 20, in a non-limiting example). Alternatively or additionally, obtaining at S1 the training images 101 may comprise generating at S12 the annotated training images 101. In some examples, the generating at S12 may comprise: irradiating, using penetrating radiation, one or more containers comprising cargo, and detecting radiation from the irradiated one or more containers. In some examples the irradiating and/or the detecting are performed using one or more devices configured to inspect containers.

In some examples, the training images 101 may have been obtained in a different environment, e.g. using a similar device (or equivalent set of sensors) installed in a different (but preferably similar) environment, or in a controlled test configuration in a laboratory environment.

The computer system 10 of Figure 2 comprises a memory 121, a processor 12 and a communications interface 13.

The system 10 may be configured to communicate with one or more devices 15, via the interface 13 and a link 30 (e.g. Wi-Fi connectivity, but other types of connectivity may be envisaged). The memory 121 is configured to store, at least partly, data, for example for use by the processor 12. In some examples the data stored on the memory 121 may comprise the dataset 20 and/or data such as the training images 101 (and the data used to generate the training images 101) and/or the deep learning algorithm. In some examples, the processor 12 of the system 10 may be configured to perform, at least partly, at least some of the steps of the method 100 of Figure 1 and/or the method 200 of Figure 9 and/or the method 300 of Figure 10.

The detection device 15 of Figure 2 comprises a memory 151, a processor 152 and a communications interface 153 (e.g. Wi-Fi connectivity, but other types of connectivity may be envisaged) allowing connection to the interface 13 via the link 30.

In a non-limiting example, the device 15 may also comprise an apparatus 3 acting as an inspection system, as described in greater detail later. The apparatus 3 may be integrated into the device 15 or connected to other parts of the device 15 by wired or wireless connection.

In some examples, as illustrated in Figure 2, the disclosure may be applied for inspection of a real container 4 containing the cargo 11 of interest. Alternatively or additionally, at least some of the methods of the disclosure may comprise obtaining the inspection image 1000 by irradiating, using penetrating radiation, one or more real containers 4 configured to contain cargo, and detecting radiation from the irradiated one or more real containers 4.

In other words the apparatus 3 may be used to acquire the plurality of training images 101 and/or to acquire the inspection image 1000.

In some examples, the processor 152 of the device 15 may be configured to perform, at least partly, at least some of the steps of the method 100 of Figure 1 and/or the method 200 of Figure 9 and/or the method 300 of Figure 10.

Generating the image retrieval system

Referring back to Figure 1, the image retrieval system 1 is built by applying a deep learning algorithm to the training images 101. Any suitable deep learning algorithm may be used for building the image retrieval system 1. For example, approaches based on a convolutional deep learning algorithm may be used. The image retrieval system 1 is generated based on the training images 101 obtained at S1.

The learning process is typically computationally intensive and may involve large volumes of training images 101 (such as several thousands or tens of thousands of images). In some examples, the processor 12 of the system 10 may comprise greater computational power and memory resources than the processor 152 of the device 15. The image retrieval system 1 generation is therefore performed, at least partly, remotely from the device 15, at the computer system 10. In some examples, at least steps S1 and/or S2 of the method 100 are performed by the processor 12 of the computer system 10. However, if sufficient processing power is available locally then the image retrieval system 1 learning could be performed (at least partly) by the processor 152 of the device 15.

The deep learning step involves inferring image features based on the training images 101 and encoding the detected features in the form of the image retrieval system 1.

The training images 101 are annotated, and each of the training images 101 is associated with an annotation indicating a type of the cargo 110 in the training image 101. In other words, in the training images 101, the nature of the cargo 110 is known. In some examples, a domain specialist may manually annotate the training images 101 with ground truth annotation (e.g. the type of the cargo for the image).

In some examples, the generated image retrieval system 1 is configured to detect at least one image in the dataset 20 of images, the at least one image comprising a cargo most similar to the cargo 11 of interest in the inspection image 1000. In some examples, a plurality of images of the dataset 20 is detected and ranked based on the similarity of their cargo with the cargo 11 of interest (e.g. the plurality may be ranked from a most similar to a least similar, or from a least similar to a most similar, as non-limiting examples).

In the disclosure, a similarity between cargos may be based on a Euclidean distance between features of the cargos.

As described in greater detail below, the Euclidean distance may be taken into account in a loss function L associated with the image retrieval system 1 applied to the training images 101.
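Ranking by Euclidean distance can be illustrated with a minimal sketch, assuming images have already been mapped to embedding vectors (the function and variable names below are invented for illustration and are not from the disclosure):

```python
import numpy as np

def rank_by_distance(query_emb, dataset_embs):
    """Rank dataset images by Euclidean distance to the query embedding.

    query_emb: (d,) vector; dataset_embs: (N, d) matrix.
    Returns indices of dataset images, most similar (smallest distance) first.
    """
    dists = np.linalg.norm(dataset_embs - query_emb, axis=1)
    return np.argsort(dists)

# Toy example with d = 2: image 2 lies closest to the query.
query = np.array([1.0, 0.0])
dataset = np.array([[0.0, 5.0], [4.0, 0.0], [1.1, 0.1]])
print(rank_by_distance(query, dataset))  # [2 1 0]
```

The ranked index list corresponds to the ranked list of images output by the image retrieval system.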

As also described in greater detail below and shown in Figure 2, the features of the cargos may be derived from one or more compact vectorial representations 21 of images (images such as the training images 101 and/or the inspection image 1000). In some examples, the one or more compact vectorial representations of the images may comprise at least one of a feature vector f, a matrix V of descriptors and a final image representation, FIR. In some examples, the one or more compact vectorial representations 21 of the images may be stored in the memory 121 of the system 10. In other words, during the training performed at S2, the image retrieval system 1 is configured to learn a metric problem, so that the Euclidean distance captures the similarity between features of the cargos. During the training performed at S2, the image retrieval system 1 is associated with a parametric function ψ_θ (ψ_θ(I) ∈ ℝ^d), and the training performed at S2 enables the image retrieval system 1 to find a learnable parameter θ that minimizes the loss function L such that:

L = (1/N) Σ_{n=1}^{N} max(0, ||ψ_θ(I_n^anchor) − ψ_θ(I_n^similar)||₂² − ||ψ_θ(I_n^anchor) − ψ_θ(I_n^different)||₂² + β),

with I^anchor, I^similar and I^different being three images, I^anchor, I^similar and I^different being such that I^similar comprises cargo similar to the cargo of an anchor image I^anchor, and I^different comprises cargo which is different from the cargo of the anchor image I^anchor,

||·||₂ being the Euclidean ℓ2 norm in ℝ^d, and d is a dimension of an image vectorial representation, and d may be chosen by an operator training the system,

N being the number of images in the dataset of images, and β being a hyper-parameter that controls the margin between similar images and different images, and which can be chosen by the operator training the system.
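The triplet-style objective described above can be sketched as follows. This is a minimal NumPy illustration of the per-trio hinge term with margin β, not the patent's implementation; names are invented:

```python
import numpy as np

def triplet_loss(anchor, similar, different, beta=0.5):
    """Hinge (triplet) loss for one (anchor, similar, different) trio of
    embeddings, using squared Euclidean distances and margin beta."""
    d_pos = np.sum((anchor - similar) ** 2)    # distance to similar cargo
    d_neg = np.sum((anchor - different) ** 2)  # distance to different cargo
    return max(0.0, d_pos - d_neg + beta)

# The loss L over a dataset is then the mean of the per-trio losses.
a = np.array([0.0, 0.0])
s = np.array([0.1, 0.0])   # close to the anchor
d = np.array([2.0, 0.0])   # far from the anchor
print(triplet_loss(a, s, d))  # 0.0: the margin is already satisfied
```

Minimizing this loss pulls embeddings of similar cargo together and pushes embeddings of different cargo apart by at least the margin β.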

As illustrated in Figures 5 and 6, training at S2 the image retrieval system 1 comprises applying, to the annotated training images 101, at S21, a feature extraction convolutional neural network 1001, referred to as CNN 1001, comprising a plurality of convolutional layers to generate a tensor X of image features.

The feature extraction CNN 1001 may comprise at least one of a CNN named AlexNet, VGG and ResNet, as non-limiting examples. In some examples the feature extraction CNN 1001 is fully convolutional.

Training at S2 the image retrieval system 1 also comprises applying, to the generated tensor X, at S22, an aggregated generalized mean, AgGeM, pooling layer 1002 associated with image spatial information. As illustrated in Figure 7, the applying at S22 comprises applying, at S221, a generalized mean pooling layer 1011, to generate a plurality of embedding vectors f^(p) such that:

f^(p) = [f_1^(p), ..., f_K^(p)], with f_k^(p) = ((1/|X_k|) Σ_{x ∈ X_k} x^p)^(1/p), for each p ∈ P,

with:

P is a set of positive integers p representing pooling parameters of the generalized mean pooling layer, the tensor X = (X_k)_{k ∈ {1, ..., K}} having H × W activations for a feature map k ∈ {1, ..., K}, the feature map resulting from the application of the feature extraction CNN 1001 on a training image 101, with H and W being respectively a height and a width of each of the feature maps,

K is a number of feature maps in a last convolutional layer of the feature extraction CNN 1001, x is a feature from the generated tensor X, and |X_k| is a cardinality of X_k of the tensor X.

The applying at S22 further comprises, at S222, aggregating the generated plurality of embedding vectors by applying weights α of a scoring layer 1012 associated with an attention mechanism to the plurality of embedding vectors f^(p) ∈ ℝ^K, for each pooling parameter p belonging to P, the weights α and a parameter θ of the scoring layer being learnable by the image retrieval system 1 to minimize the loss function L. The aggregating performed at S222 is configured to generate a feature vector f such that:

f = {f_1, ..., f_K}, with f_k = Σ_{p ∈ P} α^(p) f_k^(p).

Referring back to Figures 5 and 6, training at S2 the image retrieval system 1 may comprise applying, to the generated tensor X, at S23, an orderless feature pooling layer 1003 associated with image texture information.
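The generalized mean pooling and its weighted aggregation over pooling parameters can be sketched as below. This is a simplified NumPy illustration: the fixed `scores` argument stands in for the patent's learnable scoring layer, which is an assumption of this sketch:

```python
import numpy as np

def gem_pool(X, p):
    """Generalized mean pooling: X has shape (K, H, W); returns a (K,) vector
    whose k-th entry is ((1/|X_k|) * sum over X_k of x**p) ** (1/p)."""
    K = X.shape[0]
    flat = X.reshape(K, -1)                    # each row is one feature map X_k
    return (flat ** p).mean(axis=1) ** (1.0 / p)

def aggem_pool(X, ps, scores):
    """Aggregate the GeM embeddings f^(p), p in ps, with normalized weights
    alpha^(p); `scores` is a stand-in for the learnable scoring layer."""
    embs = np.stack([gem_pool(X, p) for p in ps])   # shape (|P|, K)
    alpha = np.asarray(scores) / np.sum(scores)     # weights in [0, 1]
    return alpha @ embs                             # feature vector f, shape (K,)

X = np.random.rand(8, 4, 4)   # K = 8 feature maps of size H = W = 4
f = aggem_pool(X, ps=[1, 2, 3], scores=[1.0, 2.0, 1.0])
print(f.shape)  # (8,)
```

Note that p = 1 recovers average pooling, while large p approaches max pooling, which is why a learned mixture over several p can adapt the spatial pooling to the data.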

As illustrated in Figure 8, applying at S23 the orderless feature pooling layer 1003 may comprise using a Gaussian mixture model, GMM, to generate orderless image descriptors of the image features. The applying at S23 may comprise mapping, at S231, the image features x_{i ∈ {1,...,d}} of the tensor X = (X_k)_{k ∈ {1,...,K}} to a group of clusters of a Gaussian Mixture Model, with diagonal variances Σ_k such that:

Σ_k = (1/α_k) · I_d

with:

I_d being the d x d identity matrix, d = H x W being a dimension of the cluster k of (X_k)_{k ∈ {1,...,K}}, and α_k being a smoothing factor that represents the inverse of the variance Σ_k in the k-th cluster, α_k being learnable by the image retrieval system to minimize the loss function L.

The applying at S23 may further comprise applying, at S232, a soft assignment algorithm by assigning weights d_k associated with the feature x_{i ∈ {1,...,d}} to the cluster k of centre c_k, such that:

d_k(x_i) = exp( -α_k · ||x_i - c_k||² ) / Σ_{k'} exp( -α_{k'} · ||x_i - c_{k'}||² )

with:

c_k being a vector representing the centre of the k-th cluster, c_k being learnable by the image retrieval system to minimize the loss function L, c_{k'} being the same as c_k for index k' ranging from 1 to K,

M being a hyper-parameter representing a number of clusters to include in the group of the plurality of clusters of the Gaussian Mixture Model.

The applying at S23 may further comprise generating, at S233, a matrix V of descriptors, such that:

V_k = Σ_i d_k(x_i) · (x_i - c_k)

The hyper-parameter M may be chosen by the operator training the system 1.
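The soft assignment at S232 and the descriptor matrix at S233 can be sketched with a NetVLAD-style residual aggregation, which matches the soft-assignment weights and cluster centres defined above; the toy shapes and the use of per-cluster residuals are assumptions.

```python
import numpy as np

def soft_assign(x, centers, alphas):
    """Soft-assignment weights d_k(x) of a feature x to the GMM clusters.

    centers: (M, D) cluster centres c_k; alphas: (M,) smoothing factors
    (inverse variances). The weights form a softmax and sum to 1.
    """
    logits = -alphas * np.sum((centers - x) ** 2, axis=1)
    logits -= logits.max()                     # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def descriptor_matrix(features, centers, alphas):
    """Matrix V of orderless descriptors: V_k = sum_i d_k(x_i) (x_i - c_k)."""
    V = np.zeros_like(centers)
    for x in features:
        d = soft_assign(x, centers, alphas)
        V += d[:, None] * (x - centers)        # soft residuals per cluster
    return V

feats = np.random.rand(9, 5)                   # H*W = 9 features of dimension 5
centers = np.random.rand(3, 5)                 # M = 3 clusters
alphas = np.ones(3)
V = descriptor_matrix(feats, centers, alphas)  # shape (M, D)
```

Summing the soft residuals over all spatial positions is what makes the resulting descriptors orderless: permuting the image features leaves V unchanged, so V captures texture statistics rather than spatial layout.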

As illustrated in Figures 5 and 6, in some examples applying at S22 the aggregated generalized mean, AgGeM, pooling layer 1002 and applying at S23 the orderless feature pooling layer 1003, e.g. the GMM layer, may be performed in parallel.

As illustrated in Figures 5 and 6, the training at S2 may further comprise applying, at S24, a bilinear model layer 1004 to a combined output associated with the aggregated generalized mean pooling layer 1002 and the orderless feature pooling layer 1003.

In some examples, the bilinear model layer 1004 may be associated with a bilinear function Y^{ts} such that:

Y^{ts} = Σ_{i=1}^{I} Σ_{j=1}^{J} w_{ij} · a_i^t · b_j^s

with:

a^t being a vector with a dimension I and associated with an output of the orderless feature pooling layer 1003, b^s being a vector with a dimension J and associated with an output of the aggregated generalized mean pooling layer 1002, and w_{ij} being a weight configured to balance interaction between a^t and b^s, w_{ij} being learnable by the image retrieval system 1 to minimize the loss function L. As illustrated in Figure 6, the vector a^t may be obtained by applying an l2 normalization layer 1005 and/or a fully connected layer 1006 to the matrix V of descriptors.

As illustrated in Figure 6, the vector b^s may be obtained by applying a normalization layer 1007, such as an l2 normalization layer and/or a batch normalization layer, and/or a fully connected layer 1008 to the feature vector f.
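The bilinear combination at S24 can be sketched as a quadratic form over the two branch outputs; the l2 normalization of the inputs mirrors layers 1005 and 1007, while the toy dimensions are assumptions.

```python
import numpy as np

def bilinear_combine(a, b, W):
    """Bilinear interaction Y^{ts} = sum_ij w_ij * a_i^t * b_j^s.

    a: (I,) texture-branch output (from the orderless pooling layer);
    b: (J,) spatial-branch output (from the AgGeM layer);
    W: (I, J) learnable weights balancing the interaction of a and b.
    """
    return a @ W @ b

def l2_normalize(v, eps=1e-12):
    """l2 normalization, as applied by layers 1005/1007 before combining."""
    return v / (np.linalg.norm(v) + eps)

a = l2_normalize(np.random.rand(6))   # e.g. flattened, normalized descriptors V
b = l2_normalize(np.random.rand(4))   # e.g. normalized feature vector f
W = np.random.rand(6, 4)
y = bilinear_combine(a, b, W)         # scalar interaction term Y^{ts}
```

Each weight w_ij lets the training procedure amplify or suppress one specific pairing of a texture component with a spatial component, which is the point of using a bilinear rather than an additive fusion.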

As illustrated in Figures 5 and 6, S2 may further comprise applying at S25, to the combined output associated with the aggregated generalized mean pooling layer 1002 and the orderless feature pooling layer 1003, at least one normalization layer 1009, such as an l2 normalization layer. Alternatively or additionally, S2 may further comprise applying at S26, to the combined output associated with the aggregated generalized mean pooling layer 1002 and the orderless feature pooling layer 1003, a fully connected layer 1010.

Applying at S25 the at least one normalization layer 1009 and/or applying at S26 the fully connected layer 1010 enables obtaining the final image representation FIR of the image.

In some examples, each of the training images 101 is further associated with a code of the Harmonised Commodity Description and Coding System, HS. The HS comprises hierarchical sections and chapters corresponding to the type of the cargo in the training image 101.

In some examples, training at S2 the image retrieval system 1 further comprises taking into account, in the loss function L of the image retrieval system 1 , the hierarchical sections and chapters of the HS.

In some examples, the loss function L of the image retrieval system is such that:

L = max( 0, ||ψ_η(h^anchor) - ψ_η(h^similar)||² - ||ψ_η(h^anchor) - ψ_η(h^different)||² + λ · δ )

with:

h^anchor being an HS code of a training image corresponding to a query,

h^similar being an HS code of a training image sharing a same hierarchical section and/or hierarchical chapter with the training image corresponding to the query,

h^different being an HS code of a training image having a different hierarchical section and/or hierarchical chapter from the training image corresponding to the query,

ψ_η being a parametric function associated with the image retrieval system 1, η being a parameter learnable by the image retrieval system 1 to minimize the loss function L,

λ being a parameter controlling an importance given to the hierarchical structure of the HS codes during the training, and

δ being a hyper-parameter that controls a margin between similar and different HS codes, and which can be chosen by the operator training the system.

In some examples, training at S2 the image retrieval system 1 further comprises applying a Hardness-aware Deep Metric Learning, HDML, algorithm.

Other architectures are also envisaged for the image retrieval system 1. For example, deeper architectures may be envisaged and/or an architecture of the same shape as the architecture shown in Figure 6 that would generate vectors or matrices (such as the vector f, the matrix V, the vector b^s, the vector a^t and/or the final image representation FIR) with sizes different from those already discussed may be envisaged.
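A triplet-style hinge loss consistent with the loss function L described above can be sketched as follows. The exact way λ enters the expression and the embedding dimensions are assumptions; this shows the general mechanism of pulling same-section/chapter HS codes together while pushing different ones apart by a margin.

```python
import numpy as np

def hierarchical_triplet_loss(f_anchor, f_similar, f_different, delta, lam=1.0):
    """Hinge triplet loss over learned embeddings of HS-coded images.

    f_anchor / f_similar / f_different: embeddings psi(h) of the anchor,
    same-section/chapter, and different-section/chapter training images.
    delta: operator-chosen margin; lam: weight for the HS hierarchy
    (the combination lam * delta is an assumed form).
    """
    d_pos = np.sum((f_anchor - f_similar) ** 2)      # pull similar codes close
    d_neg = np.sum((f_anchor - f_different) ** 2)    # push different codes apart
    return max(0.0, d_pos - d_neg + lam * delta)

anchor = np.array([0.0, 0.0])
similar = np.array([0.1, 0.0])       # nearby embedding: triplet is satisfied
different = np.array([5.0, 5.0])     # distant embedding: loss clamps to zero
loss = hierarchical_triplet_loss(anchor, similar, different, delta=0.5)
```

When the negative is already farther away than the positive by more than the margin, the hinge clamps the loss to zero, so training focuses on triplets that still violate the HS hierarchy.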

Referring back to Figure 6, the scoring layer 1012 of the AgGeM layer 1002 may comprise two convolutions and a softplus activation. In some examples a size of a last of the two convolutions may be 1x1. Other architectures are also envisaged for the scoring layer 1012.

In some examples, each of the training images 101 is further associated with textual information corresponding to the type of cargo in the training image 101. In some examples, the textual information may comprise at least one of: a report describing the cargo (e.g. existing expert reports) and a report describing parameters of an inspection of the cargo (such as radiation dose, radiation energy, inspection device type, etc.).

Device manufacture

As illustrated in Figure 9, the method 200 of producing the device 15 configured to rank a plurality of images of cargo from a dataset of images generated using penetrating radiation, may comprise: obtaining, at S31, an image retrieval system 1 generated by the method 100 according to any aspects of the disclosure; and storing, at S32, the obtained image retrieval system 1 in the memory 151 of the device 15.

The image retrieval system 1 may be stored, at S32, in the detection device 15. The image retrieval system 1 may be created and stored using any suitable representation, for example as a data description comprising data elements specifying ranking conditions and their ranking outputs (e.g. a ranking based on a Euclidean distance of image features with respect to image features of the query). Such a data description could be encoded e.g. using XML or using a bespoke binary representation. The data description is then interpreted by the processor 152 running on the device 15 when applying the image retrieval system 1.

Alternatively, the deep learning algorithm may generate the image retrieval system 1 directly as executable code (e.g. machine code, virtual machine byte code or interpretable script). This may be in the form of a code routine that the device 15 can invoke to apply the image retrieval system 1.

Regardless of the representation of the image retrieval system 1 , the image retrieval system 1 effectively defines a ranking algorithm (comprising a set of rules) based on input data (i.e. the inspection image 1000 defining a query). After the image retrieval system 1 is generated, the image retrieval system 1 is stored in the memory 151 of the device 15. The device 15 may be connected temporarily to the system 10 to transfer the generated image retrieval system (e.g. as a data file or executable code) or transfer may occur using a storage medium (e.g. memory card). In a preferred approach, the image retrieval system is transferred to the device 15 from the system 10 over the network connection 30 (this could include transmission over the Internet from a central location of the system 10 to a local network where the device 15 is located). The image retrieval system 1 is then installed at the device 15. The image retrieval system could be installed as part of a firmware update of device software, or independently.

Installation of the image retrieval system 1 may be performed once (e.g. at time of manufacture or installation) or repeatedly (e.g. as a regular update). The latter approach can allow the classification performance of the image retrieval system to be improved over time, as new training images become available.

Applying the image retrieval system to perform ranking

Ranking of images from the dataset 20 is based on the image retrieval system 1. After the device 15 has been configured with the image retrieval system 1 , the device 15 can use the image retrieval system 1 based on locally acquired inspection images 1000 to rank a plurality of images of cargo from the dataset 20 of images. In some examples, the image retrieval system 1 effectively defines a ranking algorithm for extracting features from the query (i.e. the inspection image 1000), computing a distance of the features of the images of the dataset 20 with respect to the image features of the query, and ranking the images of the dataset 20 based on the computed distance.
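The distance-based ranking step can be sketched as follows; this is a minimal illustration of ranking by Euclidean distance between feature vectors, with toy features standing in for the final image representations of the dataset 20.

```python
import numpy as np

def rank_dataset(query_features, dataset_features, top_k=5):
    """Rank dataset images by Euclidean distance of their feature vectors
    to the query's feature vector; smaller distance = higher rank.

    query_features: (D,) features extracted from the inspection image;
    dataset_features: (N, D) features of the N dataset images.
    Returns the indices of the top_k closest images and their distances.
    """
    dists = np.linalg.norm(dataset_features - query_features, axis=1)
    order = np.argsort(dists)                  # closest images first
    return order[:top_k], dists[order[:top_k]]

query = np.array([1.0, 0.0, 0.0])
dataset = np.array([[0.9, 0.1, 0.0],           # index 0: close to the query
                    [0.0, 1.0, 0.0],           # index 1: far from the query
                    [1.0, 0.0, 0.1]])          # index 2: closest to the query
idx, d = rank_dataset(query, dataset, top_k=2)
```

The returned index list corresponds to the ranked list of dataset images output at S43, optionally truncated to a small subset (e.g. the 10 or 20 nearest images).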

In general, the image retrieval system 1 is configured to extract the features of the cargo 11 of interest in the inspection image 1000 in a way similar to the features extraction performed during the training at S2. Figure 10 shows a flow chart illustrating an example method 300 for ranking a plurality of images of cargo from the dataset 20 of images. The method 300 is performed by the device 15 (as shown in Figure 2).

The method 300 comprises: obtaining, at S41, the inspection image 1000; applying, at S42, to the obtained image 1000, the image retrieval system 1 generated by the method 100 according to any aspects of the disclosure; and ranking, at S43, a plurality of images of cargo from the dataset 20 of images, based on the applying.

It should be understood that in order to rank at S43 the plurality of images in the dataset 20, the device 15 may be connected, at least temporarily, to the system 10, and the device 15 may access the memory 121 of the system 10. In some examples, at least a part of the dataset 20 and/or a part of the one or more compact vectorial representations 21 of images (such as the feature vector f, the matrix V of descriptors and/or the final image representation, FIR) may be stored in the memory 151 of the device 15. In some examples, ranking at S43 the plurality of images comprises outputting a ranked list of images comprising cargo corresponding to the cargo of interest in the inspection image. In some examples, the ranked list may be a subset of the dataset 20 of images, such as 1 image of the dataset or 2, 5, 10, 20 or 30 images of the dataset, as non-limiting examples.

In some examples, ranking at S43 the plurality of images may further comprise outputting an at least partial code of the Harmonised Commodity Description and Coding System, HS, the HS comprising hierarchical sections and chapters corresponding to a type of cargo in each of the plurality of ranked images.

In some examples, ranking at S43 may further comprise outputting at least partial textual information corresponding to the type of cargo in each of the plurality of ranked images. In some examples, the textual information may comprise at least one of: a report describing the cargo and a report describing parameters of an inspection of the cargo.

Further details and examples

The disclosure may be advantageous but is not limited to customs and/or security applications.

The disclosure typically applies to cargo inspection systems (e.g. sea or air cargo).

The apparatus 3 of Figure 2, acting as an inspection system, is configured to inspect the container 4, e.g. by transmission of inspection radiation through the container 4.

The container 4 configured to contain the cargo may be, as a non-limiting example, placed on a vehicle. In some examples, the vehicle may comprise a trailer configured to carry the container 4.

The apparatus 3 of Figure 2 may comprise a source 5 configured to generate the inspection radiation. The radiation source 5 is configured to cause the inspection of the cargo through the material (usually steel) of walls of the container 4, e.g. for detection and/or identification of the cargo. Alternatively or additionally, a part of the inspection radiation may be transmitted through the container 4 (the material of the container 4 being thus transparent to the radiation), while another part of the radiation may, at least partly, be reflected by the container 4 (called "back scatter").

In some examples, the apparatus 3 may be mobile and may be transported from a location to another location (the apparatus 3 may comprise an automotive vehicle).

In the source 5, electrons are generally accelerated under a voltage comprised between 100keV and 15MeV.

In mobile inspection systems, the power of the X-ray source 5 may be e.g., between 100keV and 9.0MeV, typically e.g., 300keV, 2MeV, 3.5MeV, 4MeV, or 6MeV, for a steel penetration capacity e.g., between 40mm to 400mm, typically e.g., 300mm (12in).

In static inspection systems, the power of the X-ray source 5 may be e.g., between 1 MeV and 10MeV, typically e.g., 9MeV, for a steel penetration capacity e.g., between 300mm to 450mm, typically e.g., 410mm (16.1 in).

In some examples, the source 5 may emit successive X-ray pulses. The pulses may be emitted at a given frequency, comprised between 50 Hz and 1000 Hz, for example approximately 200 Hz.

According to some examples, detectors may be mounted on a gantry, as shown in Figure 2. The gantry for example forms an inverted “L”. In mobile inspection systems, the gantry may comprise an electro-hydraulic boom which can operate in a retracted position in a transport mode (not shown on the Figures) and in an inspection position (Figure 2). The boom may be operated by hydraulic actuators (such as hydraulic cylinders). In static inspection systems, the gantry may comprise a static structure.

It should be understood that the inspection radiation source may comprise sources of other penetrating radiation, such as, as non-limiting examples, sources of ionizing radiation, for example gamma rays or neutrons. The inspection radiation source may also comprise sources which are not adapted to be activated by a power supply, such as radioactive sources, such as using Co60 or Cs137. In some examples, the inspection system comprises detectors, such as X-ray detectors, optional gamma and/or neutrons detectors, e.g., adapted to detect the presence of radioactive gamma and/or neutrons emitting materials within the cargo, e.g., simultaneously to the X-ray inspection. In some examples, detectors may be placed to receive the radiation reflected by the container 4.

In the context of the present disclosure, the container 4 may be any type of container, such as a holder or a box, etc. The container 4 may thus be, as non-limiting examples, a pallet (for example a pallet of European standard, of US standard or of any other standard) and/or a train wagon and/or a tank and/or a boot of the vehicle and/or a "shipping container" (such as a tank or an ISO container or a non-ISO container or a Unit Load Device (ULD) container).

In some examples, one or more memory elements (e.g., the memory of one of the processors) can store data used for the operations described herein. This includes the memory element being able to store software, logic, code, or processor instructions that are executed to carry out the activities described in the disclosure. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein in the disclosure. In one example, the processor could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM)), an ASIC that includes digital logic, software, code, electronic instructions, flash memory, optical disks, CD-ROMs, DVD ROMs, magnetic or optical cards, other types of machine-readable mediums suitable for storing electronic instructions, or any suitable combination thereof.

As one possibility, there is provided a computer program, computer program product, or computer readable medium, comprising computer program instructions to cause a programmable computer to carry out any one or more of the methods described herein. In example implementations, at least some portions of the activities related to the processors may be implemented in software. It is appreciated that software components of the present disclosure may, if desired, be implemented in ROM (read only memory) form. The software components may, generally, be implemented in hardware, if desired, using conventional techniques.

Other variations and modifications of the system will be apparent to those skilled in the art in the context of the present disclosure, and various features described above may have advantages with or without other features described above. The above embodiments are to be understood as illustrative examples, and further embodiments are envisaged. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.




 