

Title:
METHOD AND SYSTEM FOR COMPARING VIDEO SHOTS
Document Type and Number:
WIPO Patent Application WO/2016/058626
Kind Code:
A1
Abstract:
A method (100) for comparing a first video shot (Vs1) comprising a first set of first images (I1(s)) with a second video shot (Vs2) comprising a second set of second images (I2(t)), at least one between the first and the second set comprising at least two images. The method comprises pairing (110) each first image of the first set with each second image of the second set to form a plurality of image pairs (IP(m)), and, for each image pair, carrying out the operations a) - g): a) identifying (120) first interest points in the first image and second interest points in the second image; b) associating (120) first interest points with corresponding second interest points in order to form corresponding interest point matches; c) for each pair of first interest points, calculating (130) the distance therebetween for obtaining a corresponding first length; d) for each pair of second interest points, calculating (130) the distance therebetween for obtaining a corresponding second length; e) calculating a plurality of distance ratios (130), each distance ratio corresponding to a selected pair of interest point matches and being based on a ratio of a first term and a second term or on a ratio of the second term and the first term, said first term corresponding to the distance between the first interest points of said pair of interest point matches and said second term corresponding to the distance between the second interest points of said pair of interest point matches; f) computing (140) a first representation of the statistical distribution of the plurality of calculated distance ratios; g) computing (150) a second representation of the statistical distribution of distance ratios obtained under the hypothesis that all the interest point matches in the image pair are outliers. The method further comprises generating (160) a first global representation of the statistical distribution of the plurality of calculated distance ratios computed for all the image pairs based on the first representations of all the image pairs; generating (170) a second global representation of the statistical distribution of distance ratios obtained under the hypothesis that all the interest point matches in all the image pairs are outliers based on the second representations of all the image pairs; comparing (180) said first global representation with said second global representation, and assessing (190) whether the first video shot contains a view of an object depicted in the second video shot based on said comparison.

Inventors:
LEPSØY SKJALG (IT)
BALESTRI MASSIMO (IT)
FRANCINI GIANLUCA (IT)
Application Number:
PCT/EP2014/071878
Publication Date:
April 21, 2016
Filing Date:
October 13, 2014
Assignee:
TELECOM ITALIA SPA (IT)
International Classes:
G06K9/00; G06K9/62
Domestic Patent References:
WO 2012/100819 A1, 2012-08-02
Foreign References:
US 6,711,293 B1, 2004-03-23
PCT/EP2014/065808, 2014-07-23
Other References:
SKJALG LEPSOY ET AL: "Statistical modelling of outliers for fast visual search", MULTIMEDIA AND EXPO (ICME), 2011 IEEE INTERNATIONAL CONFERENCE ON, IEEE, 11 July 2011 (2011-07-11), pages 1 - 6, XP031964858, ISBN: 978-1-61284-348-3, DOI: 10.1109/ICME.2011.6012184
JOSEF SIVIC ET AL: "Video Google: Efficient Visual Search of Videos", in TOWARD CATEGORY-LEVEL OBJECT RECOGNITION, LECTURE NOTES IN COMPUTER SCIENCE (LNCS), SPRINGER, BERLIN, DE, 1 January 2007, pages 127 - 144, ISBN: 978-3-540-68794-8, XP019053244
A. ARAUJO ET AL: "Efficient video search using image queries", 2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 1 October 2014 (2014-10-01), pages 3082 - 3086, XP055192632, ISBN: 978-1-47-995751-4, DOI: 10.1109/ICIP.2014.7025623
FRANCINI GIANLUCA ET AL: "Selection of local features for visual search", SIGNAL PROCESSING. IMAGE COMMUNICATION, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 28, no. 4, 19 November 2012 (2012-11-19), pages 311 - 322, XP028526778, ISSN: 0923-5965, DOI: 10.1016/J.IMAGE.2012.11.002
F. ROTHGANGER; S. LAZEBNIK; C. SCHMID; J. PONCE: "Segmenting, modeling, and matching video clips containing multiple moving objects", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 29, no. 3, 2007, pages 477 - 491, XP002740470
HUIZHONG CHEN: "Huizhong Chen - Stanford University", 24 September 2014 (2014-09-24), XP055192619, Retrieved from the Internet [retrieved on 20150601]
A. ARAUJO; M. MAKAR; V. CHANDRASEKHAR; D. CHEN; S. TSAI; H. CHEN; R. ANGST; B. GIROD: "Efficient video search using image queries", IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, October 2014 (2014-10-01)
F. ROTHGANGER; S. LAZEBNIK; C. SCHMID; J. PONCE: "Segmenting, modeling, and matching video clips containing multiple moving objects", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 29, no. 3, 2007, pages 477 - 491, XP011157932, DOI: doi:10.1109/TPAMI.2007.57
G. DAVENPORT; T. A. SMITH; N. PINCEVER: "Cinematic primitives for multimedia", IEEE COMPUTER GRAPHICS AND APPLICATIONS, vol. 11, no. 4, 1991, pages 67 - 74, XP011417290, DOI: doi:10.1109/38.126883
SAM S. TSAI; DAVID CHEN; GABRIEL TAKACS; VIJAY CHANDRASEKHAR; RAMAKRISHNA VEDANTHAM; RADEK GRZESZCZUK; BERND GIROD: "Fast geometric re-ranking for image-based retrieval", INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, October 2010 (2010-10-01)
S. LEPSOY; G. FRANCINI; G. CORDARA; P.P. DE GUSMAO: "Statistical modelling of outliers for fast visual search", IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2011, pages 1 - 6, XP031964858, DOI: doi:10.1109/ICME.2011.6012184
R.J. LARSEN; M.L. MARX: "An introduction to Mathematical Statistics and its Applications", 1986, PRENTICE-HALL, pages: 402 - 403
Attorney, Agent or Firm:
MACCALLI, Marco et al. (Via Settembrini 40, Milano, IT)
Claims:
CLAIMS

1. A method (100) for comparing a first video shot (Vs1) comprising a first set of first images (I1(s)) with a second video shot (Vs2) comprising a second set of second images (I2(t)), at least one between the first and the second set comprising at least two images, the method comprising:

- pairing (110) each first image of the first set with each second image of the second set to form a plurality of image pairs (IP(m));

- for each image pair, carrying out the operations a) - g):

a) identifying (120) first interest points in the first image and second interest points in the second image;

b) associating (120) first interest points with corresponding second interest points in order to form corresponding interest point matches;

c) for each pair of first interest points, calculating (130) the distance therebetween for obtaining a corresponding first length;

d) for each pair of second interest points, calculating (130) the distance therebetween for obtaining a corresponding second length;

e) calculating a plurality of distance ratios (130), each distance ratio corresponding to a selected pair of interest point matches and being based on a ratio of a first term and a second term or on a ratio of the second term and the first term, said first term corresponding to the distance between the first interest points of said pair of interest point matches and said second term corresponding to the distance between the second interest points of said pair of interest point matches;

f) computing (140) a first representation of the statistical distribution of the plurality of calculated distance ratios;

g) computing (150) a second representation of the statistical distribution of distance ratios obtained under the hypothesis that all the interest point matches in the image pair are outliers;

- generating (160) a first global representation of the statistical distribution of the plurality of calculated distance ratios computed for all the image pairs based on the first representations of all the image pairs;

- generating (170) a second global representation of the statistical distribution of distance ratios obtained under the hypothesis that all the interest point matches in all the image pairs are outliers based on the second representations of all the image pairs;

- comparing (180) said first global representation with said second global representation, and

- assessing (190) whether the first video shot contains a view of an object depicted in the second video shot based on said comparison.

2. The method (100) of claim 1, wherein the operation f) provides for arranging the plurality of distance ratios in a corresponding image pair histogram having a plurality of ordered bins, each one corresponding to a respective interval of distance ratio values, the image pair histogram enumerating for each bin a corresponding number of calculated distance ratios having values comprised within the respective interval.

3. The method (100) of claim 2, wherein the operation g) provides for generating an image pair outlier probability mass function comprising for each of said bins the probability that, under the hypothesis that all the interest point matches are outliers, a distance ratio has a value that falls within said bin.

4. The method (100) of claim 3, wherein the phase of generating a first global representation of the statistical distribution of the plurality of calculated distance ratios computed for all the image pairs based on the first representations of all the image pairs comprises generating a global histogram based on the image pair histograms, said global histogram being indicative of how the values of the distance ratios calculated for all the image pairs are distributed among the bins.

5. The method (100) of claim 4, wherein the phase of generating a second global representation of the statistical distribution of distance ratios obtained under the hypothesis that all the interest point matches in all the image pairs are outliers based on the second representations of all the image pairs comprises generating a global outlier probability mass function by combining the image pair outlier probability mass functions.

6. The method (100) of claim 5, wherein the phase of comparing said first global representation with said second global representation comprises comparing said global histogram with said global outlier probability mass function.

7. The method (100) of claim 6, wherein the phase of generating the global histogram based on the image pair histograms comprises:

- for each bin of the plurality of ordered bins, summing the number of calculated distance ratios corresponding to that bin of all image pair histograms.

8. The method (100) of claim 7, wherein the phase of generating the image pair outlier probability mass function comprises calculating a linear combination of the image pair outlier probability mass functions.

9. The method (100) of any one among the preceding claims, wherein said comparing said first global representation with said second global representation comprises performing a Pearson's test.

10. The method (100) of any one among the preceding claims, wherein said calculating the distance ratios provides for calculating the logarithm of the distance ratios.

11. A video shot comparing system (410, 420) comprising:

- a first unit (504) configured to receive a first video shot (Vs1) comprising a first set of first images (I1(s)) and identify first interest points in the first images;

- a reference database (510) storing a plurality of second video shots (Vs2), each one comprising a respective second set of second images (I2(t));

- a second unit (508) configured to associate for each second video shot, and for each image pair comprising a second image of said second video shot and a first image of the first video shot, first interest points in said first image to second interest points in said second image in order to form corresponding interest point matches;

- a third unit (512) configured to calculate, for each second video shot (Vs2) and for each image pair comprising a second image of said second video shot and a first image of the first video shot:

- for each pair of first interest points, the distance therebetween for obtaining a corresponding first length;

- for each pair of second interest points, the distance therebetween for obtaining a corresponding second length;

- a plurality of distance ratios, each distance ratio corresponding to a selected pair of interest point matches and being based on a ratio of a first term and a second term or on a ratio of the second term and the first term, said first term corresponding to the distance between the first interest points of said pair of interest point matches and said second term corresponding to the distance between the second interest points of said pair of interest point matches;

- a first representation of the statistical distribution of the plurality of calculated distance ratios;

- a second representation of the statistical distribution of distance ratios obtained under the hypothesis that all the interest point matches in the image pair are outliers;

- a fourth unit (514) configured to generate for each second video shot:

- a first global representation of the statistical distribution of the plurality of calculated distance ratios computed for all the image pairs comprising second images of said second video shot based on the first representations of all the image pairs comprising second images of said second video shot;

- a second global representation of the statistical distribution of distance ratios obtained under the hypothesis that all the interest point matches in all the image pairs comprising second images of said second video shot are outliers based on the second representations of all the image pairs comprising second images of said second video shot; and

- a fifth unit (516) configured to compare for each second video shot the corresponding first global representation with the corresponding second global representation, and to assess whether there is a second video shot containing a view of an object depicted in the first video shot based on said comparison.

Description:
METHOD AND SYSTEM FOR COMPARING VIDEO SHOTS

Background of the Invention

Field of the Invention

The present invention relates to the field of image analysis.

Description of the related art

In the field of image analysis, a common operation consists of comparing two images in order to find the relation between them in case both images include at least a portion of the same scene or of the same object.

Known methods for determining whether two images display the same object provide for selecting a set of so-called interest points in the first image and then matching each interest point of the set or a subset thereof to a corresponding interest point in the second image (generally, some of the selected interest points of the set may not be matched, because of ambiguities). The selection of which point of the first image should become an interest point is carried out by taking into consideration image features in the area of the image surrounding the point itself.

As is well known to those skilled in the art, if a matching between an interest point of the first image and a corresponding interest point of the second image is correct, in the sense that both interest points correspond to a same point of a same object (depicted in both images), such interest point match is referred to as "inlier".

Conversely, if a matching between an interest point of the first image and a corresponding interest point of the second image is incorrect, in the sense that the two interest points do not correspond to a same point of the same object, such interest point match is referred to as "outlier".

Therefore, in order to obtain a reliable result, a procedure capable of distinguishing the inliers from the outliers is advantageously performed after the interest point matches have been determined.

Several examples of procedures of this type are already known in the art, such as for example the image comparison method disclosed in the patent application WO 2012/100819 in the name of the present Applicant.

Another common operation in the field of image analysis consists of comparing video shots, or comparing a single image to the images of a video shot, in order to find the relation between them in case both video shots, or both the image and the video shot, include at least a portion of the same scene or of the same object. For example, "Efficient video search using image queries" by A. Araujo, M. Makar, V. Chandrasekhar, D. Chen, S. Tsai, H. Chen, R. Angst, B. Girod, IEEE International Conference on Image Processing, October 2014, discloses a method of comparing images to video shots which checks geometric consistency using the Random Sample Consensus (RANSAC) iterative method.

The method disclosed in "Segmenting, modeling, and matching video clips containing multiple moving objects" by F. Rothganger, S. Lazebnik, C. Schmid, and J. Ponce, IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(3), 2007, pages 477-491, identifies shots that depict the same scene in a video clip. In this case as well, geometric consistency is checked using the RANSAC method.

Summary of the invention

The Applicant has found that the solutions known in the art for comparing video shots, or comparing a single image to images of a video shot, are affected by a severe drawback. The known solutions have poor robustness when the video shot comprises very small objects and/or poorly detailed objects. Indeed, in these cases, only a small number of interest points may be identified within said objects, possibly causing a failure to identify them during the comparison operations.

The Applicant has tackled the problem of how to improve the known solutions in terms of robustness.

The Applicant has found that, given two video shots, each one comprising a respective group or set of images, by accumulating the histograms of interest point distance ratios corresponding to each image pair comprising an image of the first video shot and an image of the second video shot, a global histogram may be calculated which represents a statistical distribution of the distance ratios computed for all the image pairs. Thanks to said accumulation, the contribution of the few selected interest points corresponding to small and/or poorly detailed objects is significantly increased.

An aspect of the present invention provides for a method for comparing a first video shot comprising a first set of first images with a second video shot comprising a second set of second images. At least one of the first and the second set comprises at least two images. The method comprises pairing each first image of the first set with each second image of the second set to form a plurality of image pairs. The method further comprises, for each image pair, carrying out the operations a) - g): a) identifying first interest points in the first image and second interest points in the second image;

b) associating first interest points with corresponding second interest points in order to form corresponding interest point matches;

c) for each pair of first interest points, calculating the distance therebetween for obtaining a corresponding first length;

d) for each pair of second interest points, calculating the distance therebetween for obtaining a corresponding second length;

e) calculating a plurality of distance ratios, each distance ratio corresponding to a selected pair of interest point matches and being based on a ratio of a first term and a second term or on a ratio of the second term and the first term, said first term corresponding to the distance between the first interest points of said pair of interest point matches and said second term corresponding to the distance between the second interest points of said pair of interest point matches;

f) computing a first representation of the statistical distribution of the plurality of calculated distance ratios;

g) computing a second representation of the statistical distribution of distance ratios obtained under the hypothesis that all the interest point matches in the image pair are outliers.

The method further comprises generating a first global representation of the statistical distribution of the plurality of calculated distance ratios computed for all the image pairs based on the first representations of all the image pairs, and generating a second global representation of the statistical distribution of distance ratios obtained under the hypothesis that all the interest point matches in all the image pairs are outliers based on the second representations of all the image pairs. The method still further comprises comparing said first global representation with said second global representation, and assessing whether the first video shot contains a view of an object depicted in the second video shot based on said comparison.

According to an embodiment of the present invention, the operation f) provides for arranging the plurality of distance ratios in a corresponding image pair histogram having a plurality of ordered bins, each one corresponding to a respective interval of distance ratio values. The image pair histogram enumerates for each bin a corresponding number of calculated distance ratios having values comprised within the respective interval. According to an embodiment of the present invention, the operation g) provides for generating an image pair outlier probability mass function comprising for each of said bins the probability that, under the hypothesis that all the interest point matches are outliers, a distance ratio has a value that falls within said bin.

According to an embodiment of the present invention, the phase of generating a first global representation of the statistical distribution of the plurality of calculated distance ratios computed for all the image pairs based on the first representations of all the image pairs comprises generating a global histogram based on the image pair histograms. Said global histogram is indicative of how the values of the distance ratios calculated for all the image pairs are distributed among the bins.

According to an embodiment of the present invention, the phase of generating a second global representation of the statistical distribution of distance ratios obtained under the hypothesis that all the interest point matches in all the image pairs are outliers based on the second representations of all the image pairs comprises generating a global outlier probability mass function by combining the image pair outlier probability mass functions.

According to an embodiment of the present invention, the phase of comparing said first global representation with said second global representation comprises comparing said global histogram with said global outlier probability mass function.

According to an embodiment of the present invention, the phase of generating the global histogram based on the image pair histograms comprises, for each bin of the plurality of ordered bins, summing the number of calculated distance ratios corresponding to that bin of all image pair histograms.

According to an embodiment of the present invention, the phase of generating the image pair outlier probability mass function comprises calculating a linear combination of the image pair outlier probability mass functions.

According to an embodiment of the present invention, said comparing said first global representation with said second global representation comprises performing a Pearson's test.

According to an embodiment of the present invention, said calculating the distance ratios provides for calculating the logarithm of the distance ratios.

Another aspect of the present invention provides for a video shot comparing system. The video shot comparing system comprises a first unit configured to receive a first video shot comprising a first set of first images and identify first interest points in the first images, and a reference database storing a plurality of second video shots, each one comprising a respective second set of second images. The video shot comparing system further comprises a second unit configured to associate, for each second video shot and for each image pair comprising a second image of said second video shot and a first image of the first video shot, first interest points in said first image to second interest points in said second image in order to form corresponding interest point matches. The video shot comparing system further comprises a third unit configured to calculate, for each second video shot and for each image pair comprising a second image of said second video shot and a first image of the first video shot:

- for each pair of first interest points, the distance therebetween for obtaining a corresponding first length;

- for each pair of second interest points, the distance therebetween for obtaining a corresponding second length;

- a plurality of distance ratios, each distance ratio corresponding to a selected pair of interest point matches and being based on a ratio of a first term and a second term or on a ratio of the second term and the first term, said first term corresponding to the distance between the first interest points of said pair of interest point matches and said second term corresponding to the distance between the second interest points of said pair of interest point matches;

- a first representation of the statistical distribution of the plurality of calculated distance ratios;

- a second representation of the statistical distribution of distance ratios obtained under the hypothesis that all the interest point matches in the image pair are outliers.

The video shot comparing system further comprises a fourth unit configured to generate for each second video shot:

- a first global representation of the statistical distribution of the plurality of calculated distance ratios computed for all the image pairs comprising second images of said second video shot based on the first representations of all the image pairs comprising second images of said second video shot;

- a second global representation of the statistical distribution of distance ratios obtained under the hypothesis that all the interest point matches in all the image pairs comprising second images of said second video shot are outliers based on the second representations of all the image pairs comprising second images of said second video shot. The video shot comparing system further comprises a fifth unit configured to compare for each second video shot the corresponding first global representation with the corresponding second global representation, and to assess whether there is a second video shot containing a view of an object depicted in the first video shot based on said comparison.

Brief description of the drawings

These and other features and advantages of the present invention will be made evident by the following description of some exemplary and non-limitative embodiments thereof, to be read in conjunction with the attached drawings, wherein:

Figure 1 illustrates the main phases of a video shot comparison method according to an embodiment of the present invention;

Figure 2 illustrates an exemplary plurality of image pairs;

Figure 3 illustrates an example in which a set of interest points in the first image of an image pair of Figure 2 are matched with a set of interest points in the second image of the same image pair;

Figure 4 schematically illustrates a possible scenario wherein the method according to an embodiment of the present invention may be exploited for implementing a visual searching service according to embodiments of the present invention;

Figure 5A illustrates a system implementing a visual searching service according to an embodiment of the present invention, and

Figure 5B illustrates a system implementing a visual searching service according to another embodiment of the present invention.

Detailed description of exemplary embodiments of the invention

Figure 1 illustrates the main phases of a novel video shot comparison method 100 adapted to assess whether two video shots Vs1, Vs2 contain a view of a same object according to an embodiment of the present invention. As specified in G. Davenport, T. A. Smith, and N. Pincever, "Cinematic primitives for multimedia", IEEE Computer Graphics and Applications, vol. 11, no. 4, pages 67-74, 1991, a video shot is a sequence of images (frames) generated and recorded contiguously and representing a continuous action in time and space.

The first phase of the method 100 (block 110 of Figure 1) provides for selecting a first set of a images from the first video shot Vs1, selecting a second set of b images from the second video shot Vs2 (wherein at least one among a and b is higher than 1), and forming a plurality of M = a*b image pairs IP(m) (m = 1 to M) by pairing each image of the first set with each image of the second set.

Figure 2 illustrates an example in which the first set of images of the first video shot Vs1 comprises a = 4 images I1(s) (s = 1 to 4) and the second set of images of the second video shot Vs2 comprises b = 3 images I2(t) (t = 1 to 3). In this case, M = 12 image pairs IP(m) are formed, each one comprising an image I1(s) of the first set and an image I2(t) of the second set:

IP(1) = {I1(1), I2(1)};

IP(2) = {I1(1), I2(2)};

IP(3) = {I1(1), I2(3)};

IP(4) = {I1(2), I2(1)};

...

IP(12) = {I1(4), I2(3)}.
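Purely as an illustration of this pairing step (block 110), the following Python sketch forms the M = a*b image pairs from two lists of images; the helper name form_image_pairs and the string placeholders are hypothetical and not part of the patent disclosure.

```python
from itertools import product

def form_image_pairs(shot1_images, shot2_images):
    """Pair each image of the first video shot with each image of the second
    video shot, mirroring the IP(m) pairs of block 110."""
    return [(i1, i2) for i1, i2 in product(shot1_images, shot2_images)]

# Example with a = 4 and b = 3 images, giving M = 12 pairs as in Figure 2.
shot1 = [f"I1({s})" for s in range(1, 5)]
shot2 = [f"I2({t})" for t in range(1, 4)]
pairs = form_image_pairs(shot1, shot2)
assert len(pairs) == 12
```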

The second phase of the method 100 (block 120 of Figure 1) provides for selecting, for each one of the M image pairs IP(m), a set of interest points xi in the first image I1(s) of the image pair IP(m) and a set of interest points yi in the second image I2(t) of the image pair IP(m), and then matching each interest point xi of the first image I1(s) with a corresponding interest point yi of the second image I2(t), yielding Lm matches. As is well known to those skilled in the art, the selection of which points of the images I1(s), I2(t) have to become interest points xi, yi may be carried out by taking into consideration local features of the area of the image surrounding the point itself, exploiting known procedures, such as for example the procedure disclosed in the patent US 6,711,293 or the procedure disclosed in the patent application PCT/EP2014/065808 in the name of the present Applicant.

Figure 3 illustrates an example in which Lm = 9 interest points xi (i = 1 to 9) in the first image I1(s) of an image pair IP(m) are matched with Lm = 9 interest points yi (i = 1 to 9) of the second image I2(t) of the same image pair IP(m) (with the interest point x1 matched to the interest point y1, the interest point x2 matched to the interest point y2, and so on).

The next phase of the method 100 (block 130 of Figure 1) provides for calculating, for each pair of interest point matches {(xi, yi), (xj, yj)} of each one of the M image pairs IP(m) formed in the previous phase, the so-called log distance ratio (LDR for short) proposed in "Fast geometric re-ranking for image-based retrieval" by Sam S. Tsai, David Chen, Gabriel Takacs, Vijay Chandrasekhar, Ramakrishna Vedantham, Radek Grzeszczuk, Bernd Girod, International Conference on Image Processing, October 2010:

LDR(xi, xj, yi, yj) = ln( ||xi - xj|| / ||yi - yj|| )     (1)

wherein xi represents the coordinates of a generic i-th interest point xi in the first image I1(s) of a generic image pair IP(m), yi represents the coordinates of the i-th interest point yi in the second image I2(t) matched with the interest point xi in the first image I1(s) of the same image pair IP(m), xj represents the coordinates of a different generic j-th interest point xj in the first image I1(s) of the same image pair IP(m), and yj represents the coordinates of the j-th interest point in the second image I2(t) matched with the interest point xj in the first image I1(s) of the same image pair IP(m). The interest points must be distinct, i.e., xi ≠ xj and yi ≠ yj, and the LDR is undefined for i = j. The LDR is a function of the length ratio, an invariant for similarities. Thanks to the presence of the logarithm operator, if the first image I1(s) of an image pair IP(m) is exchanged with the second image I2(t) of the same image pair IP(m) (x becomes y and vice versa), the LDR simply reverses sign. Given a set of Lm matched interest points (xi, yi) for a generic image pair IP(m) - including Lm interest points xi in the first image I1(s) of the pair and Lm corresponding interest points yi in the second image I2(t) of the pair - there exists a number Nm = Lm(Lm - 1)/2 of distinct LDRs.
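A minimal sketch of this LDR computation for a single image pair, assuming the matched interest points are available as NumPy coordinate arrays; the function name and array layout are illustrative assumptions rather than part of the patent text.

```python
import numpy as np
from itertools import combinations

def log_distance_ratios(x, y):
    """Compute the Nm = Lm*(Lm - 1)/2 distinct LDRs for one image pair.

    x, y: arrays of shape (Lm, 2) holding the coordinates of the matched
    interest points (x[i] in the first image is matched with y[i] in the
    second image).
    """
    ldrs = []
    for i, j in combinations(range(len(x)), 2):
        d1 = np.linalg.norm(x[i] - x[j])  # first length
        d2 = np.linalg.norm(y[i] - y[j])  # second length
        if d1 > 0 and d2 > 0:             # the LDR is undefined for coincident points
            ldrs.append(np.log(d1 / d2))
    return np.array(ldrs)
```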

The next phase of the method 100 (block 140 of Figure 1) comprises collecting, for each image pair IP(m), the corresponding Nm LDRs generated at the preceding phase in order to compute a corresponding first representation of the statistical distribution thereof. According to an embodiment of the present invention, said first representation of the statistical distribution of the Nm LDRs collected for each image pair IP(m) is a histogram, herein referred to as image pair histogram gm. In this way, M image pair histograms gm (m = 1 to M) are generated, i.e., a respective image pair histogram gm per each image pair IP(m).

Each image pair histogram gm shows how the values of the Nm LDRs that have been calculated for the corresponding image pair IP(m) are distributed. The image pair histograms gm are expressed in the form of frequency arrays:

g1 = [g1(1), ..., g1(k), ..., g1(K)]

...

gm = [gm(1), ..., gm(k), ..., gm(K)]

...

gM = [gM(1), ..., gM(k), ..., gM(K)],

wherein each LDR may take values comprised within K predefined ordered intervals T1, ..., Tk, ..., TK - hereinafter referred to as bins - and gm(k) is the number of LDRs (calculated for the image pair IP(m)) whose values fall within the k-th bin Tk.

For each image pair histogram gm, the sum of its histogram components gm(k) is equal to the number Nm of LDRs calculated for the corresponding image pair IP(m):

gm(1) + ... + gm(k) + ... + gm(K) = Nm.
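For instance, the image pair histogram gm can be obtained by counting how many of the Nm LDRs fall into each of the K bins; in the sketch below the number of bins and the covered value range are illustrative choices, not values prescribed by the patent.

```python
import numpy as np

# K ordered bins T1, ..., TK covering the expected range of LDR values;
# K = 25 and the interval [-5, 5] are illustrative assumptions.
K = 25
bin_edges = np.linspace(-5.0, 5.0, K + 1)

def image_pair_histogram(ldrs):
    """Frequency array gm = [gm(1), ..., gm(K)]; its components sum to Nm."""
    gm, _ = np.histogram(ldrs, bins=bin_edges)
    return gm
```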

The total number N of LDRs calculated for all the image pairs IP(m) obtained from the two video shots Vs1 and Vs2 is equal to:

N = N1 + ... + Nm + ... + NM.

The next phase of the method 100 (block 150 of Figure 1) comprises calculating for each image pair IP(m) a corresponding second representation of the statistical distribution of LDRs obtained under the hypothesis that all the interest point matches in the image pair are outliers. According to an embodiment of the present invention, said second representation of the statistical distribution of LDRs obtained under the hypothesis that all the interest point matches in the image pair are outliers is a probability mass function, referred to as image pair outlier probability mass function pm:

p1 = [p1(1), ..., p1(k), ..., p1(K)]

...

pm = [pm(1), ..., pm(k), ..., pm(K)]

...

pM = [pM(1), ..., pM(k), ..., pM(K)],

wherein pm(k) is the probability that, under the hypothesis that all the interest point matches for the m-th image pair IP(m) are outliers, an LDR calculated using a pair of interest point matches {(xi, yi), (xj, yj)} from said image pair IP(m) has a value that falls within the k-th bin Tk. The various image pair outlier probability mass functions pm may be calculated based on a discretization of an outlier probability density function whose closed form is:

wherein z is the LDR value, and d is the ratio between the standard deviations of the coordinates of the interest points in the images (see equation (6) of S. Lepsoy, G. Francini, G. Cordara, and P.P. de Gusmao, "Statistical modelling of outliers for fast visual search", in IEEE International Conference on Multimedia and Expo (ICME), pages 1-6, IEEE, 2011). In other words, each image pair outlier probability mass function pm corresponding to an image pair IP(m) is the probability mass function of LDRs calculated using pairs of interest point matches {(xi, yi), (xj, yj)} obtained by selecting the interest points from said image pair IP(m) in a random way.
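The closed-form density itself is not reproduced in this text, so the sketch below should be read as a hedged illustration only: it discretizes a logistic-type density f(z) = 2·d^2·e^(2z) / (d^2 + e^(2z))^2, which is one plausible reading of the density referenced above (it follows from assuming roughly Gaussian interest point coordinates), but the exact expression should be taken from equation (6) of the cited Lepsoy et al. paper.

```python
import numpy as np

def outlier_density(z, d):
    """Assumed logistic-type LDR outlier density (see the caveat in the text).

    d is the ratio between the standard deviations of the interest point
    coordinates in the two images of the pair.
    """
    e2z = np.exp(2.0 * z)
    return 2.0 * d**2 * e2z / (d**2 + e2z) ** 2

def image_pair_outlier_pmf(d, bin_edges):
    """Discretize the density over the K bins to obtain pm = [pm(1), ..., pm(K)]."""
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    widths = np.diff(bin_edges)
    pm = outlier_density(centers, d) * widths
    return pm / pm.sum()  # normalize so that the pmf sums to 1
```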

It has to be appreciated that the image pair outlier probability mass functions pm corresponding to two different image pairs IP(m) may differ from each other, being dependent on the actual arrangement of the interest points xi, yi in the two image pairs IP(m).

The phases of the method 100 described until now (blocks 110-150 of Figure 1) regarded operations which have been carried out on each image pair IP(m) in an independent way, i.e., without taking into consideration the relationships occurring among them.

The next phases of the method 100 (blocks 160-190 of Figure 1) will regard instead all the image pairs IP(m) considered together.

The first phase of the method 100 having said features (block 160) provides for generating a global representation of the statistical distribution of the LDR values computed for all the image pairs IP(m). According to an embodiment of the present invention, said global representation is a further histogram, herein referred to as global histogram g, which is indicative of how the values of the LDRs calculated for all the image pairs IP(m) are distributed among the K bins T1, ..., Tk, ..., TK. The global histogram g is generated in the following way:

g = g1 + ... + gm + ... + gM = [g(1), ..., g(k), ..., g(K)], wherein:

g(k) = g1(k) + ... + gm(k) + ... + gM(k)

is the number of LDRs (by considering all the image pairs IP(m)) whose values fall within the k-th bin Tk.
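In code, the accumulation of block 160 is simply a bin-by-bin sum of the M image pair histograms (a sketch under the same illustrative conventions as above):

```python
import numpy as np

def global_histogram(image_pair_histograms):
    """g(k) = g1(k) + ... + gM(k), summed bin by bin over all image pairs."""
    return np.sum(np.stack(image_pair_histograms, axis=0), axis=0)
```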

The next phase of the method (block 170) provides for generating a global representation of the statistical distribution of LDR values obtained under the hypothesis that all the interest point matches in all the image pairs IP(m) are outliers. According to an embodiment of the present invention, said global representation is a further probability mass function, herein referred to as global outlier probability mass function p, which is generated by means of a linear combination of the image pair outlier probability mass functions pm of all the image pairs IP(m):

p = [p(1), ..., p(k), ..., p(K)],

wherein:

p(k) = (1/N) · ( N1·p1(k) + ... + Nm·pm(k) + ... + NM·pM(k) )

wherein p(k) is the probability that, under the hypothesis that all the interest point matches for all the image pairs IP(m) are outliers, an LDR calculated using a pair of interest point matches {(xi, yi), (xj, yj)} from a generic image pair IP(m) has a value that falls within the k-th bin Tk.

In other words, the global outlier probability mass function p is the probability mass function of LDRs calculated using pairs of interest point matches {(xi, yi), (xj, yj)} obtained by selecting the interest points from any of the image pairs IP(m) in a random way.
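A sketch of the linear combination of block 170, weighting each image pair outlier probability mass function by its number of LDRs Nm and dividing by the total N so that the result is again a probability mass function:

```python
import numpy as np

def global_outlier_pmf(image_pair_pmfs, ldr_counts):
    """p(k) = (1/N) * sum over m of Nm * pm(k), with N = N1 + ... + NM."""
    pmfs = np.stack(image_pair_pmfs, axis=0)       # shape (M, K)
    counts = np.asarray(ldr_counts, dtype=float)   # Nm for each image pair
    return (counts[:, None] * pmfs).sum(axis=0) / counts.sum()
```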

The next phase of the method (block 180 of Figure 1) provides for comparing the global histogram g - which is indicative of how the values of the LDRs calculated for all the image pairs IP(m) obtained from the two video shots Vs1 and Vs2 to be compared are distributed - with the global outlier probability mass function p - which is indicative of how the values of the LDRs are distributed if wrong (i.e., random) interest point matches are selected from all the image pairs IP(m). This comparison is carried out by estimating the difference in shape between the global histogram g and the global outlier probability mass function p.

Indeed, the components of the global histogram g that are due to wrong matches will have a shape similar to that of the global outlier probability mass function p, while the components of the global histogram g that are due to correct matches will have a shape different from that of the global outlier probability mass function p.

The difference in shape between the global histogram g and the global outlier probability mass function p is estimated by means of the known Pearson's test disclosed at pages 402-403 of "An introduction to Mathematical Statistics and its Applications" by R.J. Larsen and M.L. Marx, New Jersey, Prentice-Hall, second edition, 1986.

The Pearson's test statistic c is computed in the following way:

c = Σk (g(k) - N·p(k))^2 / (N·p(k)), where the sum runs over the K bins (k = 1 to K).

The more similar the shape of the global histogram g is to that of the global outlier probability mass function p, the lower the value of the Pearson's test statistic c.

For this purpose, the next phase of the method 100 (block 190 of Figure 1) provides for checking whether the Pearson's test statistic c calculated above is higher or lower than a threshold TH.

If the Pearson's test statistic c is lower than the threshold TH (exit branch N of block 190), it means that the shape of the global histogram g is sufficiently similar to that of the global outlier probability mass function p to assume that the interest point matches among the M image pairs IP(m) are wrong (i.e., outliers). In this case, the video shots Vs1 and Vs2 are considered not to contain a view of a same object (block 195).

If the Pearson's test statistic c is higher than the threshold TH (exit branch Y of block 190), it means that the shape of the global histogram g is sufficiently different from the shape of the global outlier probability mass function p to assume that there is a sufficiently high number of interest point matches among the M image pairs IP(m) which are correct (i.e., inliers). In this case, the video shots Vs1 and Vs2 are considered to contain a view of a same object (block 197).

As is well known to those skilled in the art, the value of the threshold TH to be exploited in the Pearson's test should be set based on the number of false positives which can be tolerated.
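Putting blocks 180 and 190 together, the following minimal sketch computes the Pearson statistic c and applies the threshold decision; the threshold value used here is an arbitrary placeholder, since, as noted above, TH must be tuned to the tolerated false positive rate.

```python
import numpy as np

def pearson_statistic(g, p):
    """c = sum over k of (g(k) - N*p(k))^2 / (N*p(k)), with N the total LDR count."""
    g = np.asarray(g, dtype=float)
    p = np.asarray(p, dtype=float)
    expected = g.sum() * p
    mask = expected > 0                 # skip bins with zero expected count
    return float(((g[mask] - expected[mask]) ** 2 / expected[mask]).sum())

def shots_contain_same_object(g, p, threshold=100.0):
    """True if the two video shots are assessed to contain a view of a same object."""
    return pearson_statistic(g, p) > threshold
```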

Compared with the known solutions, the proposed method is more robust, since it allows the identification of small and/or poorly detailed objects depicted in the images of the video shots. Indeed, even if only a small number of interest points corresponding to such small and/or poorly detailed objects are selected, during the generation of the global histogram the components corresponding to such few interest points are accumulated for each image pair, increasing their overall contribution. The capability of assessing whether two video shots depict a same object or a same scene increases with the total number of interest point matches, such that video shots depicting a same object or a same scene are detected even when the number of inliers is small with respect to the total number of matched interest points.

Figure 4 schematically illustrates a possible scenario wherein the previously described method may be exploited for implementing a visual searching service according to embodiments of the present invention. The scenario of Figure 4 - identified with the reference 400 - is structured according to a client-server configuration, wherein a visual search server 410 is configured to interact with a plurality of terminals 420 for exchanging data through an external network 430, such as a MAN, a WAN, a VPN, the Internet or a telephone network. Each terminal 420 may be a personal computer, a notebook, a laptop, a personal digital assistant, a smartphone, or any other electronic device capable of managing a digital video shot.

According to an embodiment of the present invention illustrated in Figure 5A, all the main operations of the visual searching service are carried out by the visual search server 410.

A user of a terminal 420 requesting information related to an object depicted in a video shot sends said video shot (query video shot) to the visual search server 410 through the network 430.

The visual search server 410 includes a server interface 502 adapted to interact with the network 430 for receiving/transmitting data from/to the terminals 420. Through the server interface 502, the visual search server 410 receives the query video shot to be analyzed.

The query video shot is provided to an interest point detection unit 504 configured to identify the interest points within the images of the query video shot.

The visual search server 410 further includes a matching unit 508 coupled with a reference database 510 storing a plurality of pre-processed reference video shots. For each reference video shot, and for each image pair comprising an image of said reference video shot and an image of the query video shot, a matching is made among interest points of the two images of said image pair.

The visual search server 410 further comprises a first processing unit 512 configured to:

- calculate for each reference video shot and for each image pair involving an image of said reference video shot and an image of the query video shot the LDRs for each corresponding interest point match generated by the matching unit 508,

- arrange the LDRs of each image pair in a corresponding image pair histogram, and

- calculate for each image pair a corresponding image pair outlier probability mass function.

The visual search server 410 further comprises a second processing unit 514 configured to generate for each reference video shot:

- a global histogram (by using the image pair histograms corresponding to said reference video shot and said query video shot), and

- a global outlier probability mass function (by using the image pair outlier probability mass functions corresponding to said reference video shot and said query video shot).

The visual search server 410 further comprises a decisional unit 516 that is configured to assess whether there is a reference video shot containing a view of an object depicted in the query video shot. For this purpose, the decisional unit 516 is configured to make for each reference video shot a comparison between the corresponding global histogram and the global outlier probability mass function. The decisional unit 516 is further configured to provide the results to the terminal 420 through the network 430.

According to a further embodiment of the present invention illustrated in Figure 5B, the interest point detection unit 504 is directly included in the terminals 420 instead of being included in the visual search server 410. In this case, instead of sending the query video shot to the visual search server 410, each terminal 420 is capable of directly sending the interest points locally generated from the images of the query video shots.

The previous description presents and discusses in detail several embodiments of the present invention; nevertheless, several changes to the described embodiments, as well as different embodiments of the invention, are possible without departing from the scope defined by the appended claims.

For example, although in the present description reference has been made to the log distance ratio (LDR), similar considerations apply if the histograms are constructed with a different distance ratio, such as a plain distance ratio without the logarithm; moreover, similar considerations apply if the histograms are constructed with multiples and/or powers of the log distance ratio.

Moreover, the concepts of the present invention can be applied even if the widths of the bins of the histograms differ from each other.