

Title:
SYSTEM AND METHOD FOR IDENTIFYING AN IMAGE
Document Type and Number:
WIPO Patent Application WO/2015/079012
Kind Code:
A1
Abstract:
A method of identifying an image, comprises operating a processor to: receive a first set of image data; extract a feature from the received image data; identify a matching image in accordance with the extracted feature; and output data pertaining to the matching image; wherein operating the processor to identify a matching image comprises operating the processor to: generate a plurality of target features based on the extracted feature; determine that a candidate feature of a candidate image is similar to at least one target feature in accordance with a predefined similarity criterion; and identify the candidate image as a matching image.

Inventors:
HUGHES MARK (IE)
SMEATON ALAN (IE)
Application Number:
PCT/EP2014/075910
Publication Date:
June 04, 2015
Filing Date:
November 28, 2014
Assignee:
UNIV DUBLIN CITY (IE)
International Classes:
G06K9/00
Foreign References:
US20130044944A12013-02-21
EP2490171A12012-08-22
Other References:
DAVID G LOWE: "Distinctive Image Features from Scale-Invariant Keypoints", INTERNATIONAL JOURNAL OF COMPUTER VISION, KLUWER ACADEMIC PUBLISHERS, BO, vol. 60, no. 2, 1 November 2004 (2004-11-01), pages 91 - 110, XP019216426, ISSN: 1573-1405, DOI: 10.1023/B:VISI.0000029664.99615.94
KOEN E A VAN DE SANDE ET AL: "Evaluating Color Descriptors for Object and Scene Recognition", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE COMPUTER SOCIETY, USA, vol. 32, no. 9, 1 September 2010 (2010-09-01), pages 1582 - 1596, XP011327419, ISSN: 0162-8828, DOI: 10.1109/TPAMI.2009.154
Attorney, Agent or Firm:
CASEY, Alan (4 Dublin, IE)
Claims:
Claims

1. A method of identifying an image, the method comprising operating a processor to:

receive a first set of image data;

extract a feature from the received image data;

identify a matching image in accordance with the extracted feature; and

output data pertaining to the matching image;

wherein operating the processor to identify a matching image comprises operating the processor to:

generate a plurality of target features based on the extracted feature;

determine that a candidate feature of a candidate image is similar to at least one target feature in accordance with a predefined similarity criterion; and

identify the candidate image as a matching image.

2. The method of claim 1, wherein the processor is operated to generate each of the plurality of target features by applying a respective function to the extracted feature.

3. The method of claim 1 or claim 2, wherein operating a processor to extract a feature from the received image data comprises operating the processor to:

determine that a subset of the first image data requires colour correction; and

perform colour correction on the subset of the first image data.

4. The method of any one of the preceding claims, wherein operating a processor to determine that a candidate feature is similar to at least one target feature comprises operating the processor to:

determine a vector representation of the target feature;

determine a vector representation of the candidate feature;

determine that the candidate feature is similar to the target feature if a distance between the vector representation of the target feature and the vector representation of the candidate feature is less than a threshold.

5. The method of any one of the preceding claims, wherein the first image data comprises data pertaining to an item of clothing.

6. The method of claim 5, wherein operating a processor to extract the feature from the received image data comprises operating the processor to:

identify a subset of the first image data, the identified subset comprising data pertaining to the item of clothing; and

extract the feature from the identified subset.

7. The method of claim 5 or claim 6, wherein the extracted feature comprises one or more of:

data pertaining to a pattern on the item of clothing;

data pertaining to a colour of the item of clothing;

data pertaining to a shape of the item of clothing;

data pertaining to a closing means of the item of clothing; and

data pertaining to a texture of the item of clothing.

8. The method of any one of the preceding claims, wherein operating a processor to receive first image data comprises operating the processor to perform one or more of:

cause a camera to acquire the first image data;

receive a user input identifying the image data; and

acquire the image data from a remote server.

9. A computer-readable medium comprising instructions, which when executed cause a processor to perform a method according to any one of claims 1 to 8.

10. An apparatus for identifying an image, the apparatus comprising a processor configured to:

receive a first set of image data;

extract a feature from the received image data;

identify a matching image in accordance with the extracted feature; and

output data pertaining to the matching image;

wherein identifying a matching image comprises:

generating a plurality of target features based on the extracted feature;

determining that a candidate feature of a candidate image is similar to at least one target feature in accordance with a predefined similarity criterion; and

identifying the candidate image as a matching image.

11. The apparatus of claim 10, wherein the processor is further configured to perform a method according to any one of claims 1 to 8.

12. A system for identifying an image, the system comprising:

a first apparatus according to claim 10 or claim 11; and

a user device configured to acquire image data and provide the acquired image data to the first apparatus.

13. Processing circuitry configured to perform a method according to any one of claims 1 to 8.

14. A method substantially as described herein with reference to figures 1 to 6.

15. A system substantially as described herein with reference to figures 1 to 6.

Description:
System and Method for Identifying an Image

Field of the Invention

This invention relates to identifying an image. More particularly, the disclosure relates to methods and systems of identifying a matching image for a received set of image data.

Background of the Disclosure

As the use of computing devices, and in particular mobile computing devices, increases, the amount of information available over the internet has increased dramatically. However, this increase can make it difficult, especially for users of mobile computing devices, to identify relevant information.

In particular, identification of relevant image data can be especially problematic. Currently, image searches are text-based. Accordingly, a user must first identify a suitable text-based description for a desired image and then search for images that have been classified or associated with a similar text-based description.

It would clearly be desirable for a user to instead perform a search for images based on identified image data.

Summary of the Disclosure

In accordance with an aspect of the invention, there is provided a method of identifying an image, the method comprising operating a processor to: receive a first set of image data; extract a feature from the received image data; identify a matching image in accordance with the extracted feature; and output data pertaining to the matching image; wherein operating the processor to identify a matching image comprises operating the processor to: generate a plurality of target features based on the extracted feature; determine that a candidate feature of a candidate image is similar to at least one target feature in accordance with a predefined similarity criterion; and identify the candidate image as a matching image. The processor may, for example, be operated to generate each of the plurality of target features by applying a respective function to the extracted feature. In this manner, a candidate image may be identified as a matching image if it matches the extracted feature or a variation of the extracted feature generated by applying a function thereto.

In an embodiment of the invention, operating a processor to extract a feature from the received image data comprises operating the processor to determine that a subset of the first image data requires colour correction; and to perform colour correction on the subset of the first image data. Advantageously, this allows for colour correction or colour balancing in local regions of the image without affecting colour in other regions of the image.

In an embodiment of the invention, operating a processor to determine that a candidate feature is similar to at least one target feature comprises operating the processor to: determine a vector representation of the target feature; determine a vector representation of the candidate feature; and determine that the candidate feature is similar to the target feature if a distance between the vector representation of the target feature and the vector representation of the candidate feature is less than a threshold.

The first image data may comprise data pertaining to an item of clothing. Operating a processor to extract the feature from the received image data may then comprise operating the processor to: identify a subset of the first image data, the identified subset comprising data pertaining to the item of clothing; and extract the feature from the identified subset. The extracted feature may comprise one or more of: data pertaining to a pattern on the item of clothing; data pertaining to a colour of the item of clothing; data pertaining to a shape of the item of clothing; data pertaining to a closing means of the item of clothing; and data pertaining to a texture of the item of clothing. Operating a processor to receive first image data may comprise operating the processor to perform one or more of: cause a camera to acquire the first image data; receive a user input identifying the image data; and acquire the image data from a remote server.

In accordance with an aspect of the invention, there is provided a computer-readable medium comprising instructions, which when executed cause a processor to perform any of the above-described methods.

In accordance with an aspect of the invention, there is provided an apparatus for identifying an image, the apparatus comprising a processor configured to: receive a first set of image data; extract a feature from the received image data; identify a matching image in accordance with the extracted feature; and output data pertaining to the matching image; wherein identifying a matching image comprises: generating a plurality of target features based on the extracted feature; determining that a candidate feature of a candidate image is similar to at least one target feature in accordance with a predefined similarity criterion; and identifying the candidate image as a matching image.

The processor comprised within the apparatus may be further configured to perform any of the above-described methods.

In accordance with an aspect of the invention, there is provided a system for identifying an image, the system comprising: a first apparatus as described above; and a user device configured to acquire image data and provide the acquired image data to the first apparatus.

In accordance with an aspect of the invention, there is provided processing circuitry configured to perform any of the above-described methods.

Brief Description of the Drawings

Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

Fig. 1 is a diagram of an image matching system in accordance with an embodiment of the invention.

Fig. 2 is a flow diagram depicting a method of identifying an image in accordance with an embodiment of the invention.

Fig. 3 is a flow diagram depicting an exemplary method of identifying a matching image according to an embodiment of the invention.

Fig. 4 is a flow diagram depicting an exemplary method of extracting a feature from received image data according to an embodiment of the invention.

Fig. 5 is a flow diagram depicting an exemplary method of determining that a candidate feature is similar to a target feature according to an embodiment of the invention.

Fig. 6 depicts exemplary processing stages of a method according to an embodiment of the invention.

Detailed Description

Embodiments of the invention comprise an image matching system 10 for processing received image data and identifying an image determined to match the image data. The system 10 comprises at least one user device 12 configured to receive an input, selection or identification of image data and to provide the image data to an image matcher 14 over a network 16.

The at least one user device 12 may comprise any device or terminal suitable for displaying image data and for receiving a user input via a user interface, e.g. a Graphical User Interface (GUI). For example, the user device 12 may comprise one or more of: a personal (or 'desktop') computer; a mobile computing device such as a digital camera, a smartphone, a tablet computer, a watch etc.; and any other suitable device. The image matcher 14 may be any suitable system or apparatus for processing received image data to identify an image determined to match the received data and output data in accordance with, or indicative of, the identified matching image. In an exemplary embodiment, the image matcher 14 comprises or is comprised within a server configured to communicate with one or more remote devices over the network 16. Additionally or alternatively, the image matcher 14 may comprise a computing device such as a personal (or 'desktop') computer or a mobile computing device such as a digital camera, a smartphone, a tablet computer, a watch etc. The image matcher 14 may additionally or alternatively comprise functionality comprised within one or more of the above described computing devices.

In what follows, the image matcher 14 will be referred to as a single element within the system 10. However, it will be appreciated that the image matcher may comprise multiple individual elements at which a received image is processed. Similarly, in what follows the image matcher 14 will be referred to as distinct from the user device 12. However, it will be appreciated that the image matcher 14 may comprise functionality comprised within the user device 12.

The user device 12 may communicate with the image matcher 14 using any suitable means. For example, the user device 12 and the image matcher 14 may communicate using one or more of: Bluetooth™; Near-Field Communication (NFC); Infra-Red (IR) communication; magnetic induction; or a wired or wireless network 16.

In an exemplary embodiment, the network 16 may comprise any network across which communications can be transmitted and received. For example, the network 16 may comprise a wired or wireless network. The network 16 may, for example, comprise one or more of: the internet; a local area network; a radio network such as a mobile or cellular network; a mobile data network; or any other suitable type of network. In one embodiment, the user device 12 communicates over the internet with the image matcher 14 operating on 'the cloud'.

In an embodiment in which the image matcher 14 functionality is comprised within the user device 12, the network 16 may comprise any suitable data connection or bus across which data may be communicated to the image matching functionality.

One or both of the image matcher 14 and the user device 12 may be configured to communicate with one or more respective databases 18, for example, via a wired or wireless connection. For example, the image matcher 14 and/or the user device 12 may write data to the one or more databases 18. Additionally or alternatively, the image matcher 14 and/or the user device 12 may retrieve data stored in, or accessible to, the database 18 via a wired or wireless connection, for example, via the network 16.

Figure 2 depicts an exemplary method 200 of identifying an image in accordance with an embodiment of the invention. The processing steps performed by the image matcher 14 may be performed by any suitable processing circuitry. For example, the method 200 may be performed by one or more processors operating at, or in association with, the image matcher 14.

At block 202, the image matcher 14 receives a first set of image data from the user device 12. The user device may acquire the image data by any suitable means. For example, the image data may be acquired using camera functionality comprised within or in communication with the user device 12.

Additionally or alternatively, a user may input a selection of a subset of image data stored on, or accessible to, one or both of the user device 12 and the image matcher 14. For example, the user may select data pertaining to (corresponding to, representative of etc.) one or more images available online and/or stored in the database 18.

In an exemplary embodiment, a user viewing (or browsing through) images stored on, or accessible to, the user device 12 may select data pertaining to some or all of the image and communicate the selected data to the image matcher 14.

For example, a user may select data pertaining to an image provided in a magazine, newspaper or website etc. Additionally or alternatively, a user may select one or more sets of image data from a plurality of image data options.

The first set of image data may comprise data pertaining to (or indicative or representative of) an entire image. Additionally or alternatively, the first set of image data may comprise a subset of image data, wherein the subset is determined (or defined) to relate to a 'region of interest' of a larger set of image data.

Extraction of the data determined to relate to the 'region of interest' may be performed by the user device 12 prior to transmission of the first set of data to the image matcher 14. Advantageously, this reduces the amount of data that must be transmitted over the network 16.

Additionally or alternatively, extraction of the 'region of interest' may be performed (or further refined) at the image matcher 14. It will be appreciated that extraction of the region of interest at the image matcher 14 reduces the processing requirements at the user device. In an exemplary embodiment, in which the user device acquires a preliminary image of a person wearing the item of clothing standing in front of a background, the first set of data pertains to the item of clothing only.
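By way of illustration only, one possible segmentation approach (the document does not prescribe any particular technique) is sketched below in Python using OpenCV's GrabCut algorithm; the rough bounding rectangle 'rect' around the clothing item is a hypothetical input.

import cv2
import numpy as np

def extract_region_of_interest(image, rect):
    """Segment a foreground region (e.g. an item of clothing) from the background.

    image: BGR image as a numpy array; rect: (x, y, w, h) rough bounding box.
    """
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # internal GrabCut state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
    # Pixels marked definite or probable foreground form the region of interest.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype('uint8')
    return image * fg[:, :, np.newaxis]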

In an embodiment in which the user selects one or more sets of image data from a plurality of image data 'options', the first set of image data corresponds to characteristics of clothing selected from a plurality of potential characteristics. For example, a user may be presented with a plurality of choices relating to the style, colour, size etc. of a desired clothing item. In this case, the received image data may then comprise clothing characteristics desired or required by the user. In this manner, a user can identify a desired clothing item even if an image of the desired clothing item is not available.

Responsive to receiving the image data at block 202, the image matcher 14 extracts, at block 204, at least one feature from the received image data.

The at least one feature may be extracted using any suitable image processing techniques. For example, the at least one feature may be extracted using any known local feature extraction techniques such as one or more of: edge detection; corner detection; blob detection; ridge detection; scale-invariant feature transformation; intensity thresholding; template matching; applying a Hough transform; and any other method of detecting and isolating portions or shapes of the image.

The at least one feature may comprise any subset of the image data. For example, in an embodiment in which the first set of image data comprises data representative of an item of clothing, the at least one feature may comprise a subset of the image data determined to correspond to the item of clothing. In an exemplary embodiment, in which the first set of image data comprises data representative of a person wearing an item of clothing, extracting the at least one feature comprises extracting (or identifying) data determined to correspond to the item of clothing from the remaining data comprised within the first set.

Additionally or alternatively, in an embodiment in which the first set of image data corresponds to an item of clothing, extracting a feature of the first set of image data may comprise extracting (or identifying) one or more of: data pertaining to a pattern on the item of clothing; data pertaining to a colour of the item of clothing; data pertaining to a closing means (e.g. a zip, fastener, button etc.) of the item of clothing; data pertaining to a texture of the item of clothing; and any other identifiable feature of the item of clothing.
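By way of illustration only, the following Python sketch shows local feature extraction using OpenCV's ORB detector, one instance of the corner-based techniques listed above; the document does not mandate any particular detector, and the parameter value is an assumption.

import cv2

def extract_features(gray_image):
    """Detect local keypoints and compute their descriptors (illustrative only)."""
    orb = cv2.ORB_create(nfeatures=500)          # cap on keypoints is illustrative
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    return keypoints, descriptors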

At block 206, the image matcher 14 identifies at least one matching image in accordance with the extracted feature. In an exemplary embodiment, the image matcher searches through a plurality of candidate images, which may for example be stored in the database 18, and determines that one or more of the candidate images is a 'match' for the extracted feature. The image matcher 14 identifies a candidate image to be a 'match' for the extracted feature if the candidate image is determined to comprise data pertaining to a feature similar to, or the same as, the extracted feature.

Identification of a matching image is discussed in more detail in relation to figure 4.

At block 208, the image matcher 14 outputs data corresponding to the matching image identified at block 206. The data output at block 208 may comprise any data indicative of, or relating to, the identified image. For example, the output data may comprise metadata relating to the identified image, e.g. a hyperlink or other identifier of a location at which the identified image can be viewed. Additionally or alternatively, the output data may comprise one or more of: data pertaining to a thumbnail (or reduced size) image of the identified image; a subset of the identified image (e.g. the subset of the data determined to pertain to a feature similar to the extracted feature); data pertaining to the (full-sized) identified image; and any other data indicative or representative of the identified image.

The image matcher 14 may output the data at block 208 via any suitable means. For example, the image matcher 14 may output the data via an audio and/or visual display comprised within the image matcher. Additionally or alternatively, at block 208, the image matcher 14 may transmit the data to one or more user devices 12 over the network 16.

In an exemplary embodiment, prior to (or during) extraction of the feature at block 204, the method 200 further comprises performing pre-processing of the image as depicted in Fig. 3.

At block 302, the image matcher 14 identifies a predetermined number of subsets or sub-regions of the received image data. For example, the image matcher 14 may divide the image into 8×8 sub-regions in a grid-type pattern.

At block 304, the image matcher 14 determines one or more respective characteristic values for each of the identified subsets. For example, the image matcher 14 may determine a respective indication of contrast and/or brightness for each subset.

At block 306, the image matcher 14 determines whether pre-processing of the image is necessary. The image matcher 14 may make this determination in accordance with any suitable measurement or indication determined from the received image data. In an exemplary embodiment, the image matcher 14 determines an indication of a global mean or average characteristic value based on the respective characteristic values of each of the identified subsets and determines whether or not pre-processing is necessary based on the value of the global mean. It will be appreciated that the global mean may be an arithmetic mean. However, the global mean may additionally or alternatively be the median, the mode, or any other central or typical value of the respective values determined for each sub-region.

For example, responsive to determining that the global mean is greater than a predefined threshold, the image matcher 14 may perform pre-processing on some or all of the identified subsets.

Similarly, responsive to determining that the global mean is less than a predefined threshold, the image matcher 14 may determine that pre-processing of the identified subsets is not required.
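A minimal Python sketch of the decision logic of blocks 302 to 306 is given below, assuming a greyscale image; the 8×8 grid follows the example above, while the brightness characteristic and the threshold value are assumptions.

import numpy as np

def needs_preprocessing(gray_image, grid=8, threshold=128.0):
    """Divide the image into a grid, compute per-tile brightness, and compare
    the global mean against a threshold (blocks 302-306, illustrative only)."""
    h, w = gray_image.shape
    tile_means = []
    for i in range(grid):
        for j in range(grid):
            tile = gray_image[i * h // grid:(i + 1) * h // grid,
                              j * w // grid:(j + 1) * w // grid]
            tile_means.append(tile.mean())      # per-subset characteristic value
    global_mean = np.mean(tile_means)           # could equally be a median or mode
    return global_mean > threshold, tile_means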

At block 308, responsive to determining that pre-processing of the data subsets is necessary, the image matcher 14 performs pre-processing on each subset in accordance with the respective characteristic value determined for that subset. The pre-processing may for example comprise one or more of: colour correction; smoothing; noise reduction; encoding; skin detection and removal; and/or any other techniques of processing an image prior to extraction or identification of an image feature. In an exemplary embodiment, the image matcher 14 performs pre-processing by implementing any one or more of: histogram equalisation; adaptive histogram equalisation; and contrast limited adaptive histogram equalisation.
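By way of example, contrast limited adaptive histogram equalisation, one of the techniques named above, is available in OpenCV and itself operates on a grid of tiles; the parameter values below are illustrative, not taken from the document.

import cv2

def preprocess(gray_image):
    """Apply contrast limited adaptive histogram equalisation per tile."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_image)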

Performance of the pre-processing with respect to a characteristic value of the subset (and not, e.g. a characteristic value of the entire image) avoids losing (e.g. blurring) data pertaining to features of the image. For example, pre-processing of the full set of image data may result in smoothing of the image, resulting in loss or distortion of features such as a texture, a colour, and/or a pattern on an item of clothing or a part thereof.

As discussed above, the pre-processing may comprise any suitable process performed on image data prior to extraction of one or more features from the image data. For example, the pre-processing step may comprise performing colour balancing (grey balancing, neutral balancing or white balancing) over the identified subset.

It will be appreciated that balancing the colour within a subset of the image in this manner is not the same as balancing the colour over the entire image. In particular, balancing the colour only within a subset means that characteristic (or desired) colour variations within the image data are preserved.

For example, in an embodiment in which the image data comprises data pertaining to an item of clothing with a particular colour, distortions (e.g. shadows, reflections, glare etc.) in the image data can be corrected using data in a region local to that in which the distortion occurs without altering the true colour of the item of clothing.
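A minimal sketch of such local colour balancing is given below, applying a grey-world correction to a single sub-region; the grey-world method is one possible balancing technique and is not mandated by the document. BGR channel order as in OpenCV is assumed.

import numpy as np

def grey_world_balance(region):
    """Balance colour within one sub-region only, leaving the rest of the
    image untouched (illustrative local correction)."""
    region = region.astype(np.float64)
    channel_means = region.reshape(-1, 3).mean(axis=0)
    grey = channel_means.mean()
    gains = grey / channel_means                # scale each channel towards grey
    return np.clip(region * gains, 0, 255).astype(np.uint8)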

Figure 4 depicts a method of identifying a matching image at block 206 of the method 200.

At block 402, the image matcher 14 generates a plurality of target features based on, or in accordance with, the feature extracted at block 204. The plurality of target features comprises the extracted feature together with at least one additional feature, which may be generated by applying a respective one of a plurality of functions to the extracted feature.

In an exemplary embodiment, the at least one additional feature is generated by generating at least one 'deformable model' or 'active contour model' based on the extracted feature. In this case, each of the at least one deformable models is generated by applying a respective 'deformation' function to the extracted feature. The deformation functions may comprise any suitable functions, for example any one or more of: a rotation; a skew; a perspective transformation; a spherical and/or pinch deformation etc.

In an exemplary embodiment in which the extracted feature corresponds to a feature of an item of clothing, application of a plurality of functions to the extracted feature results in a plurality of target features corresponding to the extracted clothing feature in a plurality of poses, orientations, or positions. For example, if the extracted feature is a striped pattern on a pair of trousers, the target features may correspond to multiple orientations of the striped pattern which may arise as a wearer of the trousers moves or runs. Similarly, if the extracted feature is a logo on a tee-shirt, the target features may correspond to multiple deformations of the logo which may arise in accordance with the body shape of the wearer.

In this manner, subsequent identification of 'matching' or similar images is not performed solely by matching the extracted feature, but additionally or alternatively by matching a plurality of 'variations' of the extracted feature. In particular, the variations of the extracted feature may be determined in accordance with information relating to the type of image data received. For example, as described above, where the image data is determined to comprise data pertaining to an item of clothing, the variations may be determined in accordance with one or more of potential positions, orientations, and movements of a wearer of the clothing.
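By way of illustration, the following sketch generates target features by applying a small set of deformation functions (rotations and an affine skew) to an extracted feature patch; the specific angles and skew coefficients are assumptions, and the document also mentions perspective, spherical and pinch deformations not shown here.

import cv2
import numpy as np

def generate_target_features(patch):
    """Return the original feature patch plus deformed variants (block 402)."""
    h, w = patch.shape[:2]
    targets = [patch]                                # the original feature is kept
    for angle in (-30, -15, 15, 30):                 # illustrative rotations
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        targets.append(cv2.warpAffine(patch, m, (w, h)))
    skew = np.float32([[1, 0.2, 0], [0.1, 1, 0]])    # illustrative skew
    targets.append(cv2.warpAffine(patch, skew, (w, h)))
    return targets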

At block 404, the image matcher 14 determines that a candidate feature of a candidate image is similar to at least one of the plurality of target features. As discussed above, the similarity determination may be made using any suitable means. In an exemplary embodiment, the image matcher searches through a plurality of candidate images, which may for example be stored in the database 18, and determines that one or more of the candidate images is a 'match' for at least one of the target features. The image matcher 14 identifies a candidate image to be a 'match' for a target feature if the candidate image is determined to comprise data pertaining to a feature similar to, or the same as, the target feature.

As discussed in more detail in relation to figure 5, the image matcher 14 may determine that a candidate image comprises a feature similar to at least one of the target features using any suitable means. For example, the image matcher 14 may use any one or more of the following functions: Singular Value Decomposition (SVD); Eigenvalue Decomposition; nearest neighbour determination; approximate nearest neighbour determination; locality sensitive hashing or any other suitable function.

At block 406, the image matcher 14 identifies the candidate image (i.e. the image determined to comprise a feature similar to at least one of the target features) as a matching image. In an exemplary embodiment, at block 406 the image matcher 14 performs an additional geometrical verification that the candidate image is a matching image. The geometrical verification may, for example, comprise a comparison of one or more characteristics of the candidate feature within the candidate image with a corresponding characteristic of the target feature. For example, the image matcher 14 may determine whether one or more characteristics of the candidate feature, such as its location, size or prominence relative to the candidate image and/or to other features within the candidate image, are similar to the corresponding characteristics of the target feature.
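A minimal sketch of such a geometrical verification is given below, comparing the normalised location and size of the candidate feature with those of the target feature; the representation of features by bounding boxes and the tolerance value are assumptions.

def geometric_match(target_box, target_shape, cand_box, cand_shape, tol=0.15):
    """Verify that a candidate feature occupies a similar relative position
    and size within its image as the target feature does (illustrative)."""
    def normalise(box, shape):
        x, y, w, h = box
        img_h, img_w = shape[:2]
        return (x / img_w, y / img_h, w / img_w, h / img_h)
    t = normalise(target_box, target_shape)
    c = normalise(cand_box, cand_shape)
    return all(abs(tv - cv) <= tol for tv, cv in zip(t, c))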

Figure 5 depicts an exemplary method of determining that a candidate feature of a candidate image is similar to a target feature in accordance with an embodiment of the invention.

At block 502, the image matcher 14 determines a feature vector representation of the target feature. As is known in the art, a feature vector is an n-dimensional vector of numerical feature values that are representative of a given feature. For example, the numerical feature values may correspond to (or be indicative of) pixel values of the image data pertaining to the target feature. In an exemplary embodiment, the numerical feature values of the feature vector are indicative of intensity values, or differences in intensity values, across the image data pertaining to the target feature. Additionally or alternatively, the numerical feature values may be indicative of any other suitable characteristic of the image data pertaining to the target feature.
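By way of illustration only, the sketch below builds a simple intensity-based feature vector, a normalised greyscale histogram of the feature patch; practical systems would typically use richer descriptors, and the bin count is an assumption.

import numpy as np

def feature_vector(patch, bins=64):
    """Return a normalised intensity histogram as an n-dimensional feature vector."""
    hist, _ = np.histogram(patch.ravel(), bins=bins, range=(0, 255))
    hist = hist.astype(np.float64)
    return hist / (hist.sum() + 1e-9)           # normalise for size invariance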

At block 504, the image matcher 14 determines a feature vector representation of a candidate feature identified in (or extracted from) a candidate image. The candidate feature may be identified or extracted in any suitable manner. For example, the candidate feature may be identified using any of the means discussed above in relation to extraction of the feature from the received image data at block 204 or generation of the target features in relation to block 402.

At block 506, the image matcher 14 compares the vector that was determined for the target feature, the 'target vector', with the vector that was determined for the candidate feature, the 'candidate vector'. This comparison may be performed in any suitable manner. For example, the image matcher 14 may determine that the target vector is similar to the candidate vector if the distance between the candidate and target vectors in the feature vector space is less than a predefined threshold.
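The comparison of block 506 then reduces to a distance test in the feature vector space, as sketched below; the use of Euclidean distance and the threshold value are assumptions, since the document leaves both unspecified.

import numpy as np

def is_similar(target_vec, candidate_vec, threshold=0.25):
    """Declare the candidate similar if its distance to the target vector is
    below a predefined threshold (block 506, illustrative only)."""
    return np.linalg.norm(target_vec - candidate_vec) < threshold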

Responsive to determining that the target feature matches the candidate feature, at block 506, the image matcher 14 continues processing at block 406 of figure 4.

In an exemplary embodiment, at block 504, the image matcher 14 determines a vector representation of a plurality of candidate features. In this case, at block 506 the image matcher 14 may identify a candidate vector from the plurality of candidate vectors that is a 'best match' for the target vector. For example, a vector may be identified as a 'best match' if it is the candidate vector closest to the target vector in the feature vector space.

The image matcher 14 may determine which of the plurality of candidate vectors is closest to the target vector in any suitable manner. For example, the image matcher may identify the closest candidate vector using one or more of: a ratio test; a nearest neighbour/proximity/similarity search; an approximate nearest neighbour search; or any other suitable means of identifying a vector from the plurality of candidate vectors that is a 'best match' or 'most similar' to the target vector.

In an embodiment, in which the image matcher 14 determines a vector representation of a plurality of candidate features, at block 506 the image matcher 14 identifies a first candidate vector that is determined to be the closest candidate vector to the target vector, and a second candidate vector that is determined to be the next closest candidate vector to the target vector (i.e. the second candidate vector is the closest of the plurality of candidate vectors other than the first candidate vector). The image matcher 14 then determines that the first candidate vector matches the target vector if a distance between the first candidate vector and the target vector is less than a distance between the second candidate vector and the target vector by more than a threshold amount.

In this manner, a candidate vector is only identified as a match for the target vector if it is closer to the target vector than the next closest vector by more than a threshold amount. Accordingly, the identified candidate vector comprises a 'unique' match or at least a significantly better match to the target vector than any other candidate vector.
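A minimal sketch of this 'unique match' test is given below: the closest candidate is accepted only if it beats the second-closest candidate by more than a margin. The margin value is illustrative.

import numpy as np

def best_unique_match(target_vec, candidate_vecs, margin=0.1):
    """Return the index of the closest candidate vector, or None if it is not
    closer than the runner-up by more than the margin (illustrative ratio-style test)."""
    dists = sorted((np.linalg.norm(target_vec - c), i)
                   for i, c in enumerate(candidate_vecs))
    if len(dists) < 2:
        return dists[0][1] if dists else None
    (d1, i1), (d2, _) = dists[0], dists[1]
    return i1 if (d2 - d1) > margin else None    # 'unique' match requirement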

If, at block 506, the image matcher 14 determines that the target feature does not match the candidate feature, the image matcher 14 may repeat the processing at block 504 in respect of a further candidate feature. The further candidate feature may, for example, be identified from the same candidate image. Additionally or alternatively, the further candidate feature may be identified from a further candidate image.

In this manner, blocks 504 and 506 may be repeated until a matching candidate feature (and, accordingly, a matching candidate image) is identified. Additionally or alternatively, the image matcher 14 may repeat the processing at blocks 504 and 506 until detection of a limiting condition.

The limiting condition may for example comprise one or more of: repetition of the processing at blocks 504 and 506 a predefined number of times; repetition of the processing at blocks 504 and 506 for a predefined duration of time; determination that no more candidate images are available; and any other suitable limiting condition. On detection of a limiting condition, the process may exit and the image matcher 14 may output data indicating that no matches have been identified.

In an exemplary embodiment, responsive to determining that no match has been identified before occurrence of a limiting condition, the image matcher 14 may modify the similarity criteria applied at block 506. For example, the image matcher 14 may reduce the required similarity threshold so that candidate features previously determined not to be similar to the target feature may now be determined to be similar to the target feature. The degree of similarity required to determine a match may then be iteratively reduced until such time as a 'best match' is identified. In this manner, the image matcher 14 will always return at least one matching candidate image.

Figure 6 depicts exemplary processing stages of a method according to an embodiment of the invention. At step 1, a smartphone user acquires an image comprising data pertaining to a person wearing a dress. As discussed previously, the image data may be acquired via a camera function comprised within the smartphone. Additionally or alternatively, the image data may be obtained from storage comprised within or accessible by the smartphone. For example, the image data may be acquired from a remote server over the network 16.

At step 2, data pertaining to a region of interest, in this example the dress, is extracted from the image acquired at step 1. As discussed in relation to figure 2, step 2 may be performed at one or both of the smartphone and the image matcher 14.

At step 3, the image matcher 14 determines that at least one subset of the data pertaining to the region of interest requires colour correction. In the exemplary method of figure 6, the colour correction is performed by balancing the red, green and blue pixels across each subset of the data. In this manner, characteristic features of the dress (for example the colour pattern of the dress, the fastening on the left side of the waist, and the gathering on the left hip) are preserved during the correction step.

At step 4, the image matcher 14 identifies and extracts target features from the data pertaining to the dress and identifies images determined to comprise similar features. In the example depicted in figure 6, the colour pattern of the dress, the fastening on the left side of the waist, and the gathering on the left hip are extracted as target features.

As discussed in relation to figure 4, the image matcher 14 searches for matching images based not only on the extracted target features but also on features based on deformations of those features. This can be seen, for example, in the features of the candidate images that are determined to match the target features. In particular, it can be seen that the features of the candidate images that are determined to match the target features have different orientations, forms etc. from the extracted features, and would not therefore be considered a match for the original target feature itself.

As discussed in relation to figure 2, the image matcher 14 may search for matching images from amongst any set of available images. For example, the image matcher 14 may search for matching images from amongst a plurality of images stored in, or in association with, the server within which the image matcher 14 is comprised and/or the software application running on the user's smartphone.

At step 5, the image matcher 14 transmits the data pertaining to the identified image to the user's smartphone for display.

The invention is not limited to the embodiment(s) described herein but can be amended or modified without departing from the scope of the present invention. It will be appreciated that the methods described are by way of example only and various modifications of the disclosed methods may be made. For example, the order in which steps of the methods are performed may be altered and/or individual steps may be omitted.