

Title:
QUALIFICATION OF A DERMASCOPE IMAGING DEVICE
Document Type and Number:
WIPO Patent Application WO/2024/002649
Kind Code:
A1
Abstract:
Qualifying an unqualified dermascope imaging device for use with an image classification algorithm is described. A qualification data set comprising a plurality of pairs of images of skin lesions is accessed, wherein each pair comprises an image of a skin lesion captured by an unqualified dermascope imaging device and an image of the skin lesion captured by a qualified dermascope imaging device. Using the image classification algorithm, a confidence value of classification of each image is computed. A similarity metric is measured between the unqualified and qualified dermascope imaging device using differences in the confidence values between images of each pair. Qualifying the unqualified dermascope imaging device for use with the image classification algorithm is done in response to a comparison between the similarity metric and a similarity threshold.

Inventors:
GREENHALGH JACK HECTOR (GB)
Application Number:
PCT/EP2023/065292
Publication Date:
January 04, 2024
Filing Date:
June 07, 2023
Assignee:
SKIN ANALYTICS LTD (GB)
International Classes:
G06T7/00; A61B5/00
Other References:
CUGMAS BLAZ ET AL: "Color constancy in dermatoscopy with smartphone", PROGRESS IN BIOMEDICAL OPTICS AND IMAGING, SPIE - INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING, BELLINGHAM, WA, US, vol. 10592, 7 December 2017 (2017-12-07), pages 105920G - 105920G, XP060098312, ISSN: 1605-7422, ISBN: 978-1-5106-0027-0, DOI: 10.1117/12.2297252
Attorney, Agent or Firm:
CMS CAMERON MCKENNA NABARRO OLSWANG LLP (GB)
Claims:
CLAIMS

What is claimed is:

1. A computer-implemented method for qualifying an unqualified dermascope imaging device for use with an image classification algorithm, the method comprising the steps of:

- accessing a qualification data set comprising a plurality of pairs of images of skin lesions, wherein each pair comprises an image of a skin lesion captured by an unqualified dermascope imaging device and an image of the skin lesion captured by a qualified dermascope imaging device;

- computing, using the image classification algorithm, a confidence value of classification of each image;

- measuring a similarity metric between the unqualified and qualified dermascope imaging device using differences in the confidence values between images of each pair;

- qualifying the unqualified dermascope imaging device for use with the image classification algorithm in response to a comparison between the similarity metric and a similarity threshold.

2. The computer-implemented method of claim 1, wherein the similarity metric is any of: a mean squared difference, a dot product, a cross entropy loss.

3. The computer-implemented method of claim 1 or claim 2, wherein the similarity metric is

MCD(A, B) = (1 / (N * K)) * Σ_{j=0}^{K-1} Σ_{i=0}^{N-1} (A_ij - B_ij)^2

where A is a matrix of confidence values of image elements of image ij from the qualified dermascope imaging device, B is a matrix of confidence values of image elements of image ij from the unqualified dermascope imaging device, i is a confidence associated with a lesion type, j is a specific lesion image in the calibration data set, N is the total number of lesion types, and K is the total number of lesion image pairs.

4. The computer-implemented method of claim 1, wherein the similarity threshold is calculated by: obtaining a calibration data set of skin lesion images where a confidence value of each image computed using the image classification algorithm is known and wherein each skin lesion image is labelled as a lesion type comprising any of: malignant, premalignant, benign; applying a same series of parameterised distortions and/or transformations to each image from the calibration data set; computing, using the image classification algorithm, a confidence value of a classification of each image of the distorted and/or transformed images; measuring the similarity metric between each image of the calibration data set and each corresponding distorted/transformed image; determining an area under a receiver operator characteristic curve, AUROC, for each skin lesion image of type malignant and premalignant, the AUROC having a confidence interval; recording the similarity metric and AUROC for each pair comprising a calibration image and a distorted/transformed image; selecting the largest similarity metric before any AUROC drops below its confidence interval as the similarity threshold.

5. The computer-implemented method of claim 4, wherein the distortions and/or transformations comprise any one or more of: scaling of a red RGB colour channel, scaling of a green RGB colour channel, scaling of a blue RGB colour channel, randomised down-sampling, randomised cropping, randomised rotation, randomised blurring, addition of randomised noise to the pixel values of the image, image translation, use of a generative adversarial network (GAN/CycleGAN) to generate images which appear to be captured using a different capture device.

6. The computer-implemented method of any of claims 4 to 5, wherein the confidence interval is calculated using bootstrapping with replacement.

7. The computer-implemented method of any preceding claim, wherein each pair of images are of lesion type benign.

8. The computer-implemented method of any preceding claim, wherein the image classification algorithm is a machine-learning based image classification algorithm.

9. The computer-implemented method of claim 8, wherein the confidence value of classification of each image is calculated using a final layer of the machine-learning based image classification algorithm.

10. The computer-implemented method of claim 8, wherein the confidence value of classification of each image is calculated using a layer other than the final layer of the machine-learning based image classification algorithm.

11. The computer-implemented method of any preceding claim, wherein the unqualified dermascope imaging device comprises an imaging device and a dermascope.

12. The computer-implemented method of claim 11, wherein the imaging device is a smartphone device or a DSLR camera.

13. The computer-implemented method of claim 3, wherein the total number of lesions is between 100 and 200.

14. The computer-implemented method of any preceding claim, wherein the qualified dermascope imaging device was qualified using the method of claim 1.

15. A dermascope imaging device comprising a processor and storage configured to perform the method of any preceding claim.

Description:
QUALIFICATION OF A DERMASCOPE IMAGING DEVICE

BACKGROUND

[1] Dermoscopy, also known as dermatoscopy, is a form of skin surface microscopy, often used in skin cancer diagnosis and, more generally, the examination of skin lesions. A dermascope, also known as a dermatoscope, is an instrument used to perform dermoscopy; the dermascope comprises a magnifier and a light source (polarized or non-polarized). A dermascope can be attached to an image-capture device (such as a digital camera or a smartphone), and a captured image of a skin lesion, by the image-capture device combined with the dermascope, is processed using an algorithm to distinguish between benign and malignant lesions.

SUMMARY

[2] The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not intended to identify key features or essential features of the claimed subject matter nor is it intended to be used to limit the scope of the claimed subject matter. Its sole purpose is to present a selection of concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.

[3] Disclosed herein is a computer-implemented method for qualifying an unqualified dermascope imaging device for use with an image classification algorithm, the method comprising the steps of: obtaining a qualification data set comprising a plurality of pairs of images of skin lesions, wherein each pair comprises an image of a skin lesion captured by an unqualified dermascope imaging device and an image of the skin lesion captured by a qualified dermascope imaging device; computing, using the image classification algorithm, a confidence value of classification of each image; measuring a similarity metric between the unqualified and qualified dermascope imaging device using differences in the confidence values between images of each pair; and qualifying the unqualified dermascope imaging device for use with the image classification algorithm in response to a comparison between the similarity metric and a similarity threshold.

DESCRIPTION OF THE DRAWINGS

[4] The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:

FIG. 1 is a schematic diagram of an example dermascope imaging device;

FIG. 2 is a schematic representation of distortions and/or transformations applied to an image of a skin lesion;

FIG. 3 is a block diagram relating to an example method for qualifying an unqualified dermascope imaging device for use with an image classification algorithm.

DETAILED DESCRIPTION

[5] The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present examples are constructed or utilized. The description sets forth the functions of the examples and the sequence of operations for constructing and operating the examples. However, the same or equivalent functions and sequences may be accomplished by different examples.

[6] The present technology is concerned with qualifying an image-capture device with a particular dermascope. A dermascope, also known as a dermatoscope, is an instrument used to perform dermoscopy; the dermascope comprises a magnifier and a light source (polarized or non-polarized). A dermascope can be attached to an image-capture device, and a captured image of a skin lesion, by the image-capture device combined with the dermascope, is processed using an algorithm to distinguish between benign and malignant lesions. An image-capture device combined with a dermascope is referred to as a dermascope imaging device.

[7] The term “qualifying” is used to refer to validating an image-capture device to be operable with a dermascope and specified image processing algorithm. The term “image-capture device” is used to refer to any device with an optical instrument to capture images, e.g. a digital camera, a hand-held smartphone, a tablet computer or other image-capture device.

[8] At present, the most common approach for qualifying a dermascope imaging device is to validate said dermascope imaging device using a prospective clinical study. This is both costly and very time-consuming, and also uses up considerable resources of the health services. Given the rate at which new image-capture devices, such as smartphones, are released or image-capture device software is updated, and the impracticality of running a prospective study for every new image-capture device to be qualified, an improved method to qualify an image-capture device in combination with a dermascope (i.e. a dermascope imaging device) has been developed.

[9] In order for a dermascope imaging device to be regarded as qualified, running a classification algorithm (e.g. a malignant/benign skin classification algorithm) on the “unqualified” dermascope imaging device produces similar predictions for a particular lesion or skin patch to a reference device. A reference device is a “qualified” dermascope imaging device which is known to reliably operate with the classification algorithm. The predictions can be quantified in terms of confidence values assigned to each captured image of a skin lesion. For example, a lesion with a confidence value of 0.6 for ‘benign melanocytic nevus’, 0.1 for ‘solar lentigo’, and 0.3 for ‘seborrheic keratosis’ for one dermascope imaging device should produce similar confidence scores when captured with another dermascope imaging device. If this is not the case, the dermascope imaging device is unsuitable for use with the classification algorithm.

[10] The inventors have recognized that there is enough variation between benign lesions captured from the general population to ensure that if the assertion (similar confidence scores regardless of capture device) holds for an unqualified dermascope imaging device with respect to a reference device, the unqualified dermascope imaging device can be qualified for use with the classification algorithm.

[11] There are several examples of the types of variation which can be observed between the captured images of different dermascope imaging devices, such as variations in white balance, variations in image resolution, proprietary post-processing (including blurring and sharpening of the image), differences in cross-polarised light emitting diodes (LEDs) used in the dermascope, variations in magnification used in the dermascope and/or de-mosaicking algorithms. If a classification algorithm produces similar confidence values for two different capture devices, the sensitivity and specificity of classifying the lesions will also be similar, regardless of the variations between capture devices.

[12] FIG. 1 shows an image-capture device 1a attached to a dermascope 1b, i.e. a dermascope imaging device 1. In some examples, the image-capture device 1a comprises a dermascope 1b, i.e. they are formed as a single device.

[13] A metric, the mean confidence distance (MCD), has been defined by the inventors which is calculated to define whether a dermascope imaging device is suitable for use with the classification algorithm. A method for determining the threshold to be applied to the MCD metric in order to qualify a new dermascope imaging device has also been defined by the inventors.

Table 1: Breakdown of lesion type numbers for an Exemplary Threshold Calibration Data Set

[14] Table 1 describes an exemplary data set used to develop the dermascope imaging device qualification method and calculate the MCD upper threshold. The MCD upper threshold is used to determine whether a dermascope imaging device has passed device qualification. The exemplary data described in Table 1 was collected empirically with appropriate consents.

[15] The labels for the lesions in Table 1 were provided by the exemplary sources as described in Table 2, with malignant (Melanoma, Squamous Cell Carcinoma (SCC) and Basal Cell Carcinoma (BCC)) lesions confirmed by histopathology.

Table 2: Breakdown of source of ground-truth for an Exemplary Threshold Calibration Data Set

[16] The qualification method compares images by using the predicted confidence values from a layer of a neural network of a machine learning based classification algorithm, e.g. the final layer of the neural network of the machine learning based classification algorithm. Other layers, such as the penultimate layer, or any other layer of the neural network, can alternatively or additionally be used.
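The confidence values described above can be illustrated with a minimal sketch. This is not the application's classifier; it only shows how raw outputs (logits) from a network layer could be converted into per-lesion-type confidence values via a softmax, with all numbers hypothetical:

```python
import math

def softmax(logits):
    """Convert raw final-layer outputs into confidence values that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative final-layer logits for one lesion image: one logit per lesion
# type (e.g. benign melanocytic nevus, solar lentigo, seborrheic keratosis).
logits = [2.0, 0.2, 1.1]
confidences = softmax(logits)
print([round(c, 3) for c in confidences])
```

The resulting vector is the per-image set of confidence values that the similarity metric below operates on.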

[17] To compare images of the same lesion from two capture devices (wherein one of the devices is a reference device which is already known to be operable with the classification algorithm), a matrix of confidence values is calculated for each image in a pair of images and the mean squared error (MSE) between the matrices is found. The MSE is calculated between image pairs for the two dermascope imaging devices (i.e. the unqualified dermascope imaging device and the qualified reference device), and the mean MSE over all the possible pairs calculated. This metric, referred to as the mean confidence distance (MCD), is a single value, which describes the similarity of the response of a classification algorithm for two devices.

[18] The MCD is defined as follows:

MCD(A, B) = (1 / (N * K)) * Σ_{j=0}^{K-1} Σ_{i=0}^{N-1} (A_ij - B_ij)^2

where A and B are matrices comprising confidence values for lesions for capture device A and B, respectively, i is the confidence for a lesion type, j is a specific lesion in the data set, N is the total number of lesion types, and K is the total number of lesions. The value of K, the total number of lesions, can be any number less than 1000, and in some scenarios between 100 and 200, e.g. 120. Two devices with an MCD below a pre-defined threshold can be assumed to give similar sensitivity and specificity in terms of a classification task. The above equation for the mean confidence distance is expressed in words as: the average mean squared difference between the confidence values of image ij from capture device A and image ij from capture device B. The confidence values of image ij are given in matrix format since there is one confidence value per predicted lesion type output by the classification algorithm, and one set of confidence values per image.
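The MCD computation above can be sketched in a few lines of Python. The device names and confidence values here are hypothetical, chosen only to illustrate the averaging over N lesion types and K image pairs:

```python
def mean_confidence_distance(A, B):
    """Mean confidence distance between two devices' confidence matrices.

    A and B are K x N lists: one row per lesion image pair, one column per
    lesion type, holding the classifier's confidence values.
    """
    K = len(A)     # total number of lesion image pairs
    N = len(A[0])  # total number of lesion types
    total = sum((A[j][i] - B[j][i]) ** 2 for j in range(K) for i in range(N))
    return total / (K * N)

# Two hypothetical devices photographing the same two lesions, with
# confidence values over three lesion types.
ref  = [[0.6, 0.1, 0.3], [0.2, 0.7, 0.1]]  # qualified reference device
cand = [[0.5, 0.2, 0.3], [0.2, 0.6, 0.2]]  # device under qualification
print(mean_confidence_distance(ref, cand))
```

Identical confidence matrices give an MCD of zero; larger disagreement between the two devices gives a larger MCD.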

[19] As an alternative to calculating the MCD (i.e. the similarity metric), or in addition to, the reciprocal of the dot product between the confidence values for each of the reference device and the unqualified device can be calculated as the similarity metric (instead of the mean square error). In another alternative to calculating the MCD as the similarity metric, the Cross Entropy Loss between the confidence values for each of the reference device and the unqualified device can be calculated, rather than the mean square error. Although the MCD metric is referred to throughout this application, the skilled person would understand that the alternatives mentioned could also be used as the similarity metric instead of the MCD metric.
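The two alternative similarity metrics mentioned above can be sketched as follows. These are minimal illustrative implementations with hypothetical confidence vectors, not the application's code:

```python
import math

def reciprocal_dot(p, q):
    """Reciprocal of the dot product of two confidence vectors: larger when
    the vectors agree less, so it behaves as a distance-like score."""
    return 1.0 / sum(pi * qi for pi, qi in zip(p, q))

def cross_entropy(p, q, eps=1e-12):
    """Cross entropy loss of device B's confidences q against device A's p."""
    return -sum(pi * math.log(qi + eps) for pi, qi in zip(p, q))

p = [0.6, 0.1, 0.3]  # reference device confidences (hypothetical)
q = [0.5, 0.2, 0.3]  # candidate device confidences (hypothetical)
print(reciprocal_dot(p, q), cross_entropy(p, q))
```

As with the MCD, both scores are smallest when the two devices produce matching confidence values, so the same thresholding logic applies.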

[20] In order for a dermascope imaging device to be regarded as qualified, the similarity metric between the dermascope imaging device and the reference device (qualified imaging device) is lower than or equal to a predetermined threshold.

[21] One method for determining the MCD threshold (i.e. the similarity threshold) involves first acquiring a statistically significant number of biopsy-labelled lesion images from a large number of dermascope imaging devices, including malignant, premalignant and benign lesion types. Each lesion is captured using a variety of dermascope imaging devices, ideally of varying image quality. Gathering a real-world data set of this nature would be impractical, time consuming, expensive and difficult. A more practical approach is to synthesise this data set by applying a set of distortions and/or transformations to a data set of images from a single dermascope imaging device, to simulate multiple dermascope imaging devices.

[22] Confidence values are found for the subset of lesions in the exemplary Threshold Calibration Data Set (Table 2) that are labelled by teledermatology. In Table 2, this subset comprises 1290 images. Typically, this subset of captured images is expected to comprise mostly or entirely “clearly benign” lesions for which a face-to-face specialist appointment or biopsy is not required. These lesions are therefore similar to the lesions used to perform dermascope imaging device qualification, as a user performing qualification is unlikely to have access to non-benign lesions.

[23] A series of parameterised distortions/transformations are applied to the captured images, using randomised (or otherwise) parameter values, and the similarity metric is calculated between the distorted and non-distorted images. These distortions/transformations may include any one or more of the following: scaling of a red RGB colour channel, scaling of a green RGB colour channel, scaling of a blue RGB colour channel, randomised down-sampling (reducing the size of the image, and then resizing to the original size), randomised cropping (where the crop size is randomised), randomised rotation (where the angle of rotation is randomised), and randomised blurring (where the size of the Gaussian kernel used to blur is randomised). Other distortions/transformations may include any of the following: addition of randomised noise to the pixel values of the image, image translation (whereby the pixels are shifted vertically, horizontally or diagonally), and use of a generative adversarial network (GAN/CycleGAN) to generate images which appear to be captured using a different capture device. The term distortion includes transformations as well as distortions, i.e. any alteration of the captured image.
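A few of these parameterised distortions can be sketched with numpy. The parameter ranges below are illustrative assumptions, not values from the application, and the down-sampling uses crude pixel repetition rather than proper interpolation:

```python
import numpy as np

def distort(image, rng):
    """Apply one randomised set of parameterised distortions to an RGB image
    given as an H x W x 3 float array in [0, 1]."""
    out = image.copy()
    # Scale each RGB colour channel by an independent random factor.
    for c in range(3):
        out[..., c] *= rng.uniform(0.8, 1.2)
    # Add randomised noise to the pixel values.
    out = out + rng.normal(0.0, 0.02, size=out.shape)
    # Randomised down-sampling: keep every n-th pixel, then resize back up
    # by pixel repetition.
    n = int(rng.integers(1, 3))  # n = 1 leaves the resolution unchanged
    small = out[::n, ::n]
    out = np.repeat(np.repeat(small, n, axis=0), n, axis=1)
    out = out[: image.shape[0], : image.shape[1]]
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.random((8, 8, 3))  # stand-in for a captured lesion image
distorted = distort(image, rng)
print(image.shape, distorted.shape)
```

Seeding a fresh generator identically for every image would reproduce the same distortion parameters across images, matching the requirement that identical parameters are applied at each iteration.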

[24] The artificial distortions/transformations are applied to the captured images to simulate dermascope imaging devices of varying quality. The inventors have recognized that a reason why this works effectively when used in the device qualification method is that if the classification algorithm (e.g. machine-learning classification algorithm) gives a different confidence value for a particular dermascope imaging device compared to a reference device, the performance of that particular dermascope imaging device is likely to also be different. Therefore, dermascope imaging devices that are inappropriate for the purpose of reliably and accurately identifying lesions when paired with a dermascope can be detected without needing to know exactly why or how the captured images vary.

[25] Fig. 2 shows various distortions applied to the same lesion image. The process of applying distortions/transformations is repeated for a number of iterations (e.g. 100 iterations). In some examples, for every iteration, each distortion/transformation is applied once. As seen in Fig. 2, four iterations of applying distortions/transformations are shown (Distortion 1 - Distortion 4).

[26] The images labelled by teledermatology are distorted using identical distortion parameters (at each iteration). Therefore, in some examples, every image will have the green RGB channel scaled by the same random value. The confidence values for these distorted images are then found via the classification algorithm, and the similarity metric is calculated between the distorted and original, non-distorted images. Table 3 shows the similarity values for each of the exemplary distortions from Fig. 2:

Table 3: Mean confidence distance (MCD) values achieved for the ‘teledermatology discharge’ lesions from exemplary Threshold Calibration data set, calculated for the original image and 4 sets of distortions.

[27] It can be seen in Table 3 that the severely distorted image in Distortion 3 (as seen in Fig. 2) produces the highest MCD value (i.e. is least similar to the original), whereas the least distorted image (Distortion 1) produces the lowest MCD value.

[28] At each iteration, the same parameterized distortions are applied to at least some (or all) of the remaining images in the exemplary threshold calibration data set (which comprises images labelled by histopathology and teledermatology). The exemplary data set comprises histologically confirmed malignant and premalignant lesions, and histologically and clinically confirmed benign lesions. The same randomised parameters as before are used to distort these reference images, and the area under the receiver operator characteristic curve (AUROC) is found for each malignant and premalignant lesion type, as ‘one-against-all’ (e.g. melanoma vs non-melanoma, basal cell carcinoma (BCC) vs non-BCC, etc.). Alternatives to this metric could also be used, including sensitivity, specificity, or accuracy.
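A 'one-against-all' AUROC can be computed from confidence values with the rank-sum (Mann-Whitney) formulation, sketched below with hypothetical confidence values rather than the application's data:

```python
def auroc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic:
    the probability that a positive scores above a negative (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# 'One-against-all' for melanoma: classifier confidences for melanoma lesions
# versus all other lesions (hypothetical values).
melanoma     = [0.9, 0.8, 0.6]
non_melanoma = [0.4, 0.5, 0.7]
print(auroc(melanoma, non_melanoma))
```

An AUROC of 1.0 means the confidence values perfectly separate the lesion type from the rest; 0.5 corresponds to chance-level separation.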

[29] The AUROCs for each malignant and premalignant lesion type are found for the original, non-distorted data set, along with the confidence intervals at, for example, 95% (the confidence intervals can be calculated using bootstrapping with replacement).
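Bootstrapping with replacement, as mentioned above, can be sketched as follows. This is an illustrative percentile-interval implementation with hypothetical confidence values; the resample count and 95% level are assumptions consistent with the text:

```python
import random

def auroc(pos, neg):
    """Rank-based AUROC (ties count half)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(pos, neg, n_boot=2000, alpha=0.05, seed=0):
    """Percentile confidence interval for the AUROC, estimated by
    resampling both score sets with replacement."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        bp = [rng.choice(pos) for _ in pos]  # resample with replacement
        bn = [rng.choice(neg) for _ in neg]
        stats.append(auroc(bp, bn))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

pos = [0.9, 0.8, 0.85, 0.6, 0.75]  # hypothetical positive-class confidences
neg = [0.4, 0.5, 0.7, 0.3, 0.55]
print(bootstrap_ci(pos, neg))
```

The lower bound of this interval is what the threshold-selection step compares each distorted-set AUROC against.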

[30] The MCD and AUROC for each lesion type are recorded for each simulated dermascope imaging device, for a total of, for example, 100 simulated dermascope imaging devices. These values are then sorted by MCD in ascending order, and the largest MCD before any AUROC drops below its lower confidence interval is taken as the predetermined MCD threshold. Table 4 shows the AUROCs achieved on this exemplary data set after the distortions shown in Fig. 2 have been applied.
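The sort-and-select rule above can be sketched directly. The records and lower bounds here are hypothetical, loosely echoing the exemplary numbers in this description:

```python
def mcd_threshold(records, lower_ci):
    """Select the MCD threshold: walk the simulated devices in ascending MCD
    order and return the largest MCD seen before any lesion type's AUROC
    drops below its lower confidence bound.

    records:  list of (mcd, {lesion_type: auroc}) per simulated device.
    lower_ci: {lesion_type: lower confidence bound} from the non-distorted set.
    """
    threshold = None
    for mcd, aurocs in sorted(records, key=lambda r: r[0]):
        if any(aurocs[t] < lower_ci[t] for t in lower_ci):
            break            # first device whose performance degrades
        threshold = mcd      # still within range: accept this MCD
    return threshold

lower = {"melanoma": 0.94, "bcc": 0.9338}  # hypothetical lower bounds
records = [
    (0.0031,   {"melanoma": 0.97, "bcc": 0.95}),
    (0.005871, {"melanoma": 0.95, "bcc": 0.94}),
    (0.006270, {"melanoma": 0.95, "bcc": 0.9305}),  # bcc out of range
]
print(mcd_threshold(records, lower))
```

With these illustrative inputs the function returns 0.005871, the largest MCD before any AUROC falls out of range.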

Table 4: Area under receiver operator characteristic (AUROC) curve for each lesion type, for non-distorted and distorted images created using 4 different sets of distortion parameters.

[31] By running the classification algorithm described above on the exemplary data set, an MCD threshold of 0.005871 is produced. At this level, the AUROC scores for all lesion types are within their acceptable ranges. The next highest MCD was 0.006270, which was still within range for all lesion types apart from BCC, for which the AUROC was 93.05% (less than the lower bound of 93.38%). The AUROC scores for this data are shown in Table 5, along with the acceptable AUROC ranges.

Table 5: Area under receiver operator characteristic (AUROC) curve for each lesion type, for non-distorted images and the distorted images used to determine the mean confidence distance (MCD) threshold (the AUROC value which is out of range is marked in boldface).

[32] Based on the exemplary data set, any unqualified dermascope imaging device which produces an MCD less than the predetermined threshold of 0.005871 (when compared to the reference device) has passed the device qualification and is, therefore, regarded as a qualified dermascope imaging device that is operable with the same classification algorithm and dermascope as the reference device.

[33] To apply the dermascope imaging device qualification method, a new data set of lesion images is collected. The data set comprises a plurality (e.g. 120) of images of skin lesions captured using a reference dermascope imaging device (e.g. Apple iPhone (trade mark) with DL1 (trade mark) basic dermascope), and a plurality of images (ideally the same number as the number of images taken by the reference device) of the same skin lesions captured with the dermascope imaging device to be qualified. An advantage of the present invention is that there is no need for the collected data set of lesion images to include malignant lesions, any lesions will do, whether they are malignant or benign - this means a human expert is not required to look through the captured images.

[34] The solution of the present invention has the benefit of being quicker, more practical and cheaper than running an entire prospective study to qualify a new image-capture device.

[35] The present application discloses, as shown in Fig. 3, a computer- implemented method for qualifying an unqualified dermascope imaging device for use with an image classification algorithm, the method comprising the steps of: obtaining 301 a qualification data set comprising a plurality of pairs of images of skin lesions, wherein each pair comprises an image of a skin lesion captured by an unqualified dermascope imaging device and an image of the skin lesion captured by a qualified dermascope imaging device; computing 302, using the image classification algorithm, a confidence value of classification of each image; measuring 303 a similarity metric between the unqualified and qualified dermascope imaging device using differences in the confidence values between images of each pair; qualifying 304 the unqualified dermascope imaging device for use with the image classification algorithm in response to the similarity value being below a similarity threshold.

[36] In some examples, the similarity threshold is calculated by: obtaining a calibration data set of skin lesion images where the confidence value of each image is known and wherein each skin lesion image is labelled as a lesion type comprising any of: malignant, premalignant, benign; applying a same series of parameterised distortions and/or transformations to each image from the calibration data set; computing, using a classification algorithm, a confidence value of a classification of each image of the distorted and/or transformed images; measuring the similarity metric between each image of the calibration data set and each corresponding distorted/transformed image; determining an area under a receiver operator characteristic curve, AUROC, for each skin lesion image of type malignant and premalignant, the AUROC having a confidence interval; recording the similarity metric and AUROC for each pair comprising a calibration image and a distorted/transformed image; selecting the largest similarity metric before any AUROC drops below its confidence interval as the similarity threshold. This way of computing the similarity threshold is found to give reliable and accurate results when the similarity threshold is used as part of the process for qualifying image capture devices. It is unexpectedly found that distorting/transforming the images gives a high quality similarity threshold and means that it is not necessary to obtain empirical images from vast numbers of labelled images of skin lesions obtained from different image capture devices. Various different similarity metrics are usable, such as mean squared difference, dot product, cross entropy loss.

[37] In some examples, the similarity metric is equal to

MCD(A, B) = (1 / (N * K)) * Σ_{j=0}^{K-1} Σ_{i=0}^{N-1} (A_ij - B_ij)^2

i.e. a mean squared difference between A and B over all the images, where A is a matrix of the confidence values for all images from the qualified capture device, B is a matrix of the confidence values for all images from the unqualified capture device (or vice versa), i is a confidence associated with a lesion type, j is a specific lesion image in the calibration data set, N is the total number of lesion types, and K is the total number of lesion image pairs. Using this similarity metric with the mean squared difference is found to be efficient to compute and to give high quality results in practice.

[38] In some examples, the distortions and/or transformations comprise any of: scaling of a red RGB colour channel, scaling of a green RGB colour channel, scaling of a blue RGB colour channel, randomised down-sampling, randomised cropping, randomised rotation, randomised blurring, addition of randomised noise to the pixel values of the image, image translation and/or use of a generative adversarial network (GAN/CycleGAN) to generate images which appear to be captured using a different capture device. In the case of scaling of a colour channel, the image capture device is a colour image capture device with a plurality of colour channels. Each of these ways of computing the distortions and/or transformations is found to be effective and give high quality results despite the fact that in many of these cases significant information is lost so that it may have been assumed the process would not work.

[39] In some examples, each pair of images of a skin lesion captured by an unqualified dermascope imaging device and by a qualified dermascope imaging device are of lesion type benign. This is a significant benefit as obtaining images of benign skin lesions is much more straightforward than for other types of skin lesion. In various examples the images captured by the unqualified dermascope imaging device are unlabelled, that is, it is unknown whether they depict benign, premalignant or malignant skin lesions.

[40] In some examples, the confidence interval is calculated using bootstrapping with replacement. This is found to be efficient and to give effective, high quality results.

[41] In some examples, the image classification algorithm is a machine-learning based image classification algorithm.

[42] In some examples, the confidence value of classification of each image is calculated using a final layer of the machine-learning based image classification algorithm. Using the outputs from the final layer enables a confidence value for each image element of the image to be computed in an efficient and accurate manner. During training the machine learning image classifier has been trained to produce accurate values at the output layer and so using these accurate values is beneficial.

[43] In alternative examples, the confidence value of classification of each image is calculated using a layer other than the final layer of the machine-learning based image classification algorithm. Using values from an earlier layer which is not the final layer improves processing speed since computation is not through all of the layers. Using values from an earlier layer is likely to give values which are less confident overall so that bigger differences between the confidence values of the image pairs are observed making the similarity metric more sensitive.

[44] In some examples, the unqualified dermascope imaging device comprises an imaging device and a dermascope. In some examples, the imaging device is a smartphone device. These options are practical devices which are readily commercially available and operable by lay people in domestic settings.

[45] In alternative examples, the imaging device is a DSLR camera.

[46] In some examples, the total number of lesion images is between 100 and 200. Thus the task of obtaining the images is practical to achieve manually.

[47] In some examples, the qualified dermascope imaging device was qualified using the above method.

[48] The present application also discloses a dermascope imaging device comprising a processor and storage storing computer executable instructions configured to perform the above method.

[49] The computer executable instructions are provided using any computer-readable media that is accessible by a computing-based device. Computer-readable media includes, for example, computer storage media such as memory and communications media. Computer storage media, such as memory, includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or the like. Computer storage media includes, but is not limited to, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM), electronic erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that is used to store information for access by a computing device. In contrast, communication media embody computer readable instructions, data structures, program modules, or the like in a modulated data signal, such as a carrier wave, or other transport mechanism. As defined herein, computer storage media does not include communication media. Therefore, a computer storage medium should not be interpreted to be a propagating signal per se. Although the computer storage media (memory) is shown within the computing-based device it will be appreciated that the storage is, in some examples, distributed or located remotely and accessed via a network or other communication link (e.g. using a communication interface).

[50] The term 'computer' or 'computing-based device' is used herein to refer to any device with processing capability such that it executes instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the terms 'computer' and 'computing-based device' each include personal computers (PCs), servers, mobile telephones (including smart phones), tablet computers, set-top boxes, media players, games consoles, personal digital assistants, wearable computers, and many other devices.

[51] The methods described herein are performed, in some examples, by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the operations of one or more of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. The software is suitable for execution on a parallel processor or a serial processor such that the method operations may be carried out in any suitable order, or simultaneously.

[52] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

[53] It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to 'an' item refers to one or more of those items.

[54] The operations of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.

[55] The term 'comprising' is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.

[56] It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the scope of this specification.