Title:
PROCESSING TASK SCHEDULING
Document Type and Number:
WIPO Patent Application WO/2019/175584
Kind Code:
A1
Abstract:
A computer system comprising at least one hardware processor configured to execute a first data processing instance having a unique identifier, the first data processing instance being one of a plurality of data processing instances, the first data processing instance being configured to: transmit, at a first start time of a repeated period, the unique identifier to a nominee data location in a storage location accessible by the plurality of data processing instances; read, after a dwell period from the first start time, nominee data stored in the nominee data location; and send, if the nominee data is the unique identifier of the first processing instance, an execution message to one data processing instance of a first set of the plurality of data processing instances, the execution message instructing the one data processing instance of the first set of the plurality of data processing instances to execute a processing task.

Inventors:
STURA, Christopher Albert Adam (10a Kings Road, West Drayton, Greater London UB7 9EF, GB)
Application Number:
GB2019/050710
Publication Date:
September 19, 2019
Filing Date:
March 13, 2019
Assignee:
CLOUDREACH EUROPE LIMITED (3rd Floor Saffron House, 6-10 Kirby Street, London EC1N 8TS, GB)
International Classes:
G06F9/48; G06F9/50
Domestic Patent References:
WO2014203023A1 (2014-12-24)
Foreign References:
US5513354A (1996-04-30)
US9647889B1 (2017-05-09)
Attorney, Agent or Firm:
SLINGSBY PARTNERS LLP (1 Kingsway, London, Greater London WC2B 6AN, GB)
Claims:
CLAIMS

1. A computer system comprising at least one hardware processor configured to execute a first data processing instance having a unique identifier, the first data processing instance being one of a plurality of data processing instances, the first data processing instance being configured to:

transmit, at a first start time of a repeated period, the unique identifier to a nominee data location in a storage location accessible by the plurality of data processing instances;

read, after a dwell period from the first start time, nominee data stored in the nominee data location; and

send, if the nominee data is the unique identifier of the first processing instance, an execution message to one data processing instance of a first set of the plurality of data processing instances, the execution message instructing the one data processing instance of the first set of the plurality of data processing instances to execute a processing task.

2. A computer system according to claim 1, wherein the first start time is one of a plurality of start times, one start time being associated with each repeated period.

3. A computer system according to claim 1 or 2, wherein the first start time is at a particular point in the repeated period.

4. A computer system according to any preceding claim, wherein the first data processing instance is configured to generate a clock and the repeated period and start time is monitored with reference to the clock.

5. A computer system according to any preceding claim, wherein the first data processing instance is configured to transmit the unique identifier by writing the unique identifier to the nominee data location.

6. A computer system according to any preceding claim, wherein the dwell period has a length of time that means the reading of nominee data occurs during the respective repeated period.

7. A computer system according to any preceding claim, wherein the first data processing instance is configured to send the execution message by transmitting the execution message to a distribution unit, the distribution unit being configured to permit the execution message to be received by only one data processing instance of the first set of the plurality of data processing instances.

8. A computer system according to any preceding claim, wherein the first data processing instance is configured to send an execution message to one data processing instance for each of a plurality of sets of the plurality of data processing instances, the execution message instructing the one data processing instance for each of a plurality of sets to execute the processing task.

9. A computer system according to any preceding claim, wherein the plurality of data processing instances are grouped into a plurality of sets of data processing instances, and the first data processing instance is configured to send the execution message by sending the execution message to one data processing instance of each set of the plurality of data processing instances.

10. A computer system according to any preceding claim, wherein the first data processing instance is configured to send the execution message by transmitting the execution message to a plurality of distribution units, each distribution unit being associated with one set of the plurality of data processing instances, each distribution unit being configured to permit the execution message to be received by only one data processing instance of the respective sets of the plurality of data processing instances.

11. A computer system according to any preceding claim, wherein each set of the plurality of data processing instances is associated with a respective location.

12. A computer system according to any preceding claim, wherein each of the plurality of data processing instances has a respective unique identifier.

13. A computer system according to any preceding claim, the computer system comprising at least one hardware processor configured to execute a second data processing instance, the second data processing instance being one of the plurality of data processing instances, the second data processing instance being configured to: receive the execution message to one of a first set of the plurality of data processing instances; and

execute the processing task.

14. A computer system according to claim 13, wherein the at least one hardware processor configured to execute the first data processing instance and the at least one hardware processor configured to execute the second data processing instance are the same at least one hardware processor.

15. A distributed computing system comprising:

a first computing system comprising at least one hardware processor configured to execute a first data processing instance having a unique identifier, the first data processing instance being one of a plurality of data processing instances, the first data processing instance being configured to:

transmit, at a first start time of a repeated period, the unique identifier to a nominee data location in a storage location accessible by the plurality of data processing instances;

read, after a dwell period from the first start time, nominee data stored in the nominee data location; and

send, if the nominee data is the unique identifier of the first processing instance, an execution message to one of a first set of the plurality of data processing instances, the execution message instructing the one of a first set of the plurality of data processing instances to execute a processing task; and

a second computing system comprising at least one hardware processor configured to execute a second data processing instance, the second data processing instance being one of the plurality of data processing instances, the second data processing instance being configured to:

receive the execution message to one of a first set of the plurality of data processing instances; and

execute the processing task.

16. A distributed computing system according to claim 15, the distributed computing system comprising a distribution unit, and wherein the first data processing instance is configured to send the execution message by transmitting the execution message to the distribution unit, and the distribution unit is configured to permit the execution message to be received by only one data processing instance of the first set of the plurality of data processing instances.

17. A distributed computing system according to claim 16, wherein the second data processing instance is configured to receive the execution message by requesting an execution message from the distribution unit.

18. A distributed computing system according to claim 17, wherein the second data processing instance is configured to request the execution message from the distribution unit on a periodic basis.

19. A distributed computing system according to any of claims 15 to 18, the distributed computing system comprising the storage location, the storage location being accessible by the plurality of data processing instances.

20. A method for causing the execution of a processing task, the method comprising:

transmitting, at a first start time of a repeated period, a unique identifier to a nominee data location in a storage location accessible by a plurality of data processing instances, the unique identifier being associated with a first data processing instance;

reading, after a dwell period from the first start time, nominee data stored in the nominee data location; and

sending, if the nominee data is the unique identifier of the first processing instance, an execution message to one of a first set of the plurality of data processing instances, the execution message instructing the one of a first set of the plurality of data processing instances to execute a processing task.

Description:
PROCESSING TASK SCHEDULING

This invention relates to a computer system comprising at least one hardware processor configured to execute a first data processing instance having a unique identifier.

In a distributed computing system, a plurality of data processing instances may be running across a plurality of computing systems. The data processing instances may run directly on the hardware of the computing system, may be a virtual machine running on a hardware abstraction layer running on the hardware of the computing system, and/or may be a container image that runs in an isolated process in the user space of an underlying operating system that may be a virtual machine or run directly on the hardware of the computing system.

These data processing instances may be used to execute processing tasks on a periodic basis. It can be advantageous if the processing instances are all similar in the way in which they operate and process tasks. This is because it means that new processing instances can be instantiated without customising the way in which the processing instances run. Therefore, the processing instances may be largely homogeneous and ideally have no specific specialisms for any individual processing instances. In this way, new processing instances can be easily initiated, or processing instances can be replaced, without requiring a specific configuration for the instance at initiation.

Such homogeneity in the way in which the processing instances operate can cause problems in the scheduling of the execution of processing tasks on a periodic basis. This is because it can be difficult to avoid having to specialise at least some of the processing instances to avoid the duplication of the execution of these processing tasks. Alternatively, it requires a highly complex system of locking of data to make sure that the processing tasks are not duplicated in an undesirable fashion. If at least some of the processing instances are not specialised to schedule the processing tasks then there is a high risk that a processing task may run on more than one processing instance when the task should only be run on one such instance. This means that the underlying computer system resources may be used in an inefficient manner because more resources may be consumed, across multiple processing instances, than is required for the execution of the processing task.

Therefore, it is desirable for there to be an improved system for the scheduling of the execution of a processing task across a plurality of data processing instances.

According to a first aspect of the present invention there is provided a computer system comprising at least one hardware processor configured to execute a first data processing instance having a unique identifier, the first data processing instance being one of a plurality of data processing instances, the first data processing instance being configured to: transmit, at a first start time of a repeated period, the unique identifier to a nominee data location in a storage location accessible by the plurality of data processing instances; read, after a dwell period from the first start time, nominee data stored in the nominee data location; and send, if the nominee data is the unique identifier of the first processing instance, an execution message to one data processing instance of a first set of the plurality of data processing instances, the execution message instructing the one data processing instance of the first set of the plurality of data processing instances to execute a processing task.

The first start time may be one of a plurality of start times, one start time may be associated with each repeated period. The first start time may be at a particular point in the repeated period. The first data processing instance may be configured to generate a clock and the repeated period and start time may be monitored with reference to the clock. The first data processing instance may be configured to transmit the unique identifier by writing the unique identifier to the nominee data location. The dwell period may have a length of time that means the reading of nominee data occurs during the respective repeated period.

The first data processing instance may be configured to send the execution message by transmitting the execution message to a distribution unit, the distribution unit may be configured to permit the execution message to be received by only one data processing instance of the first set of the plurality of data processing instances. The first data processing instance may be configured to send an execution message to one data processing instance for each of a plurality of sets of the plurality of data processing instances, the execution message instructing the one data processing instance for each of a plurality of sets to execute the processing task. The plurality of data processing instances may be grouped into a plurality of sets of data processing instances, and the first data processing instance may be configured to send the execution message by sending the execution message to one data processing instance of each set of the plurality of data processing instances. The first data processing instance may be configured to send the execution message by transmitting the execution message to a plurality of distribution units, each distribution unit may be associated with one set of the plurality of data processing instances, each distribution unit may be configured to permit the execution message to be received by only one data processing instance of the respective sets of the plurality of data processing instances.

Each set of the plurality of data processing instances may be associated with a respective location. Each of the plurality of data processing instances may have a respective unique identifier. The computer system may comprise at least one hardware processor configured to execute a second data processing instance, the second data processing instance may be one of the plurality of data processing instances, the second data processing instance may be configured to: receive the execution message to one of a first set of the plurality of data processing instances; and execute the processing task. The at least one hardware processor configured to execute the first data processing instance and the at least one hardware processor configured to execute the second data processing instance may be the same at least one hardware processor.

According to a second aspect of the present invention there is provided a distributed computing system comprising: a first computing system comprising at least one hardware processor configured to execute a first data processing instance having a unique identifier, the first data processing instance being one of a plurality of data processing instances, the first data processing instance being configured to: transmit, at a first start time of a repeated period, the unique identifier to a nominee data location in a storage location accessible by the plurality of data processing instances; read, after a dwell period from the first start time, nominee data stored in the nominee data location; and send, if the nominee data is the unique identifier of the first processing instance, an execution message to one of a first set of the plurality of data processing instances, the execution message instructing the one of a first set of the plurality of data processing instances to execute a processing task; and a second computing system comprising at least one hardware processor configured to execute a second data processing instance, the second data processing instance being one of the plurality of data processing instances, the second data processing instance being configured to: receive the execution message to one of a first set of the plurality of data processing instances; and execute the processing task.

The distributed computing system may comprise a distribution unit, and the first data processing instance may be configured to send the execution message by transmitting the execution message to the distribution unit, and the distribution unit may be configured to permit the execution message to be received by only one data processing instance of the first set of the plurality of data processing instances. The second data processing instance may be configured to receive the execution message by requesting an execution message from the distribution unit. The second data processing instance may be configured to request the execution message from the distribution unit on a periodic basis. The distributed computing system may comprise the storage location, the storage location may be accessible by the plurality of data processing instances.

According to a third aspect of the present invention there is provided a method for causing the execution of a processing task, the method comprising: transmitting, at a first start time of a repeated period, a unique identifier to a nominee data location in a storage location accessible by a plurality of data processing instances, the unique identifier being associated with a first data processing instance; reading, after a dwell period from the first start time, nominee data stored in the nominee data location; and sending, if the nominee data is the unique identifier of the first processing instance, an execution message to one of a first set of the plurality of data processing instances, the execution message instructing the one of a first set of the plurality of data processing instances to execute a processing task.

The present invention will now be described by way of example with reference to the accompanying drawings. In the drawings: Figure 1 shows a schematic diagram of a distributed computing system.

Figure 2 shows a logical diagram of the distributed computing system.

Figure 3 shows a flow diagram of a method for allocating processing tasks.

The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art.

The general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.

The present invention relates to a computer system comprising at least one hardware processor configured to execute a first data processing instance having a unique identifier, the first data processing instance being one of a plurality of data processing instances. The first data processing instance may be configured to transmit, at a first start time of a repeated period, the unique identifier to a nominee data location in a storage location accessible by the plurality of data processing instances. The first data processing instance may be configured to read, after a dwell period from the first start time, nominee data stored in the nominee data location. The first data processing instance may be configured to send, if the nominee data is the unique identifier of the first processing instance, an execution message to one data processing instance of a first set of the plurality of data processing instances, the execution message instructing the one data processing instance of the first set of the plurality of data processing instances to execute a processing task.

Figure 1 shows an example of a distributed computing system. This system can be used to schedule the execution of processing tasks by selected ones of a plurality of data processing instances. The distributed computer system comprises a plurality of computing systems 1. A first computing system 10 is described by way of example. The other computing systems 1 may be configured in an analogous, but not necessarily identical, manner. Each of these computing systems 1 may be a discrete server, or they may be part of a cluster of servers that work together to support the running of processing instances.

The first computing system 10 may comprise a processing section 12 and a storage location 14. The first computing system 10 may be capable of executing code to enable processing instances to run thereon to perform the methods described herein to schedule the execution of processing tasks. These methods may ultimately be implemented and controlled by the processing section 12. The processing section 12 could perform its methods using dedicated hardware, using at least one general purpose processor executing software code, or using a combination of the two. The processing section may comprise at least one hardware processor 16. The processor 16 executes software code stored in a non-transient way in software memory 18 in order to perform its methods. The processing section can read/write data from/to storage location 14. The storage location 14 may be in the form of a memory. Storage location 14 may comprise non-volatile memory and/or may be in the form of an array of discrete banks of memory such as hard disks. Whilst shown in Figure 1 as schematically being part of first computing system 10, the storage location 14 may be separate to first computing system 10 and connected to first computing system 10. The storage location 14 may be part of a distributed storage system. Such a distributed storage system may enable the mirroring of data stored in the storage location 14 with other storage locations. These other storage locations may be located in different locations to storage location 14.

As shown in figure 1, the computing systems 1 may be grouped into logical groups, e.g. as shown by groups of computing systems 2, 3 and 4. These groups may be based on the physical location of the computing systems 1. I.e. one set of computing systems 1 may be grouped together because they are located in the same general physical location, or alternatively because they together form a logical group of computing systems, for instance because they are provided by a common platform provider. The data processing instances that run on the computing systems may also be so grouped. For instance, a first set of data processing instances may be logically grouped together because they are running on computer systems that form a logical grouping (e.g. because they are in the same general physical location). Thus, the plurality of data processing instances running across the distributed computing system may be divided into a number of sets of data processing instances. It will be appreciated that although only three groups of computing systems, and thus processing instances, are shown in figure 1 there could be any number.
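The grouping of instances into sets can be pictured with a minimal data structure; the set names and instance identifiers below are hypothetical examples, not taken from the application:

```python
# Illustrative grouping of data processing instances into sets, e.g. by
# physical region or platform provider. All names here are hypothetical.
instance_sets = {
    "region-a": ["instance-01", "instance-02"],
    "region-b": ["instance-03", "instance-04"],
    "region-c": ["instance-05"],
}

# Because the instances are homogeneous, a scheduler only needs to pick a
# set (and deliver a message to any one member of it), not a specific
# instance within the set.
all_instances = [i for members in instance_sets.values() for i in members]
```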

The data processing instances may each be configured to maintain an internal clock. The internal clock provides a reference to the passing of time. The data processing instances may maintain the internal clock by reference to a clock being maintained by the underlying computing system. The internal clock may be synchronised to an external time source such as a network time provider.
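A minimal sketch of such an internal clock, assuming the synchronisation offset is obtained from some external time source (the class and method names are illustrative, not from the application):

```python
import time


class InternalClock:
    """Internal clock maintained by reference to the underlying system clock.

    The offset would normally be derived from an external time source such
    as a network time provider; here it is a plain attribute for illustration.
    """

    def __init__(self, offset_seconds=0.0):
        self.offset = offset_seconds

    def synchronise(self, reference_time):
        # Adjust the offset so that now() agrees with the external reference.
        self.offset = reference_time - time.time()

    def now(self):
        # Current time as seen by this instance's internal clock.
        return time.time() + self.offset
```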

The computing systems may be connected to a computer network 20 to permit communication between the computing systems themselves and also between the computing systems and other devices for instance to allow other devices to access data provided by the computing systems.

A logical diagram of the distributed computing systems is shown in figure 2. Figure 2 shows a plurality of data processing instances 25, 26, 27, 28 which are referenced generally as data processing instances 24. These data processing instances are executed by the computer systems 1 described herein. The data processing instances may be executed by one or more hardware processors comprised within the computer systems 1.

To assist with the description of the task scheduling one of the data processing instances is referenced as a first data processing instance 25 with the rest of the plurality of data processing instances referenced as data processing instances 24. It will be understood that this distinction is merely for the purposes of describing the task scheduling system and any of the plurality of data processing instances 24 can operate in the manner described with reference to the first data processing instance 25. The distributed computing system may comprise a storage location 30. The plurality of data processing instances 24 are connected to the storage location 30. The storage location 30 is accessible by the plurality of computing systems 1 to enable the data processing instances 24 to access the storage location 30. The storage location 30 may be a distributed storage location. The storage location 30 may be configured so that data stored in the storage location is mirrored between a plurality of storage locations 30. The plurality of storage locations 30 may be distributed in different locations to enable quick access of the data stored within the storage location 30 by the plurality of data processing instances 24 that are located in the same location as one of the storage locations. The data processing instances 24 can therefore read and write data to the storage location 30. The storage location 30 may be provided by a data processing instance 24 being executed by the computing systems 1. The storage location 30 may be configured to be a highly scalable distributed file system service that is capable of real-time data replication.

The distributed computing system may comprise a task distribution unit 31. The task distribution unit 31 may be configured to provide a scheduler queue. The task distribution unit 31 may operate as a First-In-First-Out (FIFO) buffer. The task distribution unit 31 may be configured to store data associated with a plurality of processing tasks. The data associated with the processing task may provide instructions to a data processing instance 24 to enable the data processing instance 24 to execute the processing task. Data can be read out of the task distribution unit 31 in the order in which it was written to the task distribution unit 31. When data is read out of the task distribution unit 31 the data is removed from the task distribution unit 31 so that the processing task is only assigned to one data processing instance 24 which accesses the task distribution unit 31. The data processing instances 24 may be capable of writing data to the task distribution unit 31. This data may be a processing task descriptor, i.e. data associated with a processing task that describes to a data processing instance 24 the details of the processing task so that the data processing instance 24 can execute the processing task. The distribution unit 31 may deliver data being read out of the task distribution unit 31 to a selected data processing instance. The data processing instances 24 may be configured to request data from the distribution unit 31. When a data processing instance 24 requests a read out of data from the task distribution unit 31 the data may then be removed from the data store of the task distribution unit 31 so that it cannot be read by another data processing instance 24. The data processing instance 24 may read out instructions associated with a single processing task when requesting data from the task distribution unit 31. The distribution unit 31 may be configured to deliver published messages (i.e. information about data processing tasks) to subscribing consumers (i.e. data processing instances).
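The behaviour of the task distribution unit 31 described above can be sketched as a FIFO buffer whose reads are destructive, so each task descriptor reaches only one consumer. The class name is illustrative; a real deployment would more likely use a message queue service:

```python
from collections import deque


class TaskDistributionUnit:
    """FIFO buffer of processing-task descriptors.

    Reading a task removes it from the buffer, so each published task is
    delivered to exactly one of the data processing instances that poll
    the unit.
    """

    def __init__(self):
        self._queue = deque()

    def publish(self, task_descriptor):
        # Append a task descriptor; tasks are read out in publication order.
        self._queue.append(task_descriptor)

    def request_task(self):
        # Return and remove the oldest task, or None if there is nothing
        # to do. A second consumer asking for the same task gets nothing.
        return self._queue.popleft() if self._queue else None
```

With this model, if two instances both request a task after a single publish, only the first request receives the descriptor, matching the single-assignment property described above.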

The method by which the distributed computing system allocates processing tasks will now be described with reference to figure 3. The allocation of tasks in the manner described herein is designed to assist in the execution of a code component (i.e. a processing task) at a regular interval with even distribution of execution across a potentially horizontally infinite cluster of data processing instances (i.e. processing instances running directly on physical hardware, processing instances running as virtual machines running on a hardware abstraction layer, and/or processing containers that run in user memory space in another processing instance). There may be no differentiation between the data processing instances. I.e. there may be no specialism to the data processing instances, and so the system has a very high level of redundancy. The allocation of tasks may be undertaken by any data processing instance at any given time. The decision over which data processing instance is assigned to allocate the tasks may be undertaken in the following way.

As shown at step 301, a data processing instance may be launched. This may be by physically booting up a computer system and/or by instantiating a virtual machine or processing container.

Once launched, the data processing instance may be configured to seek available processing tasks for processing and to attempt to become the data processing instance that allocates the processing task. The allocation of processing tasks occurs on a periodic basis, i.e. at the start of a repeated period of time. Thus, as shown in step 302, once the data processing instance has launched, the data processing instance starts to monitor the repeated period.

Whilst time can be used for this repeated period and is used in the example described herein, other infinitely incremental widely available environmental agents could be used. For instance, these could include artificial agents such as temperature variations or light saturation variations that produce a timeline. The environmental agent just needs to be such that it periodically changes at the same rate for all of the data processing elements of the distributed computing system.

The data processing instance may monitor the repeated period by referencing an incremental environmental agent. Where the repeated period is a repeated time period the data processing instance may monitor the repeated period by reference to a clock generated as described herein. The data processing instance may monitor the repeated period to determine when the repeated period has reached the start time of the repeated period. The start time of the repeated period may be when the clock reaches zero seconds for a given minute but another threshold repeated time may be used instead.
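Where the repeated period is a time period, determining how long to wait until the next start time reduces to simple modular arithmetic. This sketch assumes a 60-second period whose start time is zero seconds past the minute, as in the example above; the function name is illustrative:

```python
import time


def seconds_until_start_time(period_seconds=60, now=None):
    """Time remaining until the next start time of the repeated period.

    With the default 60-second period, the start time is the moment the
    clock reaches zero seconds for a given minute. Returns 0.0 when the
    current time is exactly a start time.
    """
    now = time.time() if now is None else now
    elapsed = now % period_seconds          # time since the last start time
    return (period_seconds - elapsed) % period_seconds
```

An instance could sleep for this duration and then trigger its self-nomination at the start time.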

When the repeated period reaches a first start time, the data processing instance is triggered to nominate itself as the data processing instance that allocates the next processing task. As shown in step 303, the data processing instance may nominate itself to allocate the next processing task by writing a unique identifier associated with the data processing instance to a storage location 30. The data processing instance may write the unique identifier to a nominee data location in the storage location. The data processing instance may have been assigned the unique identifier when the data processing instance was launched. The data processing instance may generate the unique identifier itself based on information associated with the data processing instance. For example, the unique identifier may be based on the location of the data processing instance, the environment in which the data processing instance is running, and/or the time and/or date on which the data processing instance was first launched.

The storage location 30 is a storage location that is configured to be accessible by the data processing instances that form part of the distributed computing system. Hence, the storage location 30 may be accessible by a plurality of data processing instances. Each of the data processing instances that is part of the distributed computing system is configured to also write its own unique identifier to the storage location 30. In particular, they write their unique identifiers to the nominee data location in the storage location 30. The clocks of each of the data processing instances may be synchronised, so in theory each of the data processing instances should write to the nominee data location at the same moment. However, in practice environmental factors mean that each of the data processing instances writes to the storage location at different times. For instance, network synchronisation, latency, and/or current workload may mean that the write to the storage location is delayed from the exact start time. Thus, in effect they transmit the unique identifier at the start time but it may not be written to the data location at exactly that time. This means that there will be one write to the nominee data location that occurs after the others. As each write overwrites the data that was currently in the nominee data location, the unique identifier of the data processing instance that writes to the nominee data location last is the unique identifier that is present in the nominee data location at the end of the write process.
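The last-write-wins behaviour of the nominee data location can be sketched with an in-memory dictionary standing in for storage location 30; the names used here are illustrative assumptions, not taken from the application:

```python
# In-memory stand-in for storage location 30; a real system would use
# shared storage accessible to all data processing instances.
nominee_store = {}

def nominate(instance_id: str, store: dict) -> None:
    # Every instance overwrites the same nominee data location, so the
    # identifier of the last writer is what remains in the store.
    store["nominee"] = instance_id

# Writes land at slightly different times due to latency and workload;
# each later write overwrites the earlier ones.
for writer in ["instance-a", "instance-b", "instance-c"]:
    nominate(writer, nominee_store)
```

After the loop, only the last writer's identifier survives in the nominee data location, which is the property the nomination scheme relies on.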

After a predefined period from the start time, which may be known as a dwell time, each of the data processing instances reads the unique identifier that is stored in the nominee data location. The data processing instances may read nominee data that is stored in the nominee data location. This is as shown in step 304. The nominee data may be a unique identifier of one of the plurality of data processing instances, notionally the one that wrote to the nominee data location last. The dwell time may be a non-zero period of time. The dwell time may be less than the repeated period so that the data processing instances check the nominee data location before the next round of writes to the nominee data location occurs. Preferably, the dwell time is long enough that all of the data processing instances have been given time to write to the nominee data location but short enough that there is time before the next start time to instruct the execution of the processing task. For example, the dwell time may be 5 seconds, 10 seconds, or 15 seconds.
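The dwell-then-read step could be sketched as below; the function name and the injectable sleep parameter are illustrative assumptions made so the sketch can be exercised without real delays:

```python
import time

DWELL_SECONDS = 10  # within the 5-15 second range suggested above

def read_nominee_after_dwell(store: dict, dwell: float = DWELL_SECONDS,
                             sleep=time.sleep) -> str:
    # Wait out the dwell period so that late writers have finished,
    # then read whichever identifier survived in the nominee location.
    sleep(dwell)
    return store.get("nominee")
```

The dwell simply gives every instance's delayed write time to land before anyone reads the nominee data.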

As shown in step 305, the data processing instances compare the nominee data to their own unique identifier. If the unique identifier contained in the nominee data matches the unique identifier of the data processing instance, then that data processing instance is taken to have been nominated to schedule the execution of the next processing task. As shown in step 306, the data processing instance that has been nominated sends an execution message that instructs the execution of the processing task. The execution message may be sent in a way that means only one of the plurality of data processing instances is instructed to execute the processing task. This may be done by sending the execution message to a distribution unit 31. The plurality of data processing instances may poll the distribution unit 31 periodically to see whether there is a new instruction to execute a processing task. When one of the data processing instances polls the distribution unit 31 and reads an instruction to execute, the distribution unit 31 may then remove the instruction from its data store. In this way the distribution unit 31 may operate as herein described. For instance, the distribution unit 31 may be a FIFO buffer.
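Steps 305 and 306 can be sketched using Python's standard FIFO queue as a stand-in for the distribution unit 31; the function name and message payload are illustrative assumptions:

```python
import queue

def schedule_if_nominated(my_id: str, store: dict,
                          distribution_unit: "queue.Queue") -> None:
    # Step 305: compare the nominee data with this instance's identifier.
    if store.get("nominee") == my_id:
        # Step 306: the nominated instance enqueues one execution
        # message; the FIFO distribution unit hands it to exactly one
        # polling instance, which removes it from the queue.
        distribution_unit.put("execute-task")

dist = queue.Queue()
store = {"nominee": "instance-c"}
schedule_if_nominated("instance-a", store, dist)  # not nominated, no message
schedule_if_nominated("instance-c", store, dist)  # nominated, one message
```

Because the queue delivers each message to a single consumer and the consumer's read removes it, only one data processing instance ends up executing the task, mirroring the described behaviour of the distribution unit.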

The process restarts at step 302 once the next start time of the repeated period is reached.

This system is advantageous because there are no specialisms within the data processing instances. The scheduling of the tasks is not reliant on a specialist task scheduler data processing instance because all of the data processing instances that take part in the distributed computing system are each capable of scheduling the execution of a task. This means that the underlying computing systems are more reliable in executing the processing tasks and, because of the reliability, are more efficient at executing the tasks.

As described herein, the plurality of data processing instances may be divided up based on their location. Therefore, there may be more than one set of data processing instances. In some situations, it may be desirable for the processing task to be executed in each of the locations, and so by one data processing instance in each set of data processing instances. In this case, there may be a distribution unit 31 associated with each location. The nominated data processing instance may therefore send the instruction to execute the data processing task to each of the distribution units 31, one for each set of the plurality of data processing instances. Each set of data processing instances may poll its associated distribution unit 31, and thus one of the data processing instances from each set receives the instruction to execute the task. This means that one data processing instance from each set, and thus potentially from each location, is instructed to and executes the processing task.
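The per-location fan-out can be sketched as follows, again using standard FIFO queues as stand-ins for the per-location distribution units 31; the location names and function name are illustrative assumptions:

```python
import queue

def schedule_everywhere(distribution_units: dict) -> None:
    # One distribution unit per location: the nominated instance sends
    # the execution message to each unit, so one instance from each
    # set (and thus each location) picks it up and runs the task.
    for unit in distribution_units.values():
        unit.put("execute-task")

units = {"location-a": queue.Queue(), "location-b": queue.Queue()}
schedule_everywhere(units)
```

Each set's instances poll only their own unit, so exactly one instance per location consumes the message, giving one execution of the task in every location.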

The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that aspects of the present invention may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.