

Title:
DATA SET ACCESS FOR UPDATING MACHINE LEARNING MODELS
Document Type and Number:
WIPO Patent Application WO/2020/118432
Kind Code:
A1
Abstract:
Systems and methods for managing resources for a main server that updates implementations of machine learning models used by client devices. The main server allocates resources to jobs for client devices, including updates to machine learning models operating on or operated for those devices, based on restrictions on the use of data sets generated by the client devices. In addition to restrictions (or the lack thereof) on the uses of a client's data sets, other bases for allocating the main server's resources to that client's jobs and needs may be used.

Inventors:
GAGNON PAUL (CA)
BENJAMIN MISHA (CA)
Application Number:
PCT/CA2019/051784
Publication Date:
June 18, 2020
Filing Date:
December 11, 2019
Assignee:
ELEMENT AI INC (CA)
International Classes:
G06N20/00
Domestic Patent References:
WO2018125264A1 (2018-07-05)
WO2006102122A2 (2006-09-28)
Foreign References:
US20170364822A1 (2017-12-21)
US9721296B1 (2017-08-01)
US20170140262A1 (2017-05-18)
US20170034023A1 (2017-02-02)
US20140280065A1 (2014-09-18)
Attorney, Agent or Firm:
BRION RAFFOUL (CA)
Claims:

What is claimed is:

1. A system for managing resources for at least one main server, said at least one main server being used for machine learning applications, the system comprising:

- a data set reception module for receiving at least one data set from at least one client device;

- at least one resource allocation module for allocating resources of said at least one main server for use in service of said at least one client device, an allocation of said resources being based on at least one predetermined criterion;

- a model distribution module for distributing updates for at least one implementation of a machine learning model used by said at least one client device, said updates being distributed to said at least one client device;

- a model update module for updating said at least one implementation of said machine learning model; wherein

- said model update module updates said at least one implementation based on resources allocated for said at least one client device.

2. The system according to claim 1, wherein said model distribution module distributes said updates to said at least one client device based on said at least one predetermined criterion.

3. The system according to claim 1, wherein said updates are transmitted to said at least one client device whenever a new update is available.

4. The system according to claim 1, wherein said updates are transmitted to said at least one client device only at specific time intervals.

5. The system according to claim 1, wherein said at least one client device receives a fixed number of said updates per specific time interval.

6. The system according to claim 1, wherein updates are sent to said at least one client device based on whether data sets from said client device are available for use by said main server in updating said at least one implementation of said machine learning model.

7. The system according to claim 1, wherein updates are sent to said at least one client device based on whether data sets from said client device are available for use by said main server in creating at least one implementation of at least one other machine learning model.

8. The system according to claim 1, wherein said resources are allocated based on whether data from said at least one client device is available for use by said main server.

9. The system according to claim 1, wherein said updates are based exclusively on data sets received from said at least one client device.

10. The system according to claim 1, wherein said updates are based on data sets from multiple client devices.

11. The system according to claim 1, wherein said updates are based only on data sets from a specific set of client devices.

12. A system for managing resources for at least one main server, said at least one main server being used for machine learning applications, the system comprising:

- at least one parameter gathering module for gathering parameters regarding said at least one main server and at least one client device;

- a calculation module for calculating resource metrics based on said parameters gathered by said at least one parameter gathering module;

- a decision module for generating at least one decision regarding resource allocation based on said resource metrics and based on predetermined criteria;

- at least one resource allocation module for allocating resources of said at least one main server for use in service of said at least one client device, an allocation of said resources being based on said at least one decision generated by said decision module; Attorney Docket No. 1355P032W001

wherein said at least one main server is for updating at least one implementation of a machine learning model that operates on data from said at least one client device.

13. The system according to claim 12, wherein said resources include at least one of:

- processing time / processing cycles on said at least one main server;

- data transfer bandwidth for communicating with said at least one client device;

- an amount of random access memory for use by said main server;

- a priority setting for at least one job or task for said at least one client device;

- an order of priority for executing said at least one job or task for said at least one client device;

- parameters used by said machine learning model; and

- weights used by said machine learning model.

14. The system according to claim 12, wherein said predetermined criteria include existing agreements that provide for sending updated versions of said at least one implementation of said machine learning model to said at least one client device.

15. The system according to claim 14, wherein said at least one implementation of said machine learning model is updated based only on data received from said at least one client device.

16. The system according to claim 14, wherein said at least one implementation of said machine learning model is updated based on data received from a plurality of different client devices.

17. The system according to claim 14, wherein said at least one implementation of said machine learning model is updated based on data received from a specific set of client devices.

18. The system according to claim 14, wherein said updated versions of said at least one implementation of said machine learning model are transmitted to said at least one client device whenever a new update is available.

19. The system according to claim 14, wherein said updated versions of said at least one implementation of said machine learning model are transmitted to said at least one client device at specific time intervals.

20. The system according to claim 14, wherein said at least one client device receives a fixed number of said updated versions of said at least one implementation of said machine learning model per specific time interval.

21. The system according to claim 12, wherein said decision is based on whether data from said at least one client device is available for use by said main server.

22. The system according to claim 21, wherein said decision is based on whether said data from said at least one client device is available for use by said main server in updating said at least one implementation.

23. The system according to claim 21, wherein said decision is based on whether said data from said at least one client device is available for use by said main server in the creation of at least one implementation of at least one other machine learning model.

Description:
DATA SET ACCESS FOR UPDATING MACHINE LEARNING MODELS

TECHNICAL FIELD

[0001] The present invention relates to resource allocation for computer servers. More specifically, the present invention relates to systems and methods for allocating resources for updating implementations of machine learning models as well as for executing client jobs.

BACKGROUND

[0002] The rise of artificial intelligence, and more specifically of machine learning, has placed machine-assisted decision making at the forefront of some businesses. Computers running implementations of machine learning models are now making decisions for businesses, and these decisions run the gamut from bank loan applications and the pricing of products to video game related decisions. Such machines, and the implementations of machine learning models running on them, are fast becoming the backbone of many enterprises. However, as is well known, such models may need constant updating as more and more data is generated. Such updates ensure that a model performs properly and that outlier situations are properly considered. Models that are not updated, or that are updated infrequently, run the risk of becoming stagnant and prone to poor performance.

[0003] Such updates highlight two needs of machine learning systems as a whole: access to data and access to computing resources. Access to data is key to maintaining and improving the performance of such machine learning models: new data sets can be used to reinforce existing model behaviours, or they can be used to introduce new behaviours that address outlier data. The need for computing resources should be equally clear: without such resources, data sets cannot be uploaded from the devices that operate the models (i.e. the clients) to the servers that update, maintain, and generate such implementations of these models (i.e. the main servers). As well, without access to such resources (which include data transmission bandwidth, computing cycles, power resources, etc.), the implementations of these models would not be updated in a timely fashion and, as noted above, would run the risk of becoming obsolete or ineffective.

[0004] Based on the above, there is therefore a need for systems and methods that can manage resources for a main server, including managing the reception of data sets from clients as well as managing the updating of such implementations of machine learning models. Preferably, such systems and methods would take into account the efficiencies and/or economics of the situation as well as the desires of the operators of the clients.

SUMMARY

[0005] The present invention provides systems and methods for managing resources for a main server that updates implementations of machine learning models used by client devices. The main server allocates resources to jobs for client devices, including updates to machine learning models operating on or operated for those devices, based on restrictions on the use of data sets generated by the client devices. In addition to restrictions (or the lack thereof) on the uses of a client's data sets, other bases for allocating the main server's resources to that client's jobs and needs may be used.

[0006] In a first aspect, the present invention provides a system for managing resources for at least one main server, said at least one main server being used for machine learning applications, the system comprising:

- a data set reception module for receiving at least one data set from at least one client device;

- at least one resource allocation module for allocating resources of said at least one main server for use in service of said at least one client device, an allocation of said resources being based on at least one predetermined criterion;

- a model distribution module for distributing updates for at least one implementation of a machine learning model used by said at least one client device, said updates being distributed to said at least one client device;

- a model update module for updating said at least one implementation of said machine learning model; wherein

- said model update module updates said at least one implementation based on resources allocated for said at least one client device.

[0007] In a second aspect, the present invention provides a system for managing resources for at least one main server, said at least one main server being used for machine learning applications, the system comprising:

- at least one parameter gathering module for gathering parameters regarding said at least one main server and at least one client device;

- a calculation module for calculating resource metrics based on said parameters gathered by said at least one parameter gathering module;

- a decision module for generating at least one decision regarding resource allocation based on said resource metrics and based on predetermined criteria;

- at least one resource allocation module for allocating resources of said at least one main server for use in service of said at least one client device, an allocation of said resources being based on said at least one decision generated by said decision module;

wherein said at least one main server is for updating at least one implementation of a machine learning model that operates on data from said at least one client device.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] The embodiments of the present invention will now be described by reference to the following figures, in which identical reference numerals in different figures indicate identical elements and in which:

FIGURE 1 is a block diagram of a system according to one aspect of the present invention; and

FIGURE 2 is a block diagram of a system according to another aspect of the present invention.

DETAILED DESCRIPTION

[0009] The present invention relates to systems and methods for managing resources for at least one main server that is in communication with at least one client device. As noted above, a main server is one that updates/generates/trains implementations of machine learning models and distributes the generated/trained/updated implementation to clients or client devices. These clients are devices that run the implementations of the machine learning models and may be servers themselves, or they may be edge devices such as mobile devices, personal computers, or any other data processing device. If the clients are servers, these servers may run/operate the implementation of the machine learning model on behalf of or for edge devices such as those noted previously.

[0010] In one embodiment, the main server operates to update the implementation of the machine learning model running on the clients. This update may involve a retraining of the model using one or more data sets from a specific client, from multiple clients, or from a specific set of clients. Depending on the configuration of the main server and the client, the system of the present invention may use only data sets from that specific client to update the model running on that client. This ensures that the implementation of the model is specific to the data generated by that client. For a broader applicability of the update, the system may use the data from the specific client as well as data sets from other clients. The resulting updated model should then be applicable for use by clients other than the specific client. In yet another configuration, the data sets used to update the model may only be from specific clients. For this configuration, the clients may have a commonality suitable for the model. As an example, a specific client might only be interested in a model trained on or updated with data sets from clients dealing with retail sales. Thus, for this example, data sets from clients who deal with business-to-business transactions would not be used in updating the model.
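
As a minimal illustrative sketch only (the application prescribes no code), the data set scoping just described could be expressed as a filter over incoming data sets. All names here (DataSet, ModelVersion, select_training_sets) and the scope labels are invented for the illustration:

    # Illustrative sketch of paragraph [0010]; every name is invented here.
    from dataclasses import dataclass

    @dataclass
    class DataSet:
        client_id: str            # client that generated the data
        records: list             # the data itself

    @dataclass
    class ModelVersion:
        name: str
        scope: str                # "own", "group", or "all"
        group: frozenset = frozenset()   # client ids, used when scope == "group"

    def select_training_sets(version, client_id, data_sets):
        """Return only the data sets this model version may be trained on."""
        if version.scope == "own":       # version specific to one client's data
            return [d for d in data_sets if d.client_id == client_id]
        if version.scope == "group":     # e.g. retail-sales clients only
            return [d for d in data_sets if d.client_id in version.group]
        return list(data_sets)           # "all": broadest applicability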

[0011] For implementations that operate with the differing variants noted above, the main server may thus have multiple versions of the same machine learning model, with each version being updated/trained using differing data sets. For such implementations, the main server would ensure that the correct version of the model is transmitted to the correct client.

[0012] In a related implementation, the main server would also operate to update the models on the various clients, but the clients may control the main server's access to the data generated by the clients. For this implementation, a client may allow the data it generates to be used in the updating of the model that the client currently deploys. However, the client may also disallow the use of its data for updating the model for other clients (i.e. only the specific client's version of the model would be updated with that specific client's data). This limitation may be implemented to address privacy concerns, business considerations, etc. Similarly, the client may allow the main server to use that client's data in the updating of one or more models being used by that client. As well, a client may allow the main server free rein in using that client's data: the main server may thus use that client's data to update any models, create other models, or otherwise use the data for whatever uses the main server may see fit. Of course, the client may also constrain the main server's use of the data to protect its own privacy as well as the privacy of its users (or the privacy of those whose data may be included in the data set).
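
The per-client data use permissions described above might be recorded as simple flags in the client's profile. The following is a sketch under that assumption; the field names are not taken from the application:

    # Sketch of per-client data use permissions; field names are assumptions.
    from dataclasses import dataclass

    @dataclass
    class DataPermissions:
        update_own_model: bool = True      # data may refine this client's own model
        update_other_models: bool = False  # data may refine other clients' models
        create_new_models: bool = False    # "free rein": new models may be built from it

    def may_use(perms: DataPermissions, purpose: str) -> bool:
        allowed = {
            "own_update": perms.update_own_model,
            "other_update": perms.update_other_models,
            "new_model": perms.create_new_models,
        }
        return allowed.get(purpose, False)  # default deny for unknown purposes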

[0013] While data access is important, other variants of the present invention may be concerned with differing delivery options for the updates. As an example, a client may receive an update as soon as the update is ready, and this client may be configured to receive as many updates as are available. Alternatively, a client may receive only one update a month (even if there are multiple updates in a month). Fixed delivery time periods other than monthly can, of course, be implemented. Or a client may be entitled to only a fixed number of updates per calendar year. For this option, the client may need to manually pull or download the update from the main server as opposed to the main server pushing or sending the update to the client. Once a client has downloaded the number of updates that it is entitled to, the client can no longer download updates unless a different arrangement is made with the operators of the main server. Such limitations may be imposed on the client by the operators of the main server for various reasons, including the potential need to re-optimize the main server's configuration if the client downloads more updates, potential abuse of the system by the client, bandwidth efficiency concerns, etc.
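
A hedged sketch of the three delivery options just described (push every update, at most one per interval, or a fixed quota the client must pull); the policy names and defaults are invented:

    # Sketch of the delivery policies in paragraph [0013]; names are invented.
    import time

    def may_deliver(policy: str, last_sent: float, sent_so_far: int,
                    interval_s: float = 30 * 24 * 3600, quota: int = 4) -> bool:
        now = time.time()
        if policy == "every_update":       # push each update as it becomes ready
            return True
        if policy == "per_interval":       # e.g. at most one update a month
            return now - last_sent >= interval_s
        if policy == "fixed_quota":        # client pulls updates until quota is spent
            return sent_so_far < quota
        return False                       # unknown policy: deliver nothing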

[0014] It should be clear that access to the updates may be tied to the main server's access to the client's data. As an example, in one implementation, a client may be entitled to receive as many updates as are available if that client removes any restrictions on the use of its data sets. Thus, in exchange for the ability to access and use a client's data sets, the main server may configure its parameters so that that client would receive all updates. Similarly, if a client only allows the main server to use that client's data to update a specific set of models (e.g. only the models that the client is actually using), the main server may be configured to allow that client access to only a limited number of updates. For implementations that tie access to the client's data sets to the client's access to the updates, the general rule may be that increased access to (or a lack of restrictions on) a client's data set would entitle the client to more updates.
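
One possible reading of this rule, sketched with invented tier thresholds, maps the number of restrictions a client places on its data sets to an update entitlement:

    # Sketch of paragraph [0014]: broader data access earns more updates.
    def update_entitlement(restriction_count: int) -> str:
        if restriction_count == 0:    # unrestricted data sets: every update
            return "every_update"
        if restriction_count <= 2:    # some restrictions: periodic updates
            return "per_interval"
        return "fixed_quota"          # heavily restricted: a limited quota only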

[0015] It should be clear that access to the client's data sets (and the number of restrictions on their uses) may be tied to the resources that a main server allocates to the specific client or to servicing that specific client. The resources that may be allocated include computing cycles, time executing jobs specific to the specific client, processing or computing power (e.g. a number of virtual machines operating on the main server dedicated to servicing the specific client's needs), an amount of RAM allocated for jobs specific to the specific client, a number of processor cores dedicated to that specific client's needs/jobs, an amount of virtual memory dedicated to that specific client's needs/jobs, a priority given to the specific client's jobs/processes, etc. Again, as with a client's access to updates, the concept is that of tying the main server's access to the client's data to the amount of resources allocated to servicing that client's needs. As an example, updates to a model for the specific client may be given a higher priority on the main server if the specific client has minimal restrictions on the use of the data sets it generates. Similarly, processes on the main server that lead to updates to the model may be run/executed only for the specific client if that specific client has placed multiple restrictions on the uses for its data sets.
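
A sketch of this resource coupling, with invented numbers: the fewer the restrictions on a client's data sets, the more of each listed resource its jobs receive:

    # Sketch of paragraph [0015]; the weighting is invented for illustration.
    def allocate_resources(restriction_count: int) -> dict:
        openness = max(0, 5 - restriction_count)  # 0 (closed) .. 5 (fully open)
        return {
            "priority": openness,            # queue priority for the client's jobs
            "cores": 1 + openness,           # processor cores dedicated to the client
            "ram_gb": 4 * (1 + openness),    # RAM allocated to the client's jobs
        }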

[0016] While the above description ties access to a client's data to the amount of resources allocated to that client's jobs/needs, and ties that same access to the frequency and number of updates to models that the client may be running, other options are, of course, possible. As one option, a client may be able to access more resources and/or more updates if the client's owners/operators pay a premium to the operators/owners of the main server. Of course, a sliding scale of premium payments may be used such that higher payments would entitle a client to more resources and/or more (and more frequent) updates. This scheme can be combined with differing access/restrictions to the client's data sets so that a client may offset higher premium payments with more data set use restrictions and still be able to access sufficient resources and/or updates. Similarly, a client may pay lower premium payments and offset those with fewer data set use restrictions to be able to access sufficient resources and/or updates. Conversely, a client may pay high premium payments, not allow any use of its data sets, and still be able to access sufficient resources and updates. As can be seen, in this scheme, higher premium payments can be used to offset low data set use (i.e. high restrictions on the use of the client generated data sets) so as still to access suitable resources and updates. The scheme can also be extended to take into account the quality of the data from the client as well as whether the main server actually needs or requires more data. Such an extension would have the main server determining whether the data from the client is suitable for the main server's needs and, depending on that determination, applying suitable adjustments to the premium payments due from the client. Or, in another variant, the main server may request specific types of data (with suitably low data set restrictions) from the client and, if the client accedes to the request, the main server would, in essence, reward the client with more access to resources.
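
The offsetting between premium payments and data set openness might be reduced to a single access score, as in this sketch (the weights are invented):

    # Sketch of paragraph [0016]: payments and data openness are two levers
    # that can compensate for one another. The weighting is invented.
    def access_score(payment_tier: int, restriction_count: int) -> int:
        openness = max(0, 5 - restriction_count)
        return 2 * payment_tier + openness

    print(access_score(payment_tier=4, restriction_count=5))  # pays high, shares nothing: 8
    print(access_score(payment_tier=1, restriction_count=0))  # pays little, shares everything: 7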

[0017] It should also be clear that the above scheme may be extended to include the concept of real-time pricing for access to computing resources and/or updates. Such an extension of the above scheme would involve a real-time or near real-time monitoring of conditions involving the main server and any clients that would be affected. Based on the prevailing conditions, access to higher levels of resources and/or to more updates and/or to more frequent updates may require fewer restrictions on the client's data sets and/or higher premium payments. As an example, if an update for a model operating on the client is currently available and the transmission conduits between the main server and the client are busy with data traffic, the main server can check to see if such traffic can be pre-empted to upload the update to the client. Fewer restrictions on the client's data sets as well as a higher amount of payments may allow the main server to pre-empt the existing traffic. Of course, the main server would need to check the client's profile in a database to determine the restrictions (if any) on the client's data sets, the level of payments for the client (i.e. whether the client is on a high tier of payments necessitating a higher level of service), as well as the level of service to be provided to the client. Similarly, if a new data set is ready for use by the main server to train or update a model being used by the client, the main server may check the client's profile in the database to determine if other jobs running on the main server may be pre-empted by the model update or if the model update may be placed further down the priority queue. If the model update is to be placed further down the queue, this means that other jobs may be placed ahead of the model update and the client may not receive a model update until a later time.
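
The pre-emption check described here could reduce to a profile lookup, sketched below with invented field names and thresholds:

    # Sketch of the pre-emption decision in paragraph [0017].
    def may_preempt_traffic(profile: dict) -> bool:
        few_restrictions = profile.get("restriction_count", 99) <= 1
        high_tier = profile.get("payment_tier", 0) >= 3
        return few_restrictions and high_tier   # both conditions must hold

    client_profile = {"restriction_count": 0, "payment_tier": 3}
    if may_preempt_traffic(client_profile):
        print("pre-empt existing traffic and push the update now")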

[0018] It should, of course, be possible that the client does not run the implementation of the machine learning model. For such a variant, the main server may operate to run the model on behalf of the client. The client would thus upload the data to be run through the model on the main server and, depending on the prevailing conditions, the predetermined service level associated with that client, and the availability of the main server's resources, the main server would run the data through the model and transmit the results to the client. As noted above, the scheduling as to when the data would be passed through the model on the main server may be determined by conditions and parameters such as how busy the main server is, the service level for that client, and the main server's resources available to be tasked to the client's needs. Of course, if the client's data set is freely available to the main server for use in updating models and if the client is at a high service level, then the data may be placed at a high priority level. If, on the other hand, the client's data cannot be used for updates and the client is at a lower service level (perhaps due to subscribing to a lower payment tier), the data may be placed at a lower priority level. Thus, jobs for a specific client, and the priority assigned to such jobs, may be affected by the main server's access to the client's data set as well as the payment plan/payment tier that the client subscribes to. Other possibilities may, of course, exist. As an example, a client's jobs may be placed at a higher priority level if the prevailing conditions indicate that the main server is not busy; if the main server is busy, then that client's jobs may be placed at a lower priority. Or, conversely, a client's jobs may be placed at a high priority regardless of the prevailing conditions, or a client's jobs may be at a priority level such that they cannot be pre-empted from their place in the queue for execution.
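
The queue behaviour described in this paragraph is, in essence, a priority queue keyed on data availability and service level. A minimal sketch follows (the weighting is invented; heapq pops the smallest value, so priorities are stored negated):

    # Sketch of the job scheduling in paragraph [0018].
    import heapq

    job_queue = []

    def submit_job(job: str, data_open: bool, tier: int):
        priority = tier + (2 if data_open else 0)   # invented weighting
        heapq.heappush(job_queue, (-priority, job))

    submit_job("run client A's data through the model", data_open=True, tier=3)
    submit_job("run client B's data through the model", data_open=False, tier=1)
    print(heapq.heappop(job_queue)[1])   # client A's job executes first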

[0019] For the above variant where the main server operates an implementation of a machine learning model on behalf of a client, with the client uploading data sets to be run through that implementation, the updates to that model may also be governed by restrictions on the client's data as well as by the client's payment tier. As noted above, the frequency of model updates may be dependent on whether restrictions are placed on the use of the client's data set as well as on the service level that the client is entitled to, given the client's payment tier.

[0020] While the discussion mentions the use of data sets, such data sets are not the only forms of useful data that the main server may need and which the main server may receive in exchange for advantages for the client (e.g. more access to resources on the main server). In one variant, models which have been updated on the client side may be uploaded to the main server in exchange for such advantages. Or, to conserve bandwidth, instead of a completely updated model, the client may share the weights and/or parameters or hyperparameters that have been updated. These weights and/or parameters may be uploaded to the main server and, depending on the main server's need for such updates, the main server may reward the client with suitable advantages as noted above. In yet another variant, instead of simply the weights and/or parameters being exchanged between the main server and the client, the differences between the model on the main server and the model on the client may be what is exchanged. These differences, whether in the form of new weights, new parameters, updated links between nodes in the models, etc., may be exchanged regardless of where the model has been updated. When the main server receives new data that is useful for its models and/or the processes it is executing, the main server can provide some form of advantage to the client that the client may not have had previously, or the main server may extend the period of time during which the client has an already existing advantage. A form of compensation from the main server to the client, in exchange for the data useful to the main server, can thus be made. As noted above, this compensation may take the form of a technical advantage such as faster processing, more access to resources, and/or more access to data/models.
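
The difference exchange described here can be sketched with plain dictionaries standing in for model parameters; only the changed entries cross the network:

    # Sketch of paragraph [0020]: exchange only the weights that changed.
    def weight_diff(server_weights: dict, client_weights: dict) -> dict:
        """Return only the entries that differ between the two models."""
        return {k: v for k, v in client_weights.items()
                if server_weights.get(k) != v}

    server = {"w0": 0.10, "w1": -0.30, "w2": 0.07}
    client = {"w0": 0.10, "w1": -0.28, "w2": 0.07}
    print(weight_diff(server, client))   # {'w1': -0.28}: far smaller than a data set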

[0021] It should be clear that the above concept of exchanging data other than the data sets and the models themselves may have added advantages. Instead of exchanging data sets between the main server and the client, representations of these data sets may be exchanged, thereby mitigating potential risks to privacy in the event of a data breach or in the event that the communications between the main server and the client are intercepted. In addition, by exchanging representations of the data sets instead of the data sets themselves, a higher bandwidth efficiency may be achieved as the representations would, in effect, "compress" the actual data sets. Similarly, by exchanging the weights and/or parameters for updated models instead of the data sets that would be used to update/retrain the model, bandwidth efficiencies may again be achieved. Such weights and/or parameters may have a smaller data footprint than the data sets.

[0022] Referring to Figure 1, a block diagram of a system according to one aspect of the present invention is illustrated. As can be seen, the system 10 includes a data reception module 20, multiple resource allocation modules 30A, 30B, 30C, a model distribution module 40, and a model update module 50. A database 60 may also be present to store client profiles.

[0023] The data reception module 20 receives data and data sets from client devices and processes these data sets accordingly. When a data set is received, the system checks the profile of the client that the data set came from and, depending on the contents and settings in that profile, processes the data set. As an example, a data set coming from a client who has restricted the use of its data sets, such that these data sets cannot be used for updating the model, would be segregated from other data sets that would be used for model updates. Similarly, data sets from clients that allow their data sets to be used for model updates may have those data sets stored for these model updates.
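
The segregation performed by the data reception module might look like the following sketch, with invented store names and profile fields:

    # Sketch of the data reception module 20 in paragraph [0023].
    def receive_data_set(data_set, profile: dict, update_pool: list, quarantine: list):
        if profile.get("allow_model_updates", False):
            update_pool.append(data_set)   # eligible for retraining models
        else:
            quarantine.append(data_set)    # segregated from update data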

[0024] The multiple resource allocation modules 30A, 30B, 30C would each be tasked with allocating different resources for specific clients. As an example, the module 30A may be tasked with allocating processing cycles (i.e. processing time) to specific jobs for specific clients. The system would, again, assess that client's profile from the database 60 to determine the service level associated with that client, based on the client's settings for the main server's access to the client's data sets as well as the client's payment tier. The system would then use module 30A to allocate processing cycles to the jobs for that specific client. If a data set has just been received from that client, then that data set may be used in an update to the model, and the module 30A would allocate processing cycles to that model update process based on the client's profile. Similarly, module 30B may allocate a priority to the jobs for that client. With the processing cycles allocated, the module 30B would then assign a priority to the model update job for that client and place the job in a suitable queue for execution. The priority would, again, depend on the client's profile, including the client's restrictions on the use of its data sets and its payment tier. The third module 30C may be tasked with allocating processor cores to the jobs for clients. With the processing cycles allocated and priorities assigned, the system may thus assign which processor cores execute the model update. Again, this assignment of processor cores, and how many cores to assign, may be based on the client's profile. Of course, jobs other than model updates may be executed for the client. In the above described variant, the system may actually execute the implementation of the machine learning model on the client's data. Passing the client's data through the model would be a job executed on behalf of the client and would be subject to the settings and decisions assigned by the resource allocation modules.
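
The three modules could be sketched as one function each; the mapping from service level to concrete amounts is invented for illustration:

    # Sketch of the allocation modules in paragraph [0024].
    def allocate_cycles(service_level: int) -> int:   # module 30A
        return 10_000 * service_level                 # processing cycles granted

    def assign_priority(service_level: int) -> int:   # module 30B
        return min(service_level, 5)                  # queue priority, capped

    def assign_cores(service_level: int) -> list:     # module 30C
        return list(range(service_level))             # ids of cores to use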

[0025] As a specific module for use with implementations of the invention that distribute model updates, the system includes a model distribution module 40. For this implementation, the updates to the models operating in the clients would be distributed by the module 40. The schedule for the distribution as well as other parameters surrounding that distribution may be determined by module 40 based on the contents of the client’s profile as explained in the examples above.

[0026] A model update module 50 is also present for implementations where the main server updates machine learning models being executed on either the main server or on the client devices. This module 50 performs the update of the various models using the resources allocated by the resource allocation modules. Once a model has been updated, it can then be passed to the model distribution module.

[0027] The database 60, as noted above, contains the profiles of the various clients. The profiles may contain details of the limitations or restrictions placed by the client on its data sets. In addition, the profile may contain the client’s payment tier, its associated service level, as well as other details relating to the client’s account. The treatment of the client’s data as well as of the client’s version of the model being executed would, as explained above, depend on the entries in the client’s profile.

[0028] A more general system that can be used in real-time or near real-time pricing schemes is illustrated in Figure 2. As can be seen, the system 100 includes multiple parameter gathering modules 110A, 110B, 110C, a calculation module 120, a decision module 130, and multiple resource allocation modules 140A, 140B, 140C. A database 150 contains the various client profiles that would be used by the system in determining how to allocate resources.

[0029] It should be clear that the resource allocation modules in this variant of the present invention operate in a manner similar to the same modules in the system illustrated in Figure 1.

[0030] The various parameter gathering modules 110A, 110B, 110C of the system operate to gather current operating conditions for both the main server and the various clients being serviced by the main server. These parameters may include data traffic congestion both within the main server and between the main server and the various clients. As well, the parameters may include the number of jobs/tasks queued for execution on the main server, the amount of available memory, the processor utilization metrics for each processor core, as well as other metrics detailing the amount of activity in and for the main server. In addition, the parameter gathering modules gather data that serves as an indication of the amount of resources available to the main server.
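
A parameter gathering module of this kind could sample the listed metrics as follows; the use of the third-party psutil package is an assumption made for the sketch:

    # Sketch of a parameter gathering module from paragraph [0030].
    import psutil   # assumed third-party dependency; not named in the application

    def gather_parameters(queued_jobs: int) -> dict:
        return {
            "queued_jobs": queued_jobs,                          # jobs awaiting execution
            "available_ram": psutil.virtual_memory().available,  # free memory, in bytes
            "cpu_per_core": psutil.cpu_percent(percpu=True),     # utilization per core, %
        }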

[0031] The calculation module 120 determines the actual resources available to the main server based on the parameters gathered by the various modules 110A, 110B, and 110C.

[0032] As can be imagined, once the current conditions have been determined by the calculation module, the decision module 130 determines how to allocate resources to the various jobs for the various clients. This is done by consulting the various client profiles stored in the database 150. The decision module would, based on predetermined criteria such as available resources, priority requirements, and others, determine which resources (and how much of each) are to be allocated to which job for which client. For real-time or near real-time "pricing" of resources, the current conditions can be taken into account when determining charges for a client's access to resources such as processing time and data transmission bandwidth. The system may be configured such that, generally, at times of resource scarcity, a client will be charged more to access the scarce resource. The system may also be configured such that, instead of currency, the clients would be charged in terms of access to their data sets: to access resources at times of resource scarcity, more access to the client's data sets (i.e. fewer restrictions on their use by the main server) would be needed. This amount of access can, of course, be offset by an increase in the premium payments from the client. Once a resource allocation decision has been made by the decision module, the resource allocation modules would implement that decision.
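
Scarcity pricing of the kind described might be sketched as a utilization-dependent multiplier; the formula is invented for illustration:

    # Sketch of the scarcity pricing in paragraph [0032]: the scarcer the
    # resource, the more a client is "charged", whether in currency or in
    # relaxed data set restrictions.
    def price(base_rate: float, utilization: float) -> float:
        """utilization in [0, 1); charges climb as the resource saturates."""
        return base_rate / (1.0 - min(utilization, 0.99))

    print(round(price(1.0, 0.50), 2))   # 2.0  - moderate load
    print(round(price(1.0, 0.90), 2))   # 10.0 - a scarce resource costs more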

[0033] For clarity, the modules in Figures 1 and 2 are not to be taken as substitutes for one another. Rather, these modules may be taken as complementary - some implementations of the present invention may use some modules from Figure 1 and some modules from Figure 2. As such, hybrid implementations of the variants in Figures 1 and 2 are possible and, indeed, quite possibly preferable.

[0034] It should be clear that the various aspects of the present invention may be implemented as software modules in an overall software system. As such, the present invention may thus take the form of computer executable instructions that, when executed, implement various software modules with predefined functions.

[0035] Additionally, it should be clear that, unless otherwise specified, any references herein to 'image' or to 'images' refer to a digital image or to digital images, comprising pixels or picture cells. Likewise, any references to an 'audio file' or to 'audio files' refer to digital audio files, unless otherwise specified. 'Video', 'video files', 'data objects', 'data files' and all other such terms should be taken to mean digital files and/or data objects, unless otherwise specified.

[0036] The embodiments of the invention may be executed by a computer processor or similar device programmed in the manner of method steps or may be executed by an electronic system which is provided with means for executing these steps. Similarly, an electronic memory means such as computer diskettes, CD-ROMs, Random Access Memory (RAM), Read Only Memory (ROM) or similar computer software storage media known in the art, may be programmed to execute such method steps. As well, electronic signals representing these method steps may also be transmitted via a communication network.

[0037] Embodiments of the invention may be implemented in any conventional computer programming language. For example, preferred embodiments may be implemented in a procedural programming language (e.g., "C" or "Go") or an object-oriented language (e.g., "C++", "java", "PHP", "PYTHON" or "C#"). Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components.

[0038] Embodiments can be implemented as a computer program product for use with a computer system. Such implementations may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium. The medium may be either a tangible medium (e.g., optical or electrical communications lines) or a medium implemented with wireless techniques (e.g., microwave, infrared or other transmission techniques). The series of computer instructions embodies all or part of the functionality previously described herein. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies. It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink-wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server over a network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention may be implemented as entirely hardware, or entirely software (e.g., a computer program product).

[0039] A person understanding this invention may now conceive of alternative structures and embodiments or variations of the above all of which are intended to fall within the scope of the invention as defined in the claims that follow.