Title:
MACHINE LEARNING MODEL DISTRIBUTION
Document Type and Number:
WIPO Patent Application WO/2022/171536
Kind Code:
A1
Abstract:
According to an example embodiment, a client device is configured to receive a validation model from a centralised unit device, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; collect radio measurements corresponding to the input of the validation model and parameters corresponding to the output of the validation model; obtain predicted parameters as the output of the validation model by feeding the collected radio measurements as the input into the validation model; compare the collected parameters and the predicted parameters; compute a plurality of gradient vectors for the plurality of model parameters of the validation model based on the comparison between the collected parameters and the predicted parameters; and transmit the plurality of gradient vectors for the plurality of model parameters of the validation model to the centralised unit device. Devices, methods and computer programs are disclosed.

Inventors:
ZHAO QIYANG (FR)
PARIS STEFANO (FR)
BUTT MUHAMMAD MAJID (FR)
ALI-TOLPPA JANNE (FI)
Application Number:
PCT/EP2022/052714
Publication Date:
August 18, 2022
Filing Date:
February 04, 2022
Assignee:
NOKIA TECHNOLOGIES OY (FI)
International Classes:
H04L41/16; H04L41/0893; H04W24/00
Foreign References:
US20040185786A12004-09-23
US20200186227A12020-06-11
US20200382968A12020-12-03
Attorney, Agent or Firm:
NOKIA EPO REPRESENTATIVES (FI)
Claims:
CLAIMS:

1. A client device (200), comprising: at least one processor (202); and at least one memory (204) including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the client device to: receive a validation model (601) from a centralised unit device (300), wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; collect radio measurements corresponding to the input of the validation model and parameters corresponding to the output of the validation model; obtain predicted parameters as the output of the validation model by feeding the collected radio measurements as the input into the validation model; compare the collected parameters and the predicted parameters; compute a plurality of gradient vectors (603) for the plurality of model parameters of the validation model based on the comparison between the collected parameters and the predicted parameters; and transmit the plurality of gradient vectors (603) for the plurality of model parameters of the validation model to the centralised unit device (300).

2. The client device (200) according to claim 1, wherein the input of the validation model corresponds to radio measurements and the output of the validation model corresponds to quality of service parameters.

3. The client device (200) according to claim 2, wherein the radio measurements comprise at least one of: reference signal received power, channel state information, or buffer status report, and/or wherein the quality of service parameters comprise at least one of: delay, error probability, or data rate.

4. The client device (200) according to any preceding claim, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the client device to compare the collected parameters and the predicted parameters by calculating a loss between the collected parameters and predicted parameters using a loss function indicated by the centralised unit device.

5. The client device (200) according to any preceding claim, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the client device to: receive a cluster model from a centralised unit device (300), wherein the cluster model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; collect radio measurements corresponding to the input of the cluster model and parameters corresponding to the output of the cluster model; obtain predicted parameters as the output of the cluster model by feeding the collected radio measurements as the input into the cluster model; compare the collected parameters and the predicted parameters; compute a plurality of gradient vectors for the plurality of model parameters of the cluster model based on the comparison between the collected parameters and the predicted parameters; and update the plurality of parameters of the cluster model based on the gradient vectors for the plurality of model parameters of the cluster model.

6. The client device (200) according to claim 5, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the client device to: transmit the plurality of model parameters (608) of the cluster model to the centralised unit device (300).

7. The client device (200) according to any preceding claim, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the client device to use the cluster model and/or the updated cluster model for data transmission.

8. A centralised unit device (300), comprising: at least one processor (302); and at least one memory (304) including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the centralised unit device to: transmit a validation model (601) to a plurality of client devices (200), wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; receive a plurality of gradient vectors (603) for the plurality of model parameters of the validation model from each client device (200) in the plurality of client devices; cluster the plurality of client devices based on the plurality of gradient vectors, wherein the clustering groups client devices with similar gradient vectors into one cluster; and transmit clustering data (605) to at least one distributed unit device (620), wherein the clustering data indicates the clustering of client devices connected to the at least one distributed unit device.

9. The centralised unit device (300) according to claim 8, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the centralised unit device to cluster the plurality of client devices based on the plurality of gradient vectors by performing: compute a pairwise gradient similarity for each client device pair in the plurality of client devices; and assign each client device in the plurality of client devices to a cluster that maximises the pairwise gradient similarity between the client device and client devices in the cluster.

10. The centralised unit device (300) according to claim 8 or claim 9, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the centralised unit device to: generate a cluster model for each cluster based on the gradient vectors received from the client devices in the cluster, wherein the cluster model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; and transmit the cluster model of each cluster to the client devices in the cluster.

11. The centralised unit device according to any of claims 8 - 10, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the centralised unit device to: receive a plurality of model parameters (608) of a cluster model from each client device in the cluster; update the plurality of parameters of the cluster model based on the received model parameters from the client devices in the cluster; and transmit the updated model parameters (610) of the cluster model to the client devices in the cluster.

12. The centralised unit device (300) according to any of claims 8 - 11, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the centralised unit device to: update the parameters of the validation model based on the received gradient vectors; and transmit the updated parameters of the validation model to the plurality of client devices.

13. A method (1100) comprising: receiving (1101) a validation model from a centralised unit device, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; collecting (1102) radio measurements corresponding to the input of the validation model and parameters corresponding to the output of the validation model; obtaining (1103) predicted parameters as the output of the validation model by feeding the collected radio measurements as the input into the validation model; comparing (1104) the collected parameters and the predicted parameters; computing (1105) a plurality of gradient vectors for the plurality of model parameters of the validation model based on the comparison between the collected parameters and the predicted parameters; and transmitting (1106) the plurality of gradient vectors for the plurality of model parameters of the validation model to the centralised unit device.

14. A method (1200) comprising: transmitting (1201) a validation model to a plurality of client devices, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; receiving (1202) a plurality of gradient vectors for the plurality of model parameters of the validation model from each client device in the plurality of client devices; clustering (1203) the plurality of client devices based on the plurality of gradient vectors, wherein the clustering groups client devices with similar gradient vectors into one cluster; and transmitting (1204) clustering data to at least one distributed unit device, wherein the clustering data indicates the clustering of client devices connected to the at least one distributed unit device.

15. A computer program product comprising program code configured to perform the method according to claim 13 or claim 14, when the computer program product is executed on a computer.

Description:
MACHINE LEARNING MODEL DISTRIBUTION

TECHNICAL FIELD

The present application generally relates to the field of wireless communications. In particular, the present application relates to a client device and a centralised unit device, and related methods and computer programs.

BACKGROUND

Machine learning (ML) may be used in future telecommunication networks for, for example, network optimisation and automation. For a global implementation of an ML model, the network should collect relatively large amounts of features and data in order to effectively model different radio environments, RAN configurations, user types, etc. This increases the complexity of the hyperparameters in the model. A complex model would converge slowly to the optimal state, or it would require a large amount of data and iterations to optimise the parameters. This also increases the amount and frequency of radio measurements in different network entities. Moreover, signalling of the measured data between network entities increases the load on the control channel. Furthermore, the inference time is also increased by the model complexity, which introduces extra delay in making decisions.

SUMMARY

The scope of protection sought for various example embodiments of the invention is set out by the independent claims. The example embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various example embodiments of the invention.

An example embodiment of a client device comprises at least one processor and at least one memory comprising computer program code. The at least one memory and the computer program code are configured to, with the at least one processor, cause the client device to: receive a validation model from a centralised unit device, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; collect radio measurements corresponding to the input of the validation model and parameters corresponding to the output of the validation model; obtain predicted parameters as the output of the validation model by feeding the collected radio measurements as the input into the validation model; compare the collected parameters and the predicted parameters; compute a plurality of gradient vectors for the plurality of model parameters of the validation model based on the comparison between the collected parameters and the predicted parameters; and transmit the plurality of gradient vectors for the plurality of model parameters of the validation model to the centralised unit device. The client device may, for example, enable the centralised unit device to find an ML model applicable to the client device.

An example embodiment of a client device comprises means for performing: receive a validation model from a centralised unit device, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; collect radio measurements corresponding to the input of the validation model and parameters corresponding to the output of the validation model; obtain predicted parameters as the output of the validation model by feeding the collected radio measurements as the input into the validation model; compare the collected parameters and the predicted parameters; compute a plurality of gradient vectors for the plurality of model parameters of the validation model based on the comparison between the collected parameters and the predicted parameters; and transmit the plurality of gradient vectors for the plurality of model parameters of the validation model to the centralised unit device.

In an example embodiment, alternatively or in addition to the above-described example embodiments, the input of the validation model corresponds to radio measurements and the output of the validation model corresponds to quality of service parameters. The client device may, for example, utilise ML models for predicting quality of service parameters.

In an example embodiment, alternatively or in addition to the above-described example embodiments, the radio measurements comprise at least one of: reference signal received power, channel state information, or buffer status report, and/or the quality of service parameters comprise at least one of: delay, error probability, or data rate. The client device may, for example, utilise ML models for predicting delay, error probability, and/or data rate.

In an example embodiment, alternatively or in addition to the above-described example embodiments, the at least one memory and the computer program code are further configured to, with the at least one processor, cause the client device to compare the collected parameters and the predicted parameters by calculating a loss between the collected parameters and predicted parameters using a loss function indicated by the centralised unit device. The client device may, for example, efficiently compare the collected parameters and the predicted parameters.
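
As an illustration of this loss-based comparison, the following is a minimal sketch, assuming the centralised unit device indicates one of two common loss functions by name; the indication mechanism and the set of supported losses are not specified here, so both names are illustrative.

```python
import numpy as np

def validation_loss(collected, predicted, loss_fn="mse"):
    """Compare collected and predicted parameters using the loss function
    indicated by the centralised unit device. Only mean squared error and
    mean absolute error are sketched here."""
    collected, predicted = np.asarray(collected), np.asarray(predicted)
    if loss_fn == "mse":
        return float(np.mean((collected - predicted) ** 2))
    if loss_fn == "mae":
        return float(np.mean(np.abs(collected - predicted)))
    raise ValueError(f"unsupported loss function indication: {loss_fn!r}")
```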

In an example embodiment, alternatively or in addition to the above-described example embodiments, the at least one memory and the computer program code are further configured to, with the at least one processor, cause the client device to: receive a cluster model from a centralised unit device, wherein the cluster model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; collect radio measurements corresponding to the input of the cluster model and parameters corresponding to the output of the cluster model; obtain predicted parameters as the output of the cluster model by feeding the collected radio measurements as the input into the cluster model; compare the collected parameters and the predicted parameters; compute a plurality of gradient vectors for the plurality of model parameters of the cluster model based on the comparison between the collected parameters and the predicted parameters; and update the plurality of parameters of the cluster model based on the gradient vectors for the plurality of model parameters of the cluster model. The client device may, for example, efficiently obtain an ML model applicable to the environment of the client device.

In an example embodiment, alternatively or in addition to the above-described example embodiments, the at least one memory and the computer program code are further configured to, with the at least one processor, cause the client device to: transmit the plurality of model parameters of the cluster model to the centralised unit device. The client device may, for example, enable the centralised unit device to improve the cluster model based on parameters obtained from different client devices in a cluster.

In an example embodiment, alternatively or in addition to the above-described example embodiments, the at least one memory and the computer program code are further configured to, with the at least one processor, cause the client device to use the cluster model and/or the updated cluster model for data transmission. The client device may, for example, efficiently predict parameters needed for packet transmission using the cluster model.

An example embodiment of a centralised unit device comprises at least one processor and at least one memory comprising computer program code. The at least one memory and the computer program code are configured to, with the at least one processor, cause the centralised unit device to: transmit a validation model to a plurality of client devices, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; receive a plurality of gradient vectors for the plurality of model parameters of the validation model from each client device in the plurality of client devices; cluster the plurality of client devices based on the plurality of gradient vectors, wherein the clustering groups client devices with similar gradient vectors into one cluster; and transmit clustering data to at least one distributed unit device, wherein the clustering data indicates the clustering of client devices connected to the at least one distributed unit device. The centralised unit device may, for example, find an ML model applicable to each client device.

An example embodiment of a centralised unit device comprises means for performing: transmit a validation model to a plurality of client devices, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; receive a plurality of gradient vectors for the plurality of model parameters of the validation model from each client device in the plurality of client devices; cluster the plurality of client devices based on the plurality of gradient vectors, wherein the clustering groups client devices with similar gradient vectors into one cluster; and transmit clustering data to at least one distributed unit device, wherein the clustering data indicates the clustering of client devices connected to the at least one distributed unit device.

In an example embodiment, alternatively or in addition to the above-described example embodiments, the at least one memory and the computer program code are further configured to, with the at least one processor, cause the centralised unit device to cluster the plurality of client devices based on the plurality of gradient vectors by performing: compute a pairwise gradient similarity for each client device pair in the plurality of client devices; and assign each client device in the plurality of client devices to a cluster that maximises the pairwise gradient similarity between the client device and client devices in the cluster. The centralised unit device may, for example, efficiently cluster the client devices.

In an example embodiment, alternatively or in addition to the above-described example embodiments, the at least one memory and the computer program code are further configured to, with the at least one processor, cause the centralised unit device to: generate a cluster model for each cluster based on the gradient vectors received from the client devices in the cluster, wherein the cluster model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; and transmit the cluster model of each cluster to the client devices in the cluster. The centralised unit device may, for example, efficiently obtain an ML model that is applicable to each client device in a cluster.

In an example embodiment, alternatively or in addition to the above-described example embodiments, the at least one memory and the computer program code are further configured to, with the at least one processor, cause the centralised unit device to: receive a plurality of model parameters of a cluster model from each client device in the cluster; update the plurality of parameters of the cluster model based on the received model parameters from the client devices in the cluster; and transmit the updated model parameters of the cluster model to the client devices in the cluster. The centralised unit device may, for example, efficiently update the cluster model in each device based on parameters obtained from other client devices in the cluster.

In an example embodiment, alternatively or in addition to the above-described example embodiments, the at least one memory and the computer program code are further configured to, with the at least one processor, cause the centralised unit device to: update the parameters of the validation model based on the received gradient vectors; and transmit the updated parameters of the validation model to the plurality of client devices. The centralised unit device may, for example, efficiently update the validation model based on the gradient vectors obtained from each client device.

An example embodiment of a method comprises: receiving a validation model from a centralised unit device, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; collecting radio measurements corresponding to the input of the validation model and parameters corresponding to the output of the validation model; obtaining predicted parameters as the output of the validation model by feeding the collected radio measurements as the input into the validation model; comparing the collected parameters and the predicted parameters; computing a plurality of gradient vectors for the plurality of model parameters of the validation model based on the comparison between the collected parameters and the predicted parameters; and transmitting the plurality of gradient vectors for the plurality of model parameters of the validation model to the centralised unit device.

An example embodiment of a computer program product comprises program code configured to perform the method according to any of the above client device related example embodiments, when the computer program product is executed on a computer.

An example embodiment of a method comprises: transmitting a validation model to a plurality of client devices, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters; receiving a plurality of gradient vectors for the plurality of model parameters of the validation model from each client device in the plurality of client devices; clustering the plurality of client devices based on the plurality of gradient vectors, wherein the clustering groups client devices with similar gradient vectors into one cluster; and transmitting clustering data to at least one distributed unit device, wherein the clustering data indicates the clustering of client devices connected to the at least one distributed unit device.

An example embodiment of a computer program product comprises program code configured to perform the method according to any of the above centralised unit device related example embodiments, when the computer program product is executed on a computer.

DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are included to provide a further understanding of the example embodiments and constitute a part of this specification, illustrate example embodiments and together with the description help to explain the principles of the example embodiments. In the drawings:

Fig. 1 shows an example embodiment of the subject matter described herein illustrating an example system in which various example embodiments of the present disclosure may be implemented;

Fig. 2 shows an example embodiment of the subject matter described herein illustrating a client device;

Fig. 3 shows an example embodiment of the subject matter described herein illustrating a centralised unit device;

Fig. 4 shows an example embodiment of the subject matter described herein illustrating ML model data flow;

Fig. 5 shows an example embodiment of the subject matter described herein illustrating a flow chart for machine learning model distribution;

Fig. 6 shows an example embodiment of the subject matter described herein illustrating a signalling diagram;

Fig. 7 shows an example embodiment of the subject matter described herein illustrating simulation results;

Fig. 8 shows another example embodiment of the subject matter described herein illustrating simulation results;

Fig. 9 shows another example embodiment of the subject matter described herein illustrating simulation results;

Fig. 10 shows another example embodiment of the subject matter described herein illustrating simulation results;

Fig. 11 shows an example embodiment of the subject matter described herein illustrating a method; and

Fig. 12 shows another example embodiment of the subject matter described herein illustrating another method.

Like reference numerals are used to designate like parts in the accompanying drawings.

DETAILED DESCRIPTION

Reference will now be made in detail to example embodiments, examples of which are illustrated in the accompanying drawings. The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present disclosure may be constructed or utilised. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different example embodiments.

Fig. 1 illustrates an example system 100 in which various example embodiments of the present disclosure may be implemented. An example representation of the system 100 is shown depicting a plurality of client devices 200 and a plurality of base stations 201.

It may be challenging to enable on-board ML inference in client devices 200. The client device 200 needs to obtain the updated ML model and intermediate output based on the task and environment. Further, ML applications in a Radio Access Network (RAN) require timely decisions. For example, scheduling a packet by allocating resource blocks is typically operated at transmission time interval (TTI) levels of milliseconds. If the client device 200 requests the decisions from other network entities, such as a base station 201, extra delay and signalling load are introduced over the control channel. Thus, it may be beneficial to transfer pretrained ML model parameters from the network to client devices 200 for execution. However, the dataset in RAN applications can be sensitive to the dynamics of the radio environment reported by the client device 200. Thus, the ML model should be optimised based on timely environment changes in order to provide effective decisions.

For a global implementation of an ML model, the network should collect relatively large amounts of features and data in order to effectively model different radio environments, RAN configurations, user types, etc. This increases the complexity of the hyperparameters in the model, and thus it may be difficult to find the convex hypersurface. A complex model would converge slowly to the optimal state, or it would require a large amount of data and iterations to optimise the parameters. This also increases the amount and frequency of radio measurements in different network entities, which can be ineffective for battery and memory constrained 5G terminals. Moreover, signalling of the measured data between network entities increases the load on the control channel. Furthermore, the inference time is also increased by the model complexity, which introduces extra delay in making decisions related to, for example, packet transmission.

Another challenge is how to train a global ML model across a network when one ML model is to be used in global scenarios rather than only in a local scenario. In order to achieve this, the ML model should be able to differentiate the radio environment. One approach could be to model the environment with human knowledge. For example, in the physical layer, one can use received signal strength, interference, modulation and coding scheme, and channel bandwidth to predict the channel capacity. However, in the higher-layer protocols such features are very complex to model. For instance, the splitting, aggregation, retransmission, and sequencing of packets at different layers affect the packet delay, throughput, and success probability. Furthermore, obtaining the data of all of these features can require multiple network entities to report the measurements at the same time scale, which may not be realistic. The amount of measurement reports and the complexity of training the model is probably not suitable for timely decisions in RAN applications.

For a distributed implementation (i.e. at client devices 200, such as 5G terminals), the ML model can have poor generalisation for the radio environment. This is because the data used to train the model is within a specific scenario around each client device 200. For example, the optimal transmit power can change according to the propagation loss, interference, resource amount, etc. Collecting enough data samples for these features can require a longer time than for the global model. Moreover, as the client device 200 experiences different scenarios when moving around, it is difficult for the model to reach a stationary decision and quickly adapt to the environment changes. Furthermore, training a complex model at the client device 200 can also consume significant amounts of power and memory.

In the example use case of multi-connectivity, an accurate prediction of received signal and traffic load at different cells is difficult to achieve by a global ML model. For example, the propagation distribution changes according to the building blocks, multi-path transmission, Doppler effect, etc. It is difficult to capture the impact of different factors using a classical channel model. A similar problem also persists in traffic prediction, where client devices with different types of services, such as internet of things (IoT), vehicle-to-everything (V2X), or extended reality (XR), could have significantly different packet arrival distributions, data sizes, session lengths, etc. Furthermore, it is difficult to find a correlation between the received signal, channel state, and traffic load and the packet delay, reliability, and throughput based on a mathematical ML model. For example, it is hard to quantify the impact of different scheduling schemes on the packet delay unless a large amount of data samples is used, which may not be realistic in practice.

In order to reduce the cost of building an ML model with large feature sets while providing accurate results to client devices 200 in dynamic environments, it may be beneficial to enable the system to automatically identify the environment differences from the observed data and apply a well-configured ML model. This can also avoid retraining of the ML model when the client device 200 is frequently moving between different scenarios. The example embodiments disclosed herein can intelligently train and apply different model parameters and coordinate multiple client devices to adapt to the radio environment and configurations without introducing extra measurements.

The client device 200 may comprise, for example, a mobile phone, a smartphone, a tablet computer, a smart watch, or any hand-held or portable device, or any other apparatus, such as a vehicle, a robot, or a repeater. The client device 200 may also be referred to as a user equipment (UE). The client device 200 may communicate with the centralised unit device 300 via, e.g., an air- or space-borne vehicle communication connection, such as a service link.

Some terminology used herein may follow the naming scheme of 4G or 5G technology in its current form. However, this terminology should not be considered limiting, and the terminology may change over time. Thus, the following discussion regarding any example embodiment may also apply to other technologies, such as 6G.

Fig. 2 is a block diagram of a client device 200 in accordance with an example embodiment.

The client device 200 comprises one or more processors 202, and one or more memories 204 that comprise computer program code. The client device 200 may also comprise a transceiver 205, as well as other elements, such as an input/output module (not shown in FIG. 2), and/or a communication interface (not shown in FIG. 2).

According to an example embodiment, the at least one memory 204 and the computer program code are configured to, with the at least one processor 202, cause the client device 200 to receive a validation model from a centralised unit device, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters.

The input and output may correspond to any data that the client device 200 may use to, for example, make decisions related to packet transmission.

Herein, transmitting or receiving an ML model, such as the validation model, may refer to transmitting or receiving any data based on which the recipient can use the ML model. For example, if the recipient, such as the client device 200, already knows the structure of the ML model, such as the structure of a neural network, it may suffice to transmit the parameters of the ML model.

The client device 200 may be further configured to collect radio measurements corresponding to the input of the validation model and parameters corresponding to the output of the validation model.

The client device 200 may be further configured to obtain predicted parameters as the output of the validation model by feeding the collected radio measurements as the input into the validation model. The procedure of inputting data into an ML model and, as a result, obtaining the output may also be referred to as inference.

The client device 200 may be further configured to compare the collected parameters and the predicted parameters.

Based on the comparison, the client device 200 can assess how well the validation model predicted the correct output.

The client device 200 may be further configured to compute a plurality of gradient vectors for the plurality of model parameters of the validation model based on the comparison between the collected parameters and the predicted parameters.

The gradient vectors can indicate how the model parameters should be changed to improve the prediction provided by the validation model.

The client device 200 may be further configured to transmit the plurality of gradient vectors for the plurality of model parameters of the validation model to the centralised unit device.
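
As a concrete illustration of the client-side steps described above, the following is a minimal sketch, assuming a plain linear validation model y = xW + b and a 0.5 * MSE loss; the text leaves the model architecture and loss function open, so all shapes, names, and the random data below are illustrative.

```python
import numpy as np

def client_gradient_vectors(W, b, x_batch, y_batch):
    """Compute gradient vectors for the model parameters (W, b).

    x_batch: collected radio measurements, shape (n, d_in)
    y_batch: collected parameters (observed outputs), shape (n, d_out)
    """
    y_pred = x_batch @ W + b      # obtain predicted parameters (inference)
    err = y_pred - y_batch        # compare collected and predicted parameters
    n = x_batch.shape[0]
    grad_W = x_batch.T @ err / n  # d(0.5 * MSE)/dW
    grad_b = err.mean(axis=0)     # d(0.5 * MSE)/db
    return grad_W, grad_b         # transmitted to the centralised unit device

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 2)), np.zeros(2)  # e.g. 3 radio inputs, 2 QoS outputs
x = rng.normal(size=(100, 3))                # e.g. RSRP, CSI, BSR samples
y = rng.normal(size=(100, 2))                # e.g. observed delay and error rate
g_W, g_b = client_gradient_vectors(W, b, x, y)
```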

According to an example embodiment, the input of the validation model corresponds to radio measurements and the output of the validation model corresponds to quality of service parameters. For example, the radio measurements may comprise at least one of: reference signal received power, channel state information, or buffer status report, and/or the quality of service parameters may comprise at least one of: delay, error probability, or data rate.

Although the client device 200 may be depicted to comprise only one processor 202, the client device 200 may comprise more processors. In an example embodiment, the memory 204 is capable of storing instructions, such as an operating system and/or various applications.

Furthermore, the processor 202 is capable of executing the stored instructions. In an example embodiment, the processor 202 may be embodied as a multi-core processor, a single core processor, or a combination of one or more multi-core processors and one or more single core processors. For example, the processor 202 may be embodied as one or more of various processing devices, such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. In an example embodiment, the processor 202 may be configured to execute hard-coded functionality. In an example embodiment, the processor 202 is embodied as an executor of software instructions, wherein the instructions may specifically configure the processor 202 to perform the algorithms and/or operations described herein when the instructions are executed.

The memory 204 may be embodied as one or more volatile memory devices, one or more non-volatile memory devices, and/or a combination of one or more volatile memory devices and non-volatile memory devices. For example, the memory 204 may be embodied as semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash ROM, RAM (random access memory), etc.).

The client device 200 may be any of various types of devices used by an end user entity and capable of communication in a wireless network, such as user equipment (UE). Such devices include but are not limited to smartphones, tablet computers, smart watches, laptop computers, Internet-of-Things (IoT) devices, etc.

Fig. 3 is a block diagram of a centralised unit device 300 in accordance with an example embodiment.

The centralised unit device 300 comprises one or more processors 302, and one or more memories 304 that comprise computer program code. The centralised unit device 300 may also comprise a transceiver 305, as well as other elements, such as an input/output module (not shown in FIG. 3), and/or a communication interface (not shown in FIG. 3).

The centralised unit device 300 may also be referred to as a centralised unit, CU, CU device, or similar.

The centralised unit device 300 may be any device/module/node in a network that is configured to perform at least some of the functionality as disclosed herein. For example, the CU device 300 may be part of a base station 201, such as a gNB. For example, in 5G a base station 201 may be implemented using a so-called centralised unit - distributed unit (CU-DU) architecture. In such an architecture, the CU can support the service data adaptation protocol (SDAP), radio resource control (RRC), and packet data convergence protocol (PDCP) layers, while the DU can support the radio link control (RLC), medium access control (MAC), and physical layers. One CU can belong to multiple gNBs, and multiple DUs can be connected to one CU.

According to an example embodiment, the at least one memory 304 and the computer program code are configured to, with the at least one processor 302, cause the centralised unit device 300 to transmit a validation model to a plurality of client devices, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters.

The centralised unit device 300 may be further configured to receive a plurality of gradient vectors for the plurality of model parameters of the validation model from each client device in the plurality of client devices.

The centralised unit device 300 may be further configured to cluster the plurality of client devices based on the plurality of gradient vectors, wherein the clustering groups client devices with similar gradient vectors into one cluster.

The centralised unit device 300 may be further configured to transmit clustering data to at least one distributed unit device, wherein the clustering data indicates the clustering of client devices connected to the at least one distributed unit device.

Although the centralised unit device 300 is depicted to comprise only one processor 302, the centralised unit device 300 may comprise more processors. In an example embodiment, the memory 304 is capable of storing instructions, such as an operating system and/or various applications.

Furthermore, the processor 302 is capable of executing the stored instructions. In an example embodiment, the processor 302 may be embodied as a multi-core processor, a single core processor, or a combination of one or more multi-core processors and one or more single core processors. For example, the processor 302 may be embodied as one or more of various processing devices, such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. In an example embodiment, the processor 302 may be configured to execute hard-coded functionality. In an example embodiment, the processor 302 is embodied as an executor of software instructions, wherein the instructions may specifically configure the processor 302 to perform the algorithms and/or operations described herein when the instructions are executed.

The memory 304 may be embodied as one or more volatile memory devices, one or more non-volatile memory devices, and/or a combination of one or more volatile memory devices and non-volatile memory devices. For example, the memory 304 may be embodied as semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash ROM, RAM (random access memory), etc.).

Further features of the centralised unit device 300 directly result from the functionalities and parameters of the client device 200.

The centralised unit device 300 can estimate the similarity of client data distributions, and dynamically cluster the client devices 200 for training without introducing extra measurements or signalling of radio data. This is achieved by implementing a general validation model for all client devices 200 to compute the parameter gradients based on a generic loss function from their local observed data. The client devices 200 with sufficiently large similarities of their gradient vectors can be allocated within a cluster model for coordinated training.

The clustered client devices 200 can be changed according to the variation of gradient similarities. For example, if a client device moves to an area with a different radio environment (i.e. propagation loss distribution), the client device 200 can be assigned to a new cluster model.

The validation model can be updated in the network by collecting the gradient vectors from a plurality of client devices 200, computed by the loss function on a batch of samples. The clustered models can be updated independently in each client device 200 and synchronised periodically to ensure effective generalisation for each cluster's environment.

The client device 200 and the centralised unit device 300 can solve the problem of how to find the best granularity in enabling multiple client devices 200 to share common ML model parameters, from an effective split of sampled radio data based on the similarity of their distributions, to improve the model accuracy and generalisation to different radio scenarios. For example, the problem occurs when a client device 200 is in different propagation environments, or moving with different speeds. The solution allows the centralised unit device 300 to identify these differences from the converging direction of the model parameters.

Fig. 4 shows an example embodiment of the subject matter described herein illustrating ML model data flow.

For clarity, an example embodiment is disclosed considering the model illustrated in Fig. 4 for the use case of link quality prediction in multi-connectivity. In uplink grant-free transmission, the Ultra-Reliable Low-Latency Communication (URLLC) client devices 200 can select multiple cells to send a data packet. The objective is to effectively predict the QoS (delay, error probability, data rate) of each link from the radio measurements (RSRP, channel, buffer status), and select the best cells satisfying the QoS target of the client device 200.

The client device 200 may perform the prediction at a time instant t. The client device 200 can collect the radio measurements 401, such as RSRP, CSI, and BSR, in the past t_0 TTIs and predict the values of the radio measurements 403 in the future t_1 TTIs based on a neural network function 402 of parameter θ_b.

The client device 200 can use the predicted radio parameters 403 to predict QoS parameters 405, such as packet delay, error probability, and data rate, based on a neural network function 404 of parameter θ_s. The client device 200 can use the models θ_b and θ_s over all connected cells and generate probabilities of packet transmission over each cell from the predicted QoS 405. Based on the predicted QoS parameters 405, the client device can make decisions related to packet transmissions 407 using a decision function 406, as sketched below.
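
The following is a minimal sketch of the Fig. 4 pipeline, using linear maps as stand-ins for the neural network functions 402 and 404; the actual network architectures, dimensions, and decision rule are not specified here, so all names and shapes are illustrative.

```python
import numpy as np

def predict_link_qos(theta_b, theta_s, x_past):
    """Two-stage prediction for one link.

    x_past:  radio measurements (e.g. RSRP, CSI, BSR) over the past t_0 TTIs,
             flattened into a vector
    theta_b: stand-in for network 402, mapping past measurements to the
             predicted measurements 403 for the next t_1 TTIs
    theta_s: stand-in for network 404, mapping predicted measurements to the
             QoS parameters 405 (delay, error probability, data rate)
    """
    x_future = theta_b @ x_past  # predicted radio parameters 403
    qos = theta_s @ x_future     # predicted QoS parameters 405
    return qos

def select_cells(qos_per_cell, qos_target):
    """Illustrative decision function 406: keep cells whose predicted QoS
    meets the target, treating each QoS entry as a cost (lower is better)."""
    return [cell for cell, qos in qos_per_cell.items()
            if np.all(qos <= qos_target)]
```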

An objective is to identify client device reported data sets x and y under the same distribution and make the models θ_b and θ_s converge in a similar direction. In this context, the client devices 200 in the same radio environment and traffic pattern should share the parameter θ_b, and the client devices with the same RAN parameter configuration should share the parameter θ_s.

Fig. 5 shows an example embodiment of the subject matter described herein illustrating a flow chart for ML model distribution.

The CU 300 can initialise 501 a validation model with parameter matrix θ_v that is capable of predicting y, such as delay, error rate, and/or transmission rate, from a sequence of x, such as RSRP, CSI, and/or BSR. The CU 300 can then broadcast 502 θ_v and an indication of a loss function L, and indicate the batch size of validation samples to the client devices 200.

Each client device 200 can compute 503 a gradient vector g_v of the parameters θ_v based on the loss between the observed y and the predicted P(y'|x, θ_v) from a fixed batch of local x, and send g_v to the CU 300.

The CU 300 can collect 504 the gradient vectors from the client devices 200. Once all client devices 200 have reported g_v, the CU 300 can compute 505 the pairwise gradient similarities for each pair of client devices 200. The CU 300 can then perform clustering 506 of the client devices 200 to maximise the similarity between client devices in each cluster, and transmit the cluster labels to each DU. For example, given the gradients of client devices i and j, the CU 300 can compute their cosine similarity s(u_i, u_j) and assign the k-th client device u_k to the cluster c = argmax_{i∈c} s(u_k, u_i), as sketched below.
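
The following is a minimal sketch of this clustering step, assuming cosine similarity over flattened gradient vectors; the argmax rule above is read here as assigning each device to the candidate cluster with the highest average similarity to its members, which is one reasonable interpretation.

```python
import numpy as np

def cosine_similarity(g_i, g_j):
    """s(u_i, u_j) between two flattened gradient vectors."""
    return g_i @ g_j / (np.linalg.norm(g_i) * np.linalg.norm(g_j))

def assign_to_clusters(gradients, clusters):
    """gradients: device id -> flattened gradient vector g_v
    clusters:  cluster id -> list of member device ids
    Returns a device id -> cluster id assignment."""
    labels = {}
    for k, g_k in gradients.items():
        labels[k] = max(
            clusters,
            key=lambda c: np.mean(
                [cosine_similarity(g_k, gradients[i])
                 for i in clusters[c] if i != k] or [-1.0]),
        )
    return labels
```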

The CU 300 can initialise 512 a cluster model and transmit the cluster model to each client device 200 in a cluster.

The CU 300 can perform coordinative model updates periodically, for example every T TTIs, until new clusters are indicated. The CU 300 can collect 508 the parameters used to make packet transmission decisions from the client devices 200 in a cluster (denoted as the cluster model θ_C), and the number of training samples t. The CU 300 can then aggregate 509 the parameters from each client device i by weighting with the sample numbers t_i/t, generate a new clustered model θ_C, and transmit 510 it to each DU (scheduled) or client device (grant-free). A sketch of this weighted aggregation follows.
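
A minimal sketch of the sample-weighted aggregation in step 509, assuming each client reports a flattened parameter vector θ_i together with its sample count t_i; names are illustrative.

```python
import numpy as np

def aggregate_cluster_model(thetas, sample_counts):
    """thetas: flattened parameter vectors from the clients in one cluster
    sample_counts: training-sample count t_i behind each vector
    Returns the new cluster model theta_C = sum_i (t_i / t) * theta_i."""
    t = float(sum(sample_counts))
    return sum((t_i / t) * np.asarray(theta)
               for theta, t_i in zip(thetas, sample_counts))
```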

The DU (scheduled) or UE (grant-free) can use θ_C to make transmission decisions and update 511 the model based on locally observed x and y until the loss is minimised.

The CU 300 can update 507 the validation model parameters θ_v based on the aggregated gradients of all client devices 200 and signal the new θ_v to each client device 200. The validation model can be updated based on an equal amount of data samples from all client devices 200, which can ensure that the gradient similarities are not biased by unbalanced data sizes on each client device 200 during a fixed gap of TTIs. In contrast, the cluster model can be updated locally on each client device 200 and aggregated periodically, which can ensure timely optimisation of the decisions. Only the aggregation criterion may need to be changed when the client device 200 is assigned to a different cluster.
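
As an illustration, the following sketch updates θ_v at the CU from per-client gradient vectors computed on equal-size batches; the plain average and the fixed step size are assumptions, since the aggregation rule and optimiser are not detailed here.

```python
import numpy as np

def update_validation_model(theta_v, client_gradients, lr=0.01):
    """theta_v: flattened validation-model parameters
    client_gradients: flattened gradient vectors g_v, one per client,
                      each computed on an equal-size batch of local samples
    lr: illustrative learning rate for a plain gradient-descent step."""
    g_avg = np.mean(client_gradients, axis=0)  # aggregate the gradients
    return theta_v - lr * g_avg                # new theta_v, signalled to clients
```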

The CU 300 and the client devices 200 should cache two sets of model parameters. The network should indicate to the client device different criteria for updating the models, and the client device 200 should report the gradients or parameters to the network.

Fig. 6 shows an example embodiment of the subject matter described herein illustrating a signalling diagram.

The CU 300 can transmit the validation model 601 to client devices 200. The validation model can indicate necessary radio measurements, such as RSRP, channel state information, and traffic load, and QoS parameters, such as delay, error probability, and throughput.

The client devices 200 can collect 602 a batch of radio measurements and QoS parameters from packet transmissions. The client devices 200 can then compute the gradients of the validation model parameters based on a loss function of the model-predicted and observed QoS.

According to an example embodiment, the client device 200 is configured to compare the collected parameters and the predicted parameters by calculating a loss between the collected parameters and predicted parameters using a loss function indicated by the centralised unit device 300.

The client devices can report the gradient vectors 603 to the CU 300.

The CU 300 can receive the gradient vectors 603 from each client device 200 and compute the pairwise gradient similarities for each pair of client devices 200.

The CU 300 can then perform clustering 604 of the client devices 200 to maximize the average pairwise similarity.

According to an example embodiment, the centralised unit device 300 is configured to compute a pairwise gradient similarity for each client device pair in the plurality of client devices and assign each client device in the plurality of client devices to a cluster that maximises the pairwise gradient similarity between the client device and client devices in the cluster.

The CU 300 may transmit an indication of the clusters 605, such as cluster labels, to the gNBs/DUs 620 to which the client devices 200 are connected. The indication of the clusters 605 may be referred to as clustering data. The gNB/DU 620 can then coordinate the connected client devices 200 to share cluster models for data transmission.

According to an example embodiment, the CU device 300 is further configured to update 606 the parameters of the validation model based on the received gradient vectors 603 and transmit the updated parameters of the validation model 607 to the plurality of client devices.

The CU 300 can update 606 the validation model based on the gradient vectors 603 received from the client devices 200. The CU 300 may then transmit the updated validation model parameters 607 to the client devices 200.

According to an example embodiment, the client device 200 is further configured to receive a cluster model from a centralised unit device 300, wherein the cluster model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters.

Herein, a cluster model may refer to any ML model that is applicable to the client devices in a cluster formed by the CU 300. A cluster model may also be referred to as a clustered model or similar.

The client device 200 may collect radio measurements corresponding to the input of the cluster model and parameters corresponding to the output of the cluster model.

The client device 200 may obtain predicted parameters as the output of the cluster model by feeding the collected radio measurements as the input into the cluster model and compare the collected parameters and the predicted parameters.

The client device 200 may compute a plurality of gradient vectors for the plurality of model parameters of the cluster model based on the comparison between the collected parameters and the predicted parameters.

The client device 200 may update the plurality of parameters of the cluster model based on the gradient vectors for the plurality of model parameters of the cluster model.

According to an example embodiment, the client device 200 is further configured to transmit the plurality of model parameters 608 of the cluster model to the centralised unit device 300.

According to an example embodiment, the centralised unit device 300 is further configured to generate a cluster model for each cluster based on the gradient vectors received from the client devices in the cluster, wherein the cluster model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters. The CU device 300 may then transmit the cluster model of each cluster to the client devices in the cluster.

The CU 300 can collect the cluster model parameters 608 from the client devices 200 in a cluster. The CU 300 can then generate 609 a new cluster model for each cluster and transmit the updated parameters 610 to the client devices 200, for example, periodically until a new cluster is indicated by the network.

According to an example embodiment, the CU device 300 is further configured to receive a plurality of model parameters 608 of a cluster model from each client device in the cluster, update the plurality of parameters of the cluster model based on the received model parameters from the client devices in the cluster, and transmit the updated model parameters 610 of the cluster model to the client devices in the cluster.

The CU 300 can also update the validation model parameters by aggregating the gradients of all client devices 200 and transmit the updated parameters to all client devices 200.

The client device 200 may make decisions based on the cluster model and update parameters 611.

According to an example embodiment, the client device 200 is further configured to use the cluster model and/or the updated cluster model for data transmission.

The proposed solution is general for ML-based solutions with distributed learners. However, for clarity, some example embodiments are disclosed in relation to the use case of packet transmission in multi-connectivity. In this context, the client devices 200 can use the cluster model to predict the RSRP, channel state, and traffic load, and estimate the link quality at different cells for packet transmission.

At least some example embodiments can also be used in other levels of global and local model scenarios. For example, in other use cases, the validation model can be managed by the Network Data Analytics Function (NWDAF) to cluster the gNBs based on their reported data and assign different shared models to them.

In the following, an example embodiment is provided for the use case of packet transmission in multi-connectivity. The combined model θ_e and θ_s predicts the delay, error, and rate when transmitting a packet on a link, given the RSRP, CSI, and BSR during the past t_0 TTIs.
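
A possible shape of such a combined model is sketched below. The GRU encoder for θ_e, the linear head for θ_s, and all layer sizes are illustrative assumptions; only the inputs (RSRP, CSI, BSR over the past t_0 TTIs) and outputs (delay, error, rate) follow the text, with t_0 = 12 TTIs taken from the simulation setup described further below.

```python
# Illustrative architecture only: theta_e encodes a window of radio
# measurements, theta_s maps the encoding to the predicted link quality.
import torch
import torch.nn as nn

class CombinedModel(nn.Module):
    def __init__(self, t0: int = 12, hidden: int = 32):
        super().__init__()
        self.theta_e = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.theta_s = nn.Linear(hidden, 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, t0, 3) sequence of (RSRP, CSI, BSR) over the past t0 TTIs
        _, h = self.theta_e(x)              # final hidden state summarises the window
        return self.theta_s(h.squeeze(0))   # (batch, 3): delay, error, rate
```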

To measure the gradient similarities of every pair of client devices 200, the cosine distance of the validation models' gradients is computed over all kernels and biases. An example clustering algorithm, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), is used to split the client devices 200 into clusters and apply separate model aggregation criteria to each cluster. The algorithm procedure is disclosed as follows; a code sketch of the clustering step is given after the numbered steps.

1. Initialize a graph with random parameters θ_e and θ_s, which has the following relationship:

- x_i: RSRP (downlink signal), CSI (estimated BLER), BSR (UE buffered bytes)

- y_i: Delay (received PDU), Error (PER), Rate (channel throughput)

2. Each client device 200 computes the gradient of the validation model based on local data and reports it to the network:

- Measures x_i on every TTI over each connected cell; when a packet arrives in the buffer (at time t), feeds the measurements into the model to compute the predicted y_i and sends the packet on the cells C_i that satisfy its requirements.

- Measures the actual y_i after receiving ACKs from the DU; repeats for k packets to obtain the vectors of predictions and observations, then computes the average gradients over all parameters in θ_e, θ_s to obtain a gradient vector g_ui.

3. CU 300 clusters the client devices 200 based on gradient similarities and sends the labels to the DUs:

- Collects g_ui from all the client devices 200 (in different TTIs depending on packet arrivals), computes the cosine distance of every two client devices 200 to get a similarity vector

- Performs clustering of the client devices 200 by labelling those within a minimum distance of each other, and sends the labels to the gNB:
  o Loop over all client devices 200 not yet assigned to any cluster: find the neighbours n ← u_k within the minimum distance.
  o If the set of neighbouring client devices 200 is larger than the minimum size, |n| ≥ N, assign a cluster label to the client devices 200 in n: c_i ← n.
  o If a client device 200 remains outside any cluster, u_i ∉ c, assign it its own cluster: c_j ← u_j.

4. CU 300 collects the parameters θ and the sample number n from each DU or client device 200 every T TTIs, aggregates the parameters of the client devices 200 in each cluster and assigns the aggregate θ_c to each DU or client device 200 (clustered model, updated every T TTIs).

- DU or client device 200 uses θ_c to select cells to transmit packets and updates it locally until T TTIs have elapsed.

5. CU 300 updates the parameters θ based on the average gradients of all UEs and assigns them to each device (validation model, updated every k packets).

- DU or client device 200 uses θ to compute gradients over every k packets and reports them to CU 300.
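
The clustering step (step 3 above) can be sketched as follows with scikit-learn's DBSCAN; the eps and minimum-size values are illustrative assumptions, and mapping DBSCAN noise points to singleton clusters mirrors the rule c_j ← u_j above.

```python
# Sketch of step 3: pairwise cosine distances over flattened gradient
# vectors g_ui, clustered with DBSCAN; eps / min_size are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import cosine_distances

def cluster_by_gradients(grad_vectors: list[list[np.ndarray]],
                         eps: float = 0.3, min_size: int = 2) -> np.ndarray:
    # Flatten all kernels and biases of each device into one vector g_ui.
    g = np.stack([np.concatenate([p.ravel() for p in grads])
                  for grads in grad_vectors])
    distances = cosine_distances(g)                  # pairwise cosine distances
    labels = DBSCAN(eps=eps, min_samples=min_size,
                    metric='precomputed').fit_predict(distances)
    next_label = labels.max() + 1
    for i, lab in enumerate(labels):                 # devices outside any cluster
        if lab == -1:
            labels[i] = next_label                   # get their own cluster label
            next_label += 1
    return labels
```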

In the following, results are presented for simulations using a system-level simulator in the heterogeneous network scenario, where each client device 200 is connected to the four best cells. The parameters used are: 10 client devices; three macro and 12 micro cells; 5 dBm client transmit power; 10 MHz bandwidth; FTP3 traffic model, 1 packet/s, 500 kB; time sequence of measurements: 12 TTIs; and batch size of training samples: 100.

Clustering of the client devices was performed over the similarity distance, which results in three clustered models. The average gradient distributions of the client devices in these clusters are shown in Figs. 7-9; each figure corresponds to one cluster. This indicates that all of the models converge to a minimum loss but in different directions. This is caused by different propagation areas and traffic arrival patterns making the prediction of RSRP and load from θ_e different, and different selected MCS making the prediction of delay, error probability and throughput from θ_s different.

Fig. 10 shows another example embodiment of the subject matter described herein illustrating simulation results.

The prediction performance of the three clustered models illustrated in Figs. 7-9 is shown in Fig. 10. Curve 1001 corresponds to training data, i.e. theoretical performance. Curve 1002 corresponds to a centralised system model. Curve 1003 corresponds to a first cluster model. Curve 1004 corresponds to a second cluster model. Curve 1005 corresponds to a third cluster model.

The prediction performance is computed as the mean square error loss of the predicted tuple (delay, error rate, transmission rate) of PDUs transmitted in different cells. A single centralised system model applied to all client devices, corresponding to curve 1002, is used as a baseline. The theoretical upper bound is the loss on the training data, corresponding to curve 1001, which is shown to converge quickly. However, when the centralised model is used on the measured data, its loss remains at a stable, higher level even though more samples are used to update the model in the following TTIs. On the other hand, the prediction losses of the three clustered models all converge to the upper bound of the training data after around 100 TTIs in this scenario. In the initial stage (i.e., the first 100 TTIs), the loss is higher than that of the centralised model, because the clustered models have fewer collected samples in this initial period of time. However, they provide more accurate predictions with further training and approach a good generalization on the measured data.

Fig. 11 shows an example embodiment of the sub- ject matter described herein illustrating a method 1100.

According to an example embodiment, the method 1100 comprises receiving 1101 a validation model from a centralised unit device, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters.

The method 1100 may further comprise collecting 1102 radio measurements corresponding to the input of the validation model and parameters corresponding to the output of the validation model.

The method 1100 may further comprise obtaining 1103 predicted parameters as the output of the validation model by feeding the collected radio measurements as the input into the validation model.

The method 1100 may further comprise comparing 1104 the collected parameters and the predicted parameters.

The method 1100 may further comprise computing 1105 a plurality of gradient vectors for the plurality of model parameters of the validation model based on the comparison between the collected parameters and the predicted parameters.

The method 1100 may further comprise transmitting 1106 the plurality of gradient vectors for the plurality of model parameters of the validation model to the centralised unit device.

Fig. 12 shows an example embodiment of the sub- ject matter described herein illustrating a method 1200.

According to an example embodiment, the method 1200 comprises transmitting 1201 a validation model to a plurality of client devices, wherein the validation model comprises a machine learning model configured to predict an output from an input based on a plurality of model parameters.

The method 1200 may further comprise receiving 1202 a plurality of gradient vectors for the plurality of model parameters of the validation model from each client device in the plurality of client devices.

The method 1200 may further comprise clustering 1203 the plurality of client devices based on the plurality of gradient vectors, wherein the clustering groups client devices with similar gradient vectors into one cluster.

The method 1200 may further comprise transmitting 1204 clustering data to at least one distributed unit device, wherein the clustering data indicates the clustering of client devices connected to the at least one distributed unit device.

It is to be understood that the order in which operations 1101-1106 and/or 1201-1204 are performed may vary from the example embodiments depicted in Figs. 11 and 12.

The method 1100 may be performed by the client device 200 of Fig. 2. The method 1200 may be performed by the centralised unit device 300 of Fig. 3. Further features of the methods 1100, 1200 directly result from the functionalities and parameters of the client device 200 and/or the centralised unit device 300. The methods 1100, 1200 can be performed, at least partially, by computer program(s).

At least some example embodiments disclosed herein may enable identifying similarities in the hidden radio environment and configuration of client devices in an implicit manner, without requiring extra radio measurements or signalling of radio data samples.

At least some example embodiments disclosed herein may enable splitting the ML model according to the radio data with similar distributions, regardless of the different convergence speed on each client device. The gradient similarity indicates that the model parameters of the client devices are converging in the same direction towards a stable model. The exchange of actual data samples is not needed; such an exchange would also be inaccurate for validation if the converging state at each UE were different.

At least some example embodiments disclosed herein may provide cluster models with generalized and stable outputs in dynamic environments. A client device can be assigned different parameters once it is switched to a different cluster, without the need for gradient descent to retrain the model from the data observed in the new environment.

An apparatus may comprise means for performing any aspect of the method(s) described herein. According to an example embodiment, the means comprises at least one processor and memory comprising program code, the program code configured to, when executed by the at least one processor, cause performance of any aspect of the method.

The functionality described herein can be performed, at least in part, by one or more computer program product components such as software components. According to an example embodiment, the client device 200 and/or the centralised unit device 300 comprise a processor configured by the program code, when executed, to execute the example embodiments of the operations and functionality described. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), and Graphics Processing Units (GPUs).

Any range or device value given herein may be extended or altered without losing the effect sought. Also, any example embodiment may be combined with another example embodiment unless explicitly disallowed.

Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims, and other equivalent features and acts are intended to be within the scope of the claims.

It will be understood that the benefits and advantages described above may relate to one example embodiment or may relate to several example embodiments. The example embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to 'an' item may refer to one or more of those items.

The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the example embodiments described above may be combined with aspects of any of the other example embodiments described to form further example embodiments without losing the effect sought.

The term 'comprising' is used herein to mean including the method blocks or elements identified, but such blocks or elements do not comprise an exclusive list, and a method or apparatus may contain additional blocks or elements.

It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments. Although various example embodiments have been described above with a certain degree of particularity, or with reference to one or more individual example embodiments, those skilled in the art could make numerous alterations to the disclosed example embodiments without departing from the spirit or scope of this specification.