

Title:
PREDICTING NETWORK BEHAVIOUR
Document Type and Number:
WIPO Patent Application WO/2020/148773
Kind Code:
A1
Abstract:
The present disclosure relates to a method of a computing device (18) of predicting behaviour of a target network, and a computing device (18) performing the method. In an aspect, a method of a computing device (18) of predicting behaviour of a target network is provided. The method comprises applying (S101), to the target network, learnings of a previously computed time-dependent source network model (30) appropriate for modelling the target network, thereby creating a target network model (40), estimating (S102) values of at least one target network property utilizing training data provided to the target network model (40), and adapting (S103) the target network model (40) such that the estimated values sufficiently comply with measured true values of said at least one network property.

Inventors:
SARKAR ABHISHEK (IN)
DEY KAUSHIK (IN)
HEGDE DHIRAJ NAGARAJA (IN)
ROY ASHIS KUMAR (IN)
Application Number:
PCT/IN2019/050039
Publication Date:
July 23, 2020
Filing Date:
January 16, 2019
Assignee:
ERICSSON TELEFON AB L M (SE)
SARKAR ABHISHEK (IN)
International Classes:
H04L12/26
Foreign References:
US5774633A (1998-06-30)
EP3139313A2 (2017-03-08)
US9015093B1 (2015-04-21)
Attorney, Agent or Firm:
SINGH, Manisha (IN)
Claims:
CLAIMS

1. A method of a computing device (18) of predicting behaviour of a target network, comprising:

applying (S101), to the target network, learnings of a previously computed time-dependent source network model (30) appropriate for modelling the target network, thereby creating a target network model (40);

estimating (S102) values of at least one target network property utilizing training data provided to the target network model (40); and

adapting (S103) the target network model (40) such that the estimated values sufficiently comply with measured true values of said at least one network property.

2. The method of claim 1, the adapting (S103) of the target network model (40) comprising: adapting the target network model (40) such that a difference between the estimated values and the measured values does not exceed a prediction accuracy threshold value.

3. The method of claim 1, the adapting (S103) of the target network model comprising: adapting the target network model (40) so as to minimize a loss function based on a difference between the estimated values and the measured values.

4. The method of any one of claims 1-3, wherein the computed time-dependent source network model (30) is configured to comprise stacked Long Short-Term Memory, LSTM, layers (32a, 32b) provided with the training data followed by a softmax function (33) outputting estimated values of the at least one network property.

5. The method of claim 4, wherein the computed time-dependent source network model (30) further is configured to comprise a feature creation function (31) at its input in order to create a reduced feature set from a provided larger feature set, wherein the reduced feature set is provided to the stacked LSTM layers (32a, 32b).

6. The method of claim 5, wherein the feature creation function (31) comprises an auto-encoder.

7. The method of any one of claims 4-6, the adapting (S103) of the target network model comprising: inserting one or more LSTM layers between a final LSTM layer of the source network model and the softmax function.

8. The method of claim 7, the adapting of the target network model comprising:

adding one or more LSTM nodes to existing LSTM nodes in one or more LSTM layers and training the added nodes with a higher learning rate as compared to the existing LSTM nodes.

9. A computing device (18) configured to predict behaviour of a target network, comprising a processing unit (50) and a memory (52), said memory containing instructions (51) executable by said processing unit (50), whereby the computing device (18) is operative to:

apply, to the target network, learnings of a previously computed time-dependent source network model (30) appropriate for modelling the target network, thereby creating a target network model (40);

estimate values of at least one target network property utilizing training data provided to the target network model (40); and

adapt the target network model (40) such that the estimated values sufficiently comply with measured true values of said at least one network property.

10. The computing device (18) of claim 9, further being operative to, when adapting the target network model (40):

adapt the target network model (40) such that a difference between the estimated values and the measured values does not exceed a prediction accuracy threshold value.

11. The computing device (18) of claim 9, further being operative to, when adapting the target network model:

adapt the target network model (40) so as to minimize a loss function based on a difference between the estimated values and the measured values.

12. The computing device (18) of any one of claims 9-11, wherein the computed time-dependent source network model (30) is configured to comprise stacked Long Short-Term Memory, LSTM, layers (32a, 32b) provided with the training data followed by a softmax function (33) outputting estimated values of the at least one network property.

13. The computing device (18) of claim 12, wherein the computed time-dependent source network model (30) further is configured to comprise a feature creation function (31) at its input in order to create a reduced feature set from a provided larger feature set, wherein the reduced feature set is provided to the stacked LSTM layers (32a, 32b).

14. The computing device (18) of claim 13, wherein the feature creation function (31) comprises an auto-encoder.

15. The computing device (18) of any one of claims 12-14, further being operative to, when adapting the target network model:

insert one or more LSTM layers between a final LSTM layer of the source network model and the softmax function.

16. The computing device (18) of claim 15, further being operative to, when adapting the target network model:

add one or more LSTM nodes to existing LSTM nodes in one or more LSTM layers and train the added nodes with a higher learning rate as compared to the existing LSTM nodes.

17. A computer program (51) comprising computer-executable instructions for causing a computing device (18) to perform the method recited in any one of claims 1-8 when the computer-executable instructions are executed on a processing unit (50) included in the computing device (18).

18. A computer program product comprising a computer readable medium (52), the computer readable medium having the computer program (51) according to claim 17 embodied thereon.

Description:
PREDICTING NETWORK BEHAVIOUR

TECHNICAL FIELD

[0001] The present disclosure relates to a method of a computing device of predicting behaviour of a target network, and a computing device performing the method.

BACKGROUND

[0002] Network behaviour prediction in data and telecommunication networks is being utilized to an ever-increasing extent as the networks are growing and are becoming more complex to monitor.

[0003] Network behaviour typically exhibits a complex sequential pattern and is often difficult to predict. Nowadays there are several techniques to predict the degradation of network key performance indicators (KPIs), such as throughput, latency and Quality of Service (QoS), using various machine learning techniques like Deep Neural Networks, where initial network layers have learnt to map raw features such as performance counter measurements, weather, system configuration details, etc., into a feature space in which classification by the final network layers can be performed.

[0004] Given that the number of so-called counters is substantial (more than 2000 in number) in Deep Neural Networks, a large amount of data is required to train these networks. This training requires resources and time and, more importantly, the provisioning of large amounts of data for every trial; the reason being that network topology, climate features and user patterns vary across regions and service providers, and hence network models need to be refreshed as these parameters change.

[0005] Further, all of the data is generally not available due to data pipe limitations, cost, or other data provisioning restrictions. In such cases representative data (about 10% of the volume required for model training) is provided with an expectation of benchmarked accuracy, which often becomes difficult to achieve as features in a high-dimensional space cannot be derived effectively from limited data.

[0006] Also, the KPI data is sequential in nature, as the current counter measurements may depend on measurements taken much earlier. It is also difficult to say beforehand how the current measurement data will influence future measurements.

SUMMARY

[0007] One objective is to solve, or at least mitigate, this problem in the art and thus to provide an improved method of predicting behaviour of a target network.

[0008] In a first aspect, this objective is attained by a method of a computing device of predicting behaviour of a target network. The method comprises applying, to the target network, learnings of a previously computed time-dependent source network model appropriate for modelling the target network, thereby creating a target network model, estimating values of at least one target network property utilizing training data provided to the target network model, and adapting the target network model such that the estimated values sufficiently comply with measured true values of said at least one network property.

[0009] In a second aspect, this objective is attained by a computing device configured to predict behaviour of a target network. The computing device comprises a processing unit and a memory, said memory containing instructions executable by said processing unit, whereby the computing device is operative to apply, to the target network, learnings of a previously computed time-dependent source network model appropriate for modelling the target network, thereby creating a target network model, estimate values of at least one target network property utilizing training data provided to the target network model, and adapt the target network model such that the estimated values sufficiently comply with measured true values of said at least one network property.

[0010] In an embodiment, the target network model is adapted such that a difference between the estimated values and the measured values does not exceed a prediction accuracy threshold value.

[0011] In an embodiment, the target network model is adapted so as to minimize a loss function based on a difference between the estimated values and the measured values.

[0012] In an embodiment, the computed time-dependent source network model is configured to comprise stacked Long Short-Term Memory (LSTM) layers provided with the training data followed by a softmax function outputting estimated values of the at least one network property.

[0013] In an embodiment, the computed time-dependent source network model is further configured to comprise a feature creation function, such as an auto-encoder, at its input in order to create a reduced feature set from a provided larger feature set, wherein the reduced feature set is provided to the stacked LSTM layers.

[0014] In an embodiment, the target network model is adapted by inserting one or more LSTM layers between a final LSTM layer of the source network model and the softmax function.

[0015] In an embodiment, the target network model is adapted by adding one or more LSTM nodes to existing LSTM nodes in one or more LSTM layers and training the added nodes with a higher learning rate as compared to the existing LSTM nodes.

[0016] Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to "a/an/the element, apparatus, component, means, step, etc." are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.

BRIEF DESCRIPTION OF THE DRAWINGS

[0017] Aspects and embodiments are now described, by way of example, with reference to the accompanying drawings, in which:

[0018] Figure 1 illustrates a prior art Radio Access Network (RAN) in which embodiments can be implemented;

[0019] Figure 2 illustrates a network model utilized to model e.g. a Deep Learning Network;

[0020] Figure 3 illustrates modelling a target network according to an embodiment;

[0021] Figure 4 illustrates modelling a target network according to another embodiment;

[0022] Figure 5 shows a flowchart illustrating a method of predicting behaviour of a target network according to an embodiment;

[0023] Figure 6 illustrates a source network model being utilized in an embodiment;

[0024] Figure 7 illustrates the source network model of Figure 6 being utilized to create a target network model according to an embodiment;

[0025] Figure 8 illustrates a further embodiment where learnings of a source model are applied to a target network;

[0026] Figure 9 illustrates a computing device according to an embodiment; and

[0027] Figure 10 illustrates a computing device according to another embodiment.

DETAILED DESCRIPTION

[0028] The aspects of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which certain embodiments of the invention are shown.

[0029] These aspects may, however, be embodied in many different forms and should not be construed as limiting; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and will fully convey the scope of all aspects of the invention to those skilled in the art. Like numbers refer to like elements throughout the description.

[0030] Figure 1 illustrates a Radio Access Network (RAN) 10 where a plurality of wireless communication devices 11-15, such as smart phones, commonly referred to as User Equipment (UE), communicate with a radio base station 16, commonly referred to as an eNodeB, in the uplink and downlink directions. The eNodeB 16 is further connected to a so-called Core Network (not shown in Figure 1).

[0031] It may be desirable to predict the behaviour of a RAN, such as the RAN 10 shown in Figure 1. If a new, similar network is to be set up somewhere else, it would be helpful to be able to use the predicted behaviour of the RAN 10, in the form of a source network model, when determining the configuration of the new network instead of starting from scratch.

[0032] Further, in an embodiment, a network such as the RAN 10 may be used to create the source network model, having been trained with a huge amount of data, thereby creating a strong, accurate model.

[0033] That source model may subsequently be applied to a new target network to create a target network model which is trained with a highly limited amount of data as compared to the source network model.

[0034] Hence, hard-earned knowledge of a source site may advantageously be transferred and applied to a target site.

[0035] Since the learnings of the source model are used in the target network, the target network model is trained with (a limited amount of) data until the behaviour of the target network model corresponds sufficiently to the behaviour of the real-world target network and thus can be used for predicting the true behaviour of the target network. Since the target network model utilizes learnings of the source model, the target network model need not be trained from "scratch", and hence a limited amount of training data is required as compared to the source model.

[0036] The eNodeB 16 produces a wide range of counter data that can be used by monitoring equipment 17 to monitor its performance and quality of service. The monitoring equipment 17 may be located local to, or remote from, the eNodeB 16. Within the field of machine learning, these counters are commonly referred to as Performance Management (PM) counters. Data from the counters is usually collected at predefined intervals, such as 15, 30 or 60 minutes, and over a long period of time. The counters are the building blocks for computing particular network properties, in the following referred to as KPIs. In brief, a KPI is computed from a formula in which data of one or several counters is used.

[0037] A KPI may represent end-user perception of a network on a macro level. KPIs are of interest, for instance, for an operator of a network, where KPI statistics may be used to benchmark networks against each other and to detect problem areas. Hence, a KPI is supplied to a system operator, which takes into account the value of the computed KPI and modifies the network accordingly.

[0038] With reference to Figure 2, in a network model 20 utilized to model e.g. a Deep Learning Network, data from PM counters are input to the model 20, which computes the KPIs and provides the estimated values of the KPIs as outputs. As an example, the outputted network property is "downlink throughput", and the system characteristics represented by the PM counter values are those required for computing the downlink throughput, such as "total volume of data transferred in downlink", "effective downlink transport time", etc. This will be discussed in detail in the following.

[0039] As previously mentioned, the number of PM counters implemented at an eNodeB may add up to 2000+ counters.

[0040] In the following, an example will be given of a network property, i.e. a KPI, being computed from data derived from PM counters. Throughput, be it uplink and/or downlink, is an important network property/KPI indicating the rate at which data is transmitted or received.

[0041] UE Downlink (DL) Throughput Average is a KPI in a communications network such as the RAN 10 of Figure 1, which may be defined as being computed based on three different system parameters:

DL Throughput Average = (pmPdcpVolDlDrb - pmPdcpVolDlDrbLastTTI) / pmUeThpTimeDl

[0042] where:

pmPdcpVolDlDrb is the total volume of data transferred in DL,

pmPdcpVolDlDrbLastTTI is the total volume of data transferred in DL in the last Transmission Time Interval (TTI) in which a data buffer is emptied, and

pmUeThpTimeDl is the effective DL transport time, comprising the period from when the first part of the data in the buffer was transmitted until the buffer is emptied, excluding the TTI emptying the buffer.

[0043] Hence, in this example, three PM counters are used where each counter produces data embodying a respective one of the three above-mentioned system parameters.
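
By way of illustration, the KPI computation of paragraph [0041] may be sketched in Python as follows; this is a minimal sketch, the function name is ours, and the units depend on how the counters are reported:

    # Minimal sketch of the DL Throughput Average KPI of paragraph [0041].
    # The counter arguments are assumed to be values aggregated over one
    # collection interval.
    def dl_throughput_average(pm_pdcp_vol_dl_drb,
                              pm_pdcp_vol_dl_drb_last_tti,
                              pm_ue_thp_time_dl):
        # DL volume excluding the buffer-emptying TTI, divided by the
        # effective DL transport time.
        return (pm_pdcp_vol_dl_drb
                - pm_pdcp_vol_dl_drb_last_tti) / pm_ue_thp_time_dl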

[0044] Now, in an embodiment, in order to predict behaviour of a target network, a time-dependent source network model appropriate for being applied to the target network is acquired. As previously has been discussed, the source network model has typically been trained with a great amount of data, such that it accurately models the source network. In an embodiment, the time-dependent source network model is a recurrent neural network (RNN) model comprising layers of long short-term memory (LSTM) units to capture the time dependency in the data, since the data is sequential in nature.
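
Such a model may be sketched in Python/PyTorch as follows. The two stacked LSTM layers and the softmax output mirror the structure of claim 4 and Figure 6; all dimensions, and the PyTorch realization itself, are assumptions for illustration and are not taken from the disclosure:

    import torch
    import torch.nn as nn

    class SourceNetworkModel(nn.Module):
        # Sketch of source model 30: stacked LSTM layers (32a, 32b)
        # followed by a softmax output (33). Dimensions are illustrative.
        def __init__(self, n_features=64, hidden=128, n_outputs=10):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden,
                                num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, n_outputs)

        def forward(self, x):
            # x: (batch, time_steps, n_features) sequences of counter data
            out, _ = self.lstm(x)
            logits = self.head(out[:, -1, :])     # last time step
            return torch.softmax(logits, dim=-1)  # softmax function 33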

[0045] Figure 3 illustrates an extended version of the RAN 10 of Figure 1, where a plurality of wireless communication devices 11-14, such as smart phones, commonly referred to as User Equipment (UE), communicate with one or more of a first, a second and a third eNodeB 16a, 16b, 16c in the uplink and downlink directions. As previously mentioned, each eNodeB 16a, 16b, 16c produces a wide range of counter data that can be used by corresponding monitoring equipment 17a, 17b, 17c to monitor its performance and quality of service.

[0046] Thus, counter data is captured continuously by the monitoring equipment 17a, 17b, 17c at the eNodeBs 16a, 16b, 16c and is transferred in step S1 to a data centre by means of adequate infrastructure in the form of a high-capacity data pipeline, allowing a computing device 18 to build machine learning models to predict KPIs according to an embodiment.

[0047] In the case of setting up a new eNodeB 16d, or of an already available eNodeB 16d which does not have adequate infrastructure for receiving/transferring large amounts of data, source models created previously by the computing device 18 are used for transfer learning from a source model to a target model to be computed for the eNodeB 16d. Among the candidate source models computed by the computing device 18, one or more source models 30 most suitable for creating the target model 40 for the eNodeB 16d are selected; the selected source model is supplied with target network data in step S2, from which one or more selected KPIs can be estimated. Typically, the target network model 40 will have to be adapted such that the estimated values sufficiently comply with measured true values, since the selected source model 30 is not a perfect fit for the target network. Finally, an adequate final target network model 40 is created in step S3.

[0048] Figure 4 illustrates modelling a target network according to an embodiment.

[0049] Reference will further be made to Figure 5 showing a flowchart illustrating a method of predicting behaviour of a target network according to an embodiment.

[0050] Now, assume that the behaviour of the target network is to be predicted. For instance, at 09:00 it is desirable to predict the throughput of the target network 30 minutes later, i.e. at 09:30.

[0051] This is expressed as Y_est^(T+N) = f(X^T), where Y_est^(T+N) is the estimate of

Y_true^(T+N) = DL Throughput Average^(T+N).

That is, Y_est^(09:30) is an estimate of Y_true^(09:30) = DL Throughput Average^(09:30).

[0052] As an example, assume that the source network model acquired for predicting throughput 30 minutes later in the time period ranging from 08:00 to 10:00 (the predictions of the model are typically based on property values varying over the day) has been found to be:

f = 0.5 * DL Throughput Average^t + …

[0053] Examples of values utilized to train the target network model are shown in Table 1 below:

Table 1.

[0054] In Table 1, for brevity:

a*: pmPdcpVolDlDrb^t,

b*: pmPdcpVolDlDrbLastTTI^t,

c*: pmUeThpTimeDl^t, and

d*: DL Throughput Average^t

[0055] In a first step S101, learnings of the acquired previously computed time-dependent source network model are applied to the target network, thereby creating a target network model. Hence, training data of the target network is applied to the source network model.

[0056] For each given instant of time t in Table 1, the target network model is trained with the corresponding values X^t = [pmPdcpVolDlDrb^t, pmPdcpVolDlDrbLastTTI^t, pmUeThpTimeDl^t, DL Throughput Average^t].
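
A sketch of how such training pairs might be assembled follows, assuming 15-minute collection intervals so that the 30-minute prediction horizon is two rows ahead; the array layout and helper name are assumptions:

    import numpy as np

    # rows: array of shape (T, 4) with columns [a*, b*, c*, d*] per instant t.
    # The target is the measured DL Throughput Average 30 minutes ahead,
    # i.e. horizon=2 rows at assumed 15-minute collection intervals.
    def make_training_pairs(rows, horizon=2):
        X, y = [], []
        for t in range(len(rows) - horizon):
            X.append(rows[t])                # X^t fed to the target model
            y.append(rows[t + horizon, 3])   # true throughput at t + 30 min
        return np.asarray(X), np.asarray(y)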

[0057] When being trained with these values, the target network model produces, or outputs, estimated values Y_est^t of a target network property in step S102, in this case throughput.

[0058] However, as can be concluded from the last column of Table 1, where the measured, true throughput at time t + 30 min, DL Throughput Average^(t+30 min), is listed, the source model whose learnings are applied to the target network is not entirely correct.

[0059] Assume for instance that a requirement on the target network model is that an estimated value of a concerned target network property is not allowed to deviate more than 10% from the true, measured value of that property at a given point in time. Hence, in an embodiment, the target network model is adapted in step S103 such that a difference between the estimated values and the measured values of the network property does not exceed a prediction accuracy threshold value, in this case set to 10%. It should be noted that stray difference values exceeding the prediction accuracy threshold value typically are allowable, but that an average value of the difference should not exceed the prediction accuracy threshold value.
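
A minimal sketch of such an acceptance test is given below. Note that the worked example in paragraphs [0061] and [0065] normalizes the differences by a fixed value of 2; this sketch normalizes by the measured values instead, which is an assumption on our part:

    import numpy as np

    # Sketch of the step S103 acceptance test: the average relative
    # deviation between estimated and measured KPI values must not
    # exceed the prediction accuracy threshold (10% in the example).
    def within_threshold(estimated, measured, threshold=0.10):
        estimated = np.asarray(estimated, dtype=float)
        measured = np.asarray(measured, dtype=float)
        relative = np.abs(estimated - measured) / np.abs(measured)
        return relative.mean() <= threshold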

[0060] In an alternative embodiment, the target network model is adapted so as to minimize a loss function based on a difference between the estimated values and the measured values.
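
A sketch of one adaptation step under this alternative follows; a mean squared error loss is an assumed choice, as the disclosure does not fix a particular loss function:

    import torch

    def adaptation_step(model, optimizer, x_batch, y_measured):
        # One gradient step reducing the difference between estimated
        # and measured values; MSE is an assumed choice of loss.
        loss = torch.nn.functional.mse_loss(model(x_batch), y_measured)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()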

[0061] As can be concluded from Table 1, the deviation for the prediction at 09:00 is (2 - 0.96)/2 = 52%, the deviation at 09:15 is (3.1 - 1.84)/2 = 63%, and so on.

[0062] Hence, the target network model should be adapted to take into account the differences between the predicted values and the actually measured values such that the difference does not exceed the prediction accuracy threshold value.

[0063] For instance, after a number of iterations, it may be concluded that the target network model should be adapted to take the form:

f = 0.6 * DL Throughput Average^t + …

[0064] Table 2 illustrates the estimated throughput values after having adapted the target network model based on the measured, true values of the throughput for the target network.

Table 2.

[0065] Now, as can be concluded from Table 2, after the adapting of the target network model, the deviation for the estimation at 09:00 is (0.9 - 0.96)/2 = 3%, the deviation at 09:15 is (1.8 - 1.84)/2 = 2%, the deviation at 09:30 is (2.7 - 2.62)/2 = 4%, and so on.

[0066] To conclude, in this exemplifying embodiment, the behaviour of the network can be predicted based on estimation of one or more selected network properties by utilizing the final target network model.

[0067] Advantageously, using the approach of the above-described embodiment, the target network model can be trained with a highly limited set of data as compared to the original source network model, which initially is applied to the target network (even though in practice far more data sets than those used in Tables 1 and 2 will be required). Thereby, computation resource requirements and overall costs to build new models are drastically reduced.

[0068] Figure 6 illustrates a source network model 30 being utilized in an embodiment.

[0069] The source network model 30 being utilized optionally comprises a feature creation block 31 for dimensionally reducing the provided training data X^T. As an example, the feature creation block 31 may be implemented using so-called autoencoders, i.e. an unsupervised learning algorithm that applies backpropagation, to construct a more meaningful set of training data features after eliminating any noise.
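
A minimal sketch of the feature creation block 31 as an under-complete autoencoder follows; the layer sizes are illustrative, and training against a reconstruction loss provides the unsupervised, backpropagation-based learning described above:

    import torch.nn as nn

    class FeatureCreation(nn.Module):
        # Sketch of feature creation block 31: an autoencoder compressing
        # the 2000+ raw counter features to a small code. Dimensions are
        # illustrative assumptions.
        def __init__(self, n_counters=2000, n_code=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_counters, 256), nn.ReLU(),
                nn.Linear(256, n_code))
            self.decoder = nn.Sequential(
                nn.Linear(n_code, 256), nn.ReLU(),
                nn.Linear(256, n_counters))

        def forward(self, x):
            code = self.encoder(x)     # reduced feature set for the LSTMs
            return self.decoder(code)  # reconstruction, trained against x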

[0070] Thereafter, layers 32a, 32b of LSTMs are added (in this case two layers, but more layers are typically used in practice), which are utilized to predict the output Y_est^t from the model 30.

[0071] Further, the model 30 comprises a so-called softmax function 33, well-known in the field of deep neural networks. The softmax function takes an un-normalized vector and normalizes it into a probability distribution. Hence, the input vector elements may be negative or greater than one, and may not sum to 1. However, each vector element will, when outputted from the softmax function, be in the interval [0, 1], and the sum of all outputted vector elements will be 1. The softmax function is commonly used in the output layer of a deep neural net to represent a categorical distribution over class labels, and to obtain the probabilities of each input element belonging to a label (a numerical sketch is given below). While Figure 6 shows simultaneous prediction at two different instances of time t, t+1, simultaneous prediction at further instances t+2, t+3, ..., t+n is envisaged.

[0072] Hence, after having built the source network model 30 for different regions, customers, etc., the source model 30 can be reused for different regions or customers.
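
The numerical sketch referred to in paragraph [0071], in plain Python:

    import numpy as np

    def softmax(v):
        # Shift by the maximum for numerical stability; mathematically
        # the result is unchanged.
        e = np.exp(v - np.max(v))
        return e / e.sum()

    # The input entries may be negative or exceed 1 and need not sum to 1,
    # but the output lies in [0, 1] and sums to 1:
    print(softmax(np.array([-1.0, 2.5, 0.3])))
    # -> approximately [0.03, 0.88, 0.10]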

[0073] Figure 7 illustrates the source network model 30 of Figure 6 being utilized to create a target network model 40 according to an embodiment.

[0074] The pre-existing source model of Figure 6 should be appropriate for being used as a starting point for modelling the target network. As previously discussed, the source model is applied to the target network and true values of the concerned network property are measured to determine any difference between the estimated values and the true values measured in the target network.

[0075] In this embodiment, the target network model 40 is adapted in that one or more new LSTM layers are inserted between a final LSTM layer 32b of the source network model and the softmax function 33.

[0076] Advantageously, parameters at the lower LSTM layers 32a, 32b pertaining to the source network model 30 are kept fixed, while the new LSTM layer reflecting particulars of the target network must be trained with the provided training data X^T. The new LSTM layer may be randomly initialized before training.
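
A sketch of this adaptation, building on the SourceNetworkModel sketch given earlier: the source LSTM stack is frozen, a randomly initialized LSTM layer is inserted before the softmax head, and only the new parameters are trained. Class and attribute names are our assumptions:

    import torch
    import torch.nn as nn

    class TargetNetworkModel(nn.Module):
        # Sketch of target model 40: frozen source layers 32a, 32b with a
        # new, randomly initialized LSTM layer inserted before the softmax.
        def __init__(self, source):
            super().__init__()
            self.base = source.lstm
            for p in self.base.parameters():
                p.requires_grad = False          # source learnings kept fixed
            hidden = self.base.hidden_size
            self.new_lstm = nn.LSTM(hidden, hidden, batch_first=True)
            self.head = source.head

        def forward(self, x):
            out, _ = self.base(x)
            out, _ = self.new_lstm(out)           # inserted layer
            logits = self.head(out[:, -1, :])
            return torch.softmax(logits, dim=-1)  # softmax function 33

    # Only the inserted layer is trained; were source parameters unfrozen,
    # the new parameters could be given a higher learning rate (cf. [0079]).
    # optimizer = torch.optim.Adam(model.new_lstm.parameters(), lr=1e-3)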

[0077] The target network model 40 is then trained until a difference between the estimated values and the measured values of the concerned network property does not exceed a prediction accuracy threshold value, previously exemplified to be 10%. Hence, the behaviour of the target network can be predicted with high accuracy using the target network model 40.

[0078] Again, using the approach of the above-described embodiment, the target network model can be trained with a highly limited set of data as compared to the original source network model.

[0079] Figure 8 illustrates a further embodiment where learnings of a source model are applied to a target network. In Figure 8, a new node 43 is added to existing nodes 41, 42 in an LSTM layer 32a. Similar to the embodiment of Figure 7, parameters at the existing LSTM nodes 41, 42 pertaining to the source network model are largely preserved by training them with a lower learning rate, while the new node 43 reflecting particulars of the target network input is trained with a higher learning rate. The output h^t will be provided to a subsequent LSTM layer to finally produce the estimate Y^t.

[0080] With reference to Figure 9, the steps of the method of the computing device 18 for predicting behaviour of a target network according to embodiments are in practice performed by a processing unit 50 embodied in the form of one or more microprocessors arranged to execute a computer program 51 downloaded to a suitable storage medium 52 associated with the microprocessor, such as a Random Access Memory (RAM), a Flash memory or a hard disk drive. The processing unit 50 is arranged to cause the computing device 18 to carry out the method according to embodiments when the appropriate computer program 51 comprising computer-executable instructions is downloaded to the storage medium 52, being e.g. a non-transitory storage medium, and executed by the processing unit 50. The storage medium 52 may also be a computer program product comprising the computer program 51. Alternatively, the computer program 51 may be transferred to the storage medium 52 by means of a suitable computer program product, such as a Digital Versatile Disc (DVD) or a memory stick. As a further alternative, the computer program 51 may be downloaded to the storage medium 52 over a network. The processing unit 50 may alternatively be embodied in the form of a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), etc.

[0081] Figure 10 illustrates a computing device 18 configured to predict behaviour of a target network according to an embodiment.

[0082] The computing device 18 comprises applying means 60 adapted to apply, to the target network, learnings of a previously computed time-dependent source network model appropriate for modelling the target network, thereby creating a target network model, estimating means 61 adapted to estimate values of at least one target network property utilizing training data provided to the target network model, and adapting means 62 adapted to adapt the target network model such that the estimated values sufficiently comply with measured true values of said at least one network property.

[0083] The means 60-62 may comprise a communications interface for receiving and providing information, and further a local storage for storing data, and may (in analogy with what was previously discussed) be implemented by a processor embodied in the form of one or more microprocessors arranged to execute a computer program downloaded to a suitable storage medium associated with the microprocessor, such as a RAM, a Flash memory or a hard disk drive.

[0084] The aspects of the present disclosure have mainly been described above with reference to a few embodiments and examples thereof. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the invention, as defined by the appended patent claims.