ANAND ARJUN (US)
HIMAYAT NAGEEN (US)
AVESTIMEHR AMIR S (US)
BALAKRISHNAN RAVIKUMAR (US)
BHARDWAJ PRASHANT (US)
CHOI JEONGSIK (US)
CHOI YANG-SEOK (US)
DHAKAL SAGAR (US)
EDWARDS BRANDON GARY (US)
PRAKASH SAURAV (US)
SOLOMON AMIT (US)
TALWAR SHILPA (US)
YONA YAIR ELIYAHU (US)
US20190340534A1 | 2019-11-07
US20170279839A1 | 2017-09-28
JP2019144642A | 2019-08-29
US20190171978A1 | 2019-06-06
KR20190032433A | 2019-03-27
What is claimed is: 1. An apparatus of a first edge computing node to be operated in an edge computing network, the apparatus including an interconnect interface to connect the apparatus to one or more components of the first edge computing node, and a processor to: decode a first message from a second edge computing node, the first message including information on a target data distribution for federated machine learning training; determine the target data distribution from the first message; encode a client report for transmission to the second edge computing node based on a divergence between a local data distribution of the first edge computing node and the target data distribution; cause transmission of the client report to the second computing node; decode a second message from the second edge computing node, the second message including information on a global model associated with a global epoch of the federated machine learning training; and update a local gradient at the first edge computing node based on the global model. 2. The apparatus of claim 1, the processor further to: encode weighted loss information for transmission to the second edge computing node; cause transmission of the weighted loss information to the second edge computing node; encode for transmission to the second edge computing node a local weight update w_{k,t+e} = w_{k,t+e−1} − η ∗ g_{k,t+e} ∗ p_k/q_k, where w_{k,t+e} corresponds to a weight update of the first edge computing node at global epoch t and local epoch e, η is a learning rate, g_{k,t+e} corresponds to a gradient estimate for the first edge computing node at global epoch t and local epoch e, p_k corresponds to an original sampling distribution of the first edge computing node, and q_k is a probability distribution for the first edge computing node; and cause transmission of the local weight update to the second edge computing node. 3. The apparatus of any one of claims 1-2, wherein a data distribution at the first edge computing node corresponds to non-independent and identically distributed data (non-i.i.d.). 4. The apparatus of any one of claims 1-2, the processor to perform rounds of federated machine learning training including: processing a capability request from the second edge computing node; generating a capability report on compute capabilities and communication capabilities of the edge computing node in response to the capability request; causing transmission of the capability report to the second edge computing node; after causing transmission of the capability report, decoding the second message from the second edge computing node to initialize training parameters of a federated machine learning training round with the second edge computing node; and reporting updated model weights based on the global model for the federated machine learning training round. 5. The apparatus of claim 4, wherein the compute capabilities include a compute rate and the communication capabilities include an uplink communication time to the second edge computing node. 6. The apparatus of claim 4, wherein the capability report further includes information on a number of training examples at the client. 7. 
The apparatus of claim 1, the processor to: encode, for transmission to the second edge computing node, a capability report including at least one of information based on a training loss of the first edge computing node for a global epoch of a federated machine learning training by the second edge computing node training, or information based on a gradient of the first edge computing node with respect to a pre-activation output of the first edge computing node; cause transmission of the capability report; and for a next global epoch of the federated machine learning training, decode an updated global model from the second edge computing node. 8. The apparatus of claim 7, the processor to further decode the second message prior to causing transmission of the capability report. 9. The apparatus of any one of claims 7-8, wherein the capability report further includes a compute rate, and at least one of an uplink communication time or a downlink communication time for communication with the second edge computing node. 10. The apparatus of claim 1, the processor to: compute kernel coefficients based on a kernel function, a local raw training data set of the first edge computing node, and a raw label set corresponding to the local raw training data set; generate a coded training data set from the raw training data set; generate a coded label set based on the kernel coefficients, the kernel function, and the raw label set; and cause the coded training data set and coded label set to be transmitted to the second edge computing node. 11. The apparatus of claim 1, the processor to further: access a local training data set of the first edge computing node; apply a Random Fourier Feature Mapping (RFFM) transform to the training data set to yield a transformed training data set; after decoding the second message, iteratively, until the global model converges: compute an update to the global model using the transformed data set and a raw label set corresponding to the training data set to obtain an updated global model; and cause the update to be transmitted to the second edge computing node. 12. The apparatus of claim 1, the processor to: access a local training data set of the first edge computing node and a label set corresponding to the local training data set; apply a Random Fourier Feature Mapping (RFFM) transform to the training data set to yield a transformed training data set; estimate a local machine learning (ML) model based on the transformed training data set and the label set; generate a coded training data set from the transformed training data set; generate a coded label set based on the coded training data set and the estimated local ML model; and cause the coded training data set and coded label set to be transmitted to the second edge computing node. 13. The apparatus of claim 1, the processor to: access a subset of a local training data set of the first edge computing node; generate a transformed training data subset based on a Random Fourier Feature Mapping (RFFM) transform and the training data subset; generate a coding matrix based on a distribution; generate a weighting matrix; generate a coded training data mini-batch based on multiplying the transformed training data subset with the coding matrix and the weighting matrix; and cause the coded training data mini-batch to be transmitted to the second edge computing node; wherein the weighting matrix is based on a probability of whether the coded training data mini-batch will be received at the second edge computing node. 14. 
The apparatus of claim 1, the processor to: obtain, from each of a set of first edge computing nodes of the edge computing network, a maximum coding redundancy value for a coded federated learning (CFL) cycle to be performed on a global machine learning (ML) model of the federated ML training; determine a coding redundancy value based on the maximum coding redundancy values received from the edge computing devices; determine an epoch time and a number of data points to be processed at each edge computing device during each epoch of the CFL cycle based on the determined coding redundancy value; and cause the determined coding redundancy value, epoch time, and number of data points to be processed at each edge computing device to be transmitted to the set of edge computing devices. 15. The apparatus of claim 1, the processor to: determine a coded privacy budget and an uncoded privacy budget based on a differential privacy guarantee for a cycle of the federated machine learning; generate a coded data set from a raw data set of the first edge computing node based on the coded privacy budget; cause the coded data set to be transmitted to the second edge computing node; perform a round of the federated machine learning on the global model including: after receiving the second message, computing an update to the global model based on the raw data set and the uncoded privacy budget; and causing transmission of the update to the global model to the second edge computing node. 16. The apparatus of any one of claims 1-15, wherein the first edge computing node is a mobile client computing node. 17. A method to be performed at a first edge computing node in an edge computing network, the method including: decoding a first message from a second edge computing node, the first message including information on a target data distribution for federated machine learning training; determining the target data distribution from the first message; encoding a client report for transmission to the second edge computing node based on a divergence between a local data distribution of the first edge computing node and the target data distribution; causing transmission of the client report to the second computing node; decoding a second message from the second edge computing node, the second message including information on a global model associated with a global epoch of the federated machine learning training; and updating a local gradient at the first edge computing node based on the global model. 18. The method of claim 17, the method including: encoding weighted loss information for transmission to the second edge computing node; causing transmission of the weighted loss information to the second edge computing node; encoding for transmission to the second edge computing node a local weight update w_{k,t+e} = w_{k,t+e−1} − η ∗ g_{k,t+e} ∗ p_k/q_k, where w_{k,t+e} corresponds to a weight update of the first edge computing node at global epoch t and local epoch e, η is a learning rate, g_{k,t+e} corresponds to a gradient estimate for the first edge computing node at global epoch t and local epoch e, p_k corresponds to an original sampling distribution of the first edge computing node, and q_k is a probability distribution for the first edge computing node; and causing transmission of the local weight update to the second edge computing node. 19. 
The method of any one of claims 17-18, wherein a data distribution at the first edge computing node corresponds to non-independent and identically distributed data (non-i.i.d.). 20. The method of any one of claim 17-18, the method including performing rounds of federated machine learning training including: processing a capability request from the second edge computing node; generating a capability report on compute capabilities and communication capabilities of the edge computing node in response to the capability request; causing transmission of the capability report to the second edge computing node; after causing transmission of the capability report, decoding the second message from the second edge computing node to initialize training parameters of a federated machine learning training round with the second edge computing node; and reporting updated model weights based on the global model for the federated machine learning training round. 21. The method of claim 20, wherein the compute capabilities include a compute rate and the communication capabilities include an uplink communication time to the second edge computing node. 22. The method of claim 20, wherein the capability report further includes information on a number of training examples at the client. 23. The method of claim 17, the method including: encoding, for transmission to the second edge computing node, a capability report including at least one of information based on a training loss of the first edge computing node for a global epoch of a federated machine learning training by the second edge computing node training, or information based on a gradient of the first edge computing node with respect to a pre-activation output of the first edge computing node; causing transmission of the capability report; and for a next global epoch of the federated machine learning training, decoding an updated global model from the second edge computing node. 24. The method of claim 23, the method including decoding the second message prior to causing transmission of the capability report. 25. The method of claim 23, wherein the capability report further includes a compute rate, and at least one of an uplink communication time or a downlink communication time for communication with the second edge computing node. 26. The method of claim 17, the method including: computing kernel coefficients based on a kernel function, a local raw training data set of the first edge computing node, and a raw label set corresponding to the local raw training data set; generating a coded training data set from the raw training data set; generating a coded label set based on the kernel coefficients, the kernel function, and the raw label set; and causing the coded training data set and coded label set to be transmitted to the second edge computing node. 27. The method of claim 17, the method including further: accessing a local training data set of the first edge computing node; applying a Random Fourier Feature Mapping (RFFM) transform to the training data set to yield a transformed training data set; after decoding the second message, iteratively, until the global model converges: computing an update to the global model using the transformed data set and a raw label set corresponding to the training data set to obtain an updated global model; and causing the update to be transmitted to the second edge computing node. 28. 
The method of claim 17, the method including: accessing a local training data set of the first edge computing node and a label set corresponding to the local training data set; applying a Random Fourier Feature Mapping (RFFM) transform to the training data set to yield a transformed training data set; estimating a local machine learning (ML) model based on the transformed training data set and the label set; generating a coded training data set from the transformed training data set; generating a coded label set based on the coded training data set and the estimated local ML model; and causing the coded training data set and coded label set to be transmitted to the second edge computing node. 29. The method of claim 17, the method including: accessing a subset of a local training data set of the first edge computing node; generating a transformed training data subset based on a Random Fourier Feature Mapping (RFFM) transform and the training data subset; generating a coding matrix based on a distribution; generating a weighting matrix; generating a coded training data mini-batch based on multiplying the transformed training data subset with the coding matrix and the weighting matrix; and causing the coded training data mini-batch to be transmitted to the second edge computing node; wherein the weighting matrix is based on a probability of whether the coded training data mini-batch will be received at the second edge computing node. 30. The method of claim 17, the method including: obtaining, from each of a set of first edge computing nodes of the edge computing network, a maximum coding redundancy value for a coded federated learning (CFL) cycle to be performed on a global machine learning (ML) model of the federated ML training; determining a coding redundancy value based on the maximum coding redundancy values received from the edge computing devices; determining an epoch time and a number of data points to be processed at each edge computing device during each epoch of the CFL cycle based on the determined coding redundancy value; and causing the determined coding redundancy value, epoch time, and number of data points to be processed at each edge computing device to be transmitted to the set of edge computing devices. 31. The method of claim 17, the method including: determining a coded privacy budget and an uncoded privacy budget based on a differential privacy guarantee for a cycle of the federated machine learning; generating a coded data set from a raw data set of the first edge computing node based on the coded privacy budget; causing the coded data set to be transmitted to the second edge computing node; performing a round of the federated machine learning on the global model including: after receiving the second message, computing an update to the global model based on the raw data set and the uncoded privacy budget; and causing transmission of the update to the global model to the second edge computing node. 32. 
An apparatus of an edge computing node to be operated in an edge computing network, the apparatus including an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to perform rounds of federated machine learning training including: processing client reports from a plurality of clients of the edge computing network; selecting a candidate set of clients from the plurality of clients for an epoch of the federated machine learning training; causing a global model to be sent to the candidate set of clients; and performing the federated machine learning training on the candidate set of clients. 33. The apparatus of claim 32, wherein the processor is to perform rounds of federated machine learning training further including: causing dissemination, to a plurality of clients of the edge computing network, of a target data distribution at the edge computing node for federated machine learning training, wherein each of the respective reports is based on a divergence between a local data distribution of a respective one of the clients and the target data distribution; assigning a respective weight to each respective divergence based on a size of the divergence, with higher divergences having higher weights; and selecting the candidate set including using a round robin approach based on the weights. 34. The apparatus of claim 33, wherein the divergence corresponding to each client is based on one of a Kullback-Leibler divergence or a distance of probability distributions between a local data distribution of said each client and the target data distribution. 35. The apparatus of any one of claims 33-34, wherein selecting a candidate set of clients is based on a determination as to whether the data distribution of each client of the candidate set shows non-independent and identically distributed data (non-i.i.d.). 36. The apparatus of claim 32, wherein the processor is to perform rounds of federated machine learning training further including: processing weighted loss information from each of the clients; determining a probability distribution q for each of the clients based on the weighted loss information from corresponding ones of said each of the clients, wherein q is further based on an amount of data at said each of the clients, and a weight matrix for the federated machine learning training for said each of the clients for a global epoch; and selecting the candidate set including selecting a number of clients with highest probability distribution q as the candidate set. 37. The apparatus of claim 32, wherein the reports include at least one of information based on training losses of respective ones of said clients for an epoch of the training, or information based on gradients of respective ones of said clients with respect to a pre- activation output of respective ones of said clients; and the processor is to perform rounds of federated machine learning training further including: rank ordering the clients based on one of their training losses or their gradients; and selecting the candidate set including selecting a number of clients with highest training losses or highest gradients as the candidate set. 38. 
The apparatus of claim 37, wherein selecting a number of clients with highest training losses or highest gradients as the candidate set includes selecting a first number of clients with highest training losses or highest gradients as an intermediate set, and selecting a second number of clients from the intermediate set based on respective upload times of the first number of clients, the second number of clients corresponding to the candidate set. 39. The apparatus of claim 32, wherein the processor is to perform rounds of federated machine learning training further including: grouping clients from a plurality of available clients into respective sets of clients based on compute capabilities and communication capabilities of each of the available clients; and selecting a candidate set of clients based on a round robin approach, or based at least on one of a data distribution of each client of the candidate set or a determination that a global model of the machine learning training at the edge computing node has reached a minimum accuracy threshold. 40. The apparatus of claim 39, wherein the compute capabilities include a compute rate and the communication capabilities include an uplink communication time to the edge computing node. 41. The apparatus of any one of claims 32, 33, 34 and 36-40, wherein the processor is further to cause dissemination, to the plurality of clients of the edge computing network, of a global model corresponding to an epoch of a federated machine learning training. 42. The apparatus of any one of claims 32, 33, 34 and 36-40, wherein the processor is further to perform rounds of federated machine learning training including: obtaining coded training data from each of the selected clients; and performing machine learning training on the coded training data. 43. The apparatus of claim 42, wherein the processor is further to: determine a coding redundancy value to use in the machine learning training on the coded training data based on maximum coding redundancy values from each of the clients, the maximum coding redundancy values indicating a maximum number of coded training data points a respective client may provide; determine an epoch time and a number of data points to be processed at each client during each round of federated machine learning based on the determined coding redundancy value; and cause the determined coding redundancy value, epoch time, and number of data points to be processed at each client to be transmitted to the selected clients. 44. The apparatus of claim 32, wherein the processor is further to, for a number of cycles E' of epoch number t: discard an initial L clients from M clients sampled from N available clients; select a subsequent L clients from remaining clients N-M or N-M+initial L clients; determine load balancing parameters for the subsequent L clients; receive coded data X̂_i from each client i of the subsequent L clients; after the number of cycles E', calculate a global weight w^(t+1) corresponding to epoch number t+1 based on gradient g_i^(t) for each client i of the K clients at epoch number t and further based on gradient g_i^(t+1) for each client i of the K clients at epoch number t+1, wherein g_i^(t) is calculated using data points based on the load balancing parameters, and g_i^(t+1) is calculated using g_i^(t) and the coded data X̂_i. 45. 
The apparatus of claim 32, wherein the processor is further to: as a one stage operation, receive a number of coded training data points from each client i of N available clients or of L clients, wherein L≤N, the number of coded training data points based on l_i, l_i* and t*, wherein l_i corresponds to a number of raw datapoints at client i, l_i* corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric, and t* corresponds to a deadline time duration representing a smallest epoch time window within which the apparatus and the client i can jointly calculate a gradient; for every epoch number t and until time t*: receive local gradients g_i(t) from l_i*(t*) raw data points, wherein l_i*(t*) corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric during time t*; calculate a global gradient based on a per client gradient from the coded data of each client i; on or after time t*, calculate an updated global gradient from the global gradient calculated at each epoch number t. 46. A method to perform federated machine learning training at an apparatus of an edge computing node in an edge computing network, the method including: processing client reports from a plurality of clients of the edge computing network; selecting a candidate set of clients from the plurality of clients for an epoch of the federated machine learning training; sending a global model to the candidate set of clients; and performing the federated machine learning training on the candidate set of clients. 47. The method of claim 46, further comprising: disseminating, to a plurality of clients of the edge computing network, a target data distribution at the edge computing node for federated machine learning training, wherein each of the respective reports is based on a divergence between a local data distribution of a respective one of the clients and the target data distribution; assigning a respective weight to each respective divergence based on a size of the divergence, with higher divergences having higher weights; and selecting the candidate set including using a round robin approach based on the weights. 48. The method of claim 47, wherein the divergence corresponding to each client is based on one of a Kullback-Leibler divergence or a distance of probability distributions between a local data distribution of said each client and the target data distribution. 49. The method of any one of claims 47-48, wherein selecting a candidate set of clients is based on a determination as to whether the data distribution of each client of the candidate set shows non-independent and identically distributed data (non-i.i.d.). 50. The method of claim 46, further comprising: processing weighted loss information from each of the clients; determining a probability distribution q for each of the clients based on the weighted loss information from corresponding ones of said each of the clients, wherein q is further based on an amount of data at said each of the clients, and a weight matrix for the federated machine learning training for said each of the clients for a global epoch; and selecting the candidate set including selecting a number of clients with highest probability distribution q as the candidate set. 51. 
The method of claim 46, wherein the reports include at least one of information based on training losses of respective ones of said clients for an epoch of the training, or information based on gradients of respective ones of said clients with respect to a pre- activation output of respective ones of said clients; and the method further comprises: rank ordering the clients based on one of their training losses or their gradients; and selecting the candidate set including selecting a number of clients with highest training losses or highest gradients as the candidate set. 52. The method of claim 51, wherein selecting a number of clients with highest training losses or highest gradients as the candidate set includes selecting a first number of clients with highest training losses or highest gradients as an intermediate set, and selecting a second number of clients from the intermediate set based on respective upload times of the first number of clients, the second number of clients corresponding to the candidate set. 53. The method of claim 46, further comprising: grouping clients from a plurality of available clients into respective sets of clients based on compute capabilities and communication capabilities of each of the available clients; and selecting a candidate set of clients based on a round robin approach, or based at least on one of a data distribution of each client of the candidate set or a determination that a global model of the machine learning training at the edge computing node has reached a minimum accuracy threshold. 54. The method of claim 53, wherein the compute capabilities include a compute rate and the communication capabilities include an uplink communication time to the edge computing node. 55. The method of claim 46, further comprising disseminating, to the plurality of clients of the edge computing network, a global model corresponding to an epoch of a federated machine learning training. 56. The method of any one of claims 46-48 and 50-55, further comprising: obtaining coded training data from each of the selected clients; and performing machine learning training on the coded training data. 57. The method of claim 56, further comprising: determining a coding redundancy value to use in the machine learning training on the coded training data based on maximum coding redundancy values from each of the clients, the maximum coding redundancy values indicating a maximum number of coded training data points a respective client may provide; determining an epoch time and a number of data points to be processed at each client during each round federated machine learning based on the selected coding redundancy value; and sending the determined coding redundancy value, epoch time, and number of data points to be processed at each client to the selected clients. 58. 
The method of claim 46, further comprising, for a number of cycles E' of epoch number t: discarding an initial L clients from M clients sampled from N available clients; selecting a subsequent L clients from remaining clients N-M or N-M+initial L clients; determining load balancing parameters for the subsequent L clients; receiving coded data X̂_i from each client i of the subsequent L clients; after the number of cycles E', calculating a global weight w^(t+1) corresponding to epoch number t+1 based on gradient g_i^(t) for each client i of the K clients at epoch number t and further based on gradient g_i^(t+1) for each client i of the K clients at epoch number t+1, wherein g_i^(t) is calculated using data points based on the load balancing parameters, and g_i^(t+1) is calculated using g_i^(t) and the coded data X̂_i. 59. The method of claim 46, further comprising: as a one stage operation, receiving a number of coded training data points from each client i of N available clients or of L clients, wherein L≤N, the number of coded data points based on l_i, l_i* and t*, wherein l_i corresponds to a number of raw datapoints at client i, l_i* corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric, and t* corresponds to a deadline time duration representing a smallest epoch time window within which the apparatus and the client i can jointly calculate a gradient; for every epoch number t and until time t*: receiving local gradients g_i(t) from l_i*(t*) raw data points, wherein l_i*(t*) corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric during time t*; calculating a global gradient based on a per client gradient from the coded data of each client i; on or after time t*, calculating an updated global gradient from the global gradient calculated at each epoch number t. 60. An edge compute node comprising the apparatus of any one of claims 1, 2, 7, 8, 10-15, 32-34, 36-40, 44 and 45, and further comprising a transceiver coupled to the processor, and one or more antennas coupled to the transceiver, the antennas to send and receive wireless communications from other edge computing nodes in the edge computing network. 61. The edge compute node of claim 60, further comprising a system memory coupled to the processor, the system memory to store instructions, the processor to execute the instructions to perform the training. 62. The edge compute node of claim 60, further comprising: a network interface card (NIC) coupled to the apparatus to connect the apparatus to a core network by way of wired access; and a housing that encloses the apparatus, the transceiver, and the NIC. 63. The edge compute node of claim 62, wherein the housing further includes power circuitry to provide power to the apparatus. 64. The edge compute node of claim 62, wherein the housing further includes mounting hardware to enable attachment of the housing to another structure. 65. The edge compute node of claim 62, wherein the housing further includes at least one input device. 66. The edge compute node of claim 62, wherein the housing further includes at least one output device. 67. An apparatus comprising means to perform one or more elements of a method of any one of claims 17, 18, 23-31, 46-48, 50-55, 58 and 59. 68. A machine-readable storage including machine-readable instructions which, when executed, implement the method of any one of claims 17, 18, 23-31, 46-48, 50-55, 58 and 59. 69. 
A client compute node substantially as shown and described herein. 70. A server substantially as shown and described herein.
[0277] Some embodiments in this Section M concern compute and communication aware client selection in a federated learning environment. An objective of federated learning is the training of a global model based on a fleet of clients while keeping data local to the clients. The clients can download a central model as an initialization for model parameters. Clients can perform updates on the model, e.g., using a gradient based method, and in this way update the model locally. They can then send the updated model weights to the server, which can aggregate these model parameter updates through policies such as averaging or weighted averaging. [0278] Challenges with respect to federated learning arise, on the one hand, from the heterogeneity in the network, where memory cycles, compute rates, and form factors of the various clients may differ. On the other hand, the amount of data can vary from client to client, which results in differing local training times as well as disparate qualities of training. Connectivity can also be heterogeneous, such as through 5G and WiFi, or one or more clients may be on or off. Therefore, the time required for each training epoch can be delayed by certain clients, which can lead to longer training times at the client and at the server and, in turn, to poor performance. [0279] One approach is to select the clients that have better compute and communication rates. Some embodiments in this Section M contemplate such selection, but also aim to take into account non-i.i.d. data (data not drawn from the same distribution as the overall data or target data). Non-i.i.d. data can lead to divergence in the global model at the server. [0280] According to some embodiments, at each training round, the MEC server may schedule K out of N available clients, which perform the model weight updates in an iterative manner until convergence. [0281] According to one aspect, a capability request may be caused to be sent from the server to the clients in terms of compute and communication rates. These rates can have a deterministic component or they can have a random component, with the deterministic option being preferred at least for being more practical to implement. [0282] The MEC server may sort clients into sets based on their similar times to upload, which time to upload takes into consideration compute time and time to communicate the updated model weights back to the server. The MEC server could therefore sort the clients based on the time to upload parameter for each client. [0283] According to an optional embodiment, the MEC server may implement a more aggressive selection of clients after the clients have been sorted into sets. The MEC server may select for training only those clients with the fastest upload times that also exhibit an i.i.d. data distribution. The MEC server may have information regarding the data distribution at each client based on statistics (e.g., a probability mass function or other divergence parameters between the probability distribution of the data set at the client and the overall test data of interest (target data) at the server), or based on data distribution information sent to the MEC server by the client. In such a case, the MEC server may ignore the non-i.i.d. clients for training. 
Thus, if the divergence between the data distribution at clients in a given set and the target data distribution is high, the MEC server may determine that it cannot perform fast client selection (the selection of only the fastest clients in terms of upload time). With low divergence, the distribution at the clients within a given set and the target data are closer to each other, in which case the MEC server can be more aggressive and schedule the fastest clients with the low divergence. [0284] In the alternative, the MEC server may, after sorting the clients into sets based on their upload times, select the sets in a round robin fashion, such as in order of upload times. Training using either of the above two approaches (aggressive or round robin) may involve an iterative process. The MEC server may further receive updated reports from clients regarding communication and compute rates and other information. For example, the MEC server may monitor to determine the last instance of receipt of reports from clients and request additional reports based on the above, or may request additional reports based on training performance.
N. DATA QUALITY AND COMPUTE-COMMUNICATION AWARE CLIENT SELECTION FOR FEDERATED LEARNING [0285] Federated learning over wireless edge networks can face the following constraints: a) heterogeneity of client data quality/characteristics, where some clients contribute larger data sets to the learning compared to other clients; the importance of client data can also vary across the training iterations; b) the number of training examples at each client can be widely different, leading to different compute requirements; c) compute rates and memory cycles available for federated learning can be heterogeneous across clients as well as over time; d) communication rate/bandwidth available at clients can be heterogeneous and time-varying, such as by way of wireless or wired communication. [0286] All the above constraints can lead to the following challenges: A) random sub-sampling or the use of other approaches without considering client data quality can lead to poor training performance; B) the compute and/or communication rates ("compute-communication rates") of clients can also affect the time to convergence of the global model at the MEC server. [0287] Some embodiments in this Section N aim to account for the heterogeneity in data quality/characteristics, including its time variability, and propose methods to improve the speed-up of the convergence of the global model and the accuracy of the same. At the same time, we also build on our solution in Section M above to simultaneously account for compute-communication rates in addition to the data quality/characteristics to reduce convergence time. [0288] Some embodiments in this Section N propose a client selection approach in order to reduce the convergence time of the global model as explained in Section M above. [0289] Referring to Y. Zhao et al., "Federated Learning with Non-i.i.d. Data," arXiv:1806.00582 (hereinafter "FL with Non-i.i.d Data"), in order to deal with some clients having highly-skewed data, for example in terms of the distribution of the data and/or its importance, it was proposed to share a small amount of training data with the central server and train a warm-up global model before clients perform federated learning. In "FO for HetNets," it has been proposed to have each client utilize a regularization parameter in their local loss functions that tries to reduce the impact of the weight update from each client in each round. 
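Before turning to the data-quality-aware methods below, the set-based selection of paragraphs [0281]-[0284] can be made concrete with the following Python sketch: clients are grouped into sets by estimated upload time, and the server then either selects aggressively (fastest clients whose data looks close to the target distribution) or cycles over the sets round robin. The report fields, the set width, and the divergence threshold are illustrative assumptions, not part of the specification.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ClientReport:
    client_id: int
    compute_time: float   # estimated local training time (s)
    comm_time: float      # estimated uplink time for the updated weights (s)
    divergence: float     # distance of the local distribution to the target (e.g., KL)

    @property
    def upload_time(self) -> float:
        # Time to upload = local compute time + time to return updated weights.
        return self.compute_time + self.comm_time


def sort_into_sets(reports: List[ClientReport], set_width: float) -> List[List[ClientReport]]:
    """Group clients whose upload times fall within the same window of width set_width."""
    ordered = sorted(reports, key=lambda r: r.upload_time)
    sets: List[List[ClientReport]] = []
    for r in ordered:
        if sets and r.upload_time - sets[-1][0].upload_time <= set_width:
            sets[-1].append(r)
        else:
            sets.append([r])
    return sets


def select_aggressive(sets, k, max_divergence):
    """Pick the k fastest clients whose data distribution is close to the target (paragraph [0283])."""
    candidates = [r for s in sets for r in s if r.divergence <= max_divergence]
    candidates.sort(key=lambda r: r.upload_time)
    return candidates[:k]


def select_round_robin(sets, k, round_index):
    """Give each upload-time set a turn; used when divergence is high (paragraph [0284])."""
    chosen_set = sets[round_index % len(sets)]
    return chosen_set[:k]
```

A deployment following paragraph [0284] would re-run this selection whenever updated capability reports arrive from the clients.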
[0290] One major challenge in sub-sampling only the fast clients, in terms of upload time and compute time, is the issue of model divergence. The convergence of the global model is empirically conditioned on the clients' data being independently and identically distributed ("i.i.d."). In the absence of such a distribution of data, skipping updates from several clients, such as from straggler clients, can lead to skewed updates and to model divergence/overfitting for only certain data distribution profiles. [0291] Further, the solution proposed in "FL with Non-i.i.d Data" depends on sharing training data with the server, which may not always be possible due to privacy concerns (e.g., patient healthcare data, such as in one of the use cases presented above). The regularization approach in "FO for HetNets" referred to above helps improve accuracy in the presence of non-i.i.d. data, but still hits a training accuracy performance ceiling while also resulting in slower convergence of the global model at the central server. [0292] Some embodiments propose a robust method to improve training accuracy and convergence of a global machine-learning model at a MEC server when performing client selection for federated learning. One fundamental idea is the realization that the contribution of each client (i.e., each client owning a subset of the overall dataset) is different during each global federated training epoch and is time-varying. Given this, in order to maximize training speedup and reach higher training accuracy, a non-uniform sampling of clients is needed during each round according to some embodiments. [0293] Some embodiments in this Section N present methods to sample clients at each federated client selection stage by observing the training loss incurred by the clients. The training loss may be a scalar quantity that can be reported by the clients at different time scales depending on the tradeoff between accuracy and efficiency. Experiments have validated the above approach and show that observing the training loss at each epoch and performing client selection accordingly provides higher accuracy than state-of-the-art solutions for federated learning under non-i.i.d. data conditions. Some embodiments may further account for the communication and compute rates at clients in order to achieve a tradeoff between training accuracy and convergence time. [0294] Some contributions for some embodiments in this Section N are as follows: 1) clients may report their training losses to the MEC server; 2) the MEC server may select large loss clients at each training round; 3) in order to balance training accuracy and total training time, the training losses, client compute rates and communication rates may be jointly accounted for during client selection. [0295] By selecting larger loss clients at each training round, some proposed embodiments herein achieve convergence in a smaller number of training rounds as compared with selecting all clients, while also reaching higher accuracy over state-of-the-art approaches. Further, by additionally accounting for the client compute rates and communication rates, the convergence time according to some embodiments herein may be reduced due to efficient communication rounds. By selecting clients based on larger losses, a solution according to some embodiments herein is robust against overfitting the model towards any single client. 
[0296] Some embodiments in this Section N may use a light-weight exchange of a scalar quantity from clients to the server for use in client selection. [0297] Exchanging training loss information between clients and the MEC server may be needed for some embodiments. The exchange of this message may be intercepted over the air and analyzed using packet inspection tools. [0298] New clients may be spawned and their loss function evolution may be observed over time. The algorithm will result in larger loss clients being scheduled. "Larger loss clients" refers to a subset of all clients exhibiting loss functions within any given epoch that are larger than the loss functions of another, different subset of said all clients. [0299] In some federated learning setups, a subset of clients may be scheduled in each global training round. The most common approach is to sample the clients from a uniform distribution. [0300] However, some embodiments in this Section N highlight that the training speed-up is not the same for all the clients, and also varies over different epochs. Consider the simpler case where a single client is selected in each round according to Equation (BR1), where h_t denotes the distance of model weights at time t to the optimal weights w*, ψ is the loss objective, D represents the entire dataset, D_t represents the dataset corresponding to the client selected at time t, and η_t is the learning rate at time t. Equation (BR1) shows at least that a change in the distance of model weights for a given client from one epoch to the next is a function of the client's data distribution and loss objective within each epoch. [0301] The variance of the speed up is given by Equation (BR2). [0302] We can note that the variance of the speed up is non-zero, implying that the weight updates from different clients are non-uniform, at least as a function of varying data distributions per client at each epoch. As a result, to increase the speed up, we have found that it is important to sample the clients non-uniformly from the client pool. [0303] Equations (BR1) and (BR2) show, among other things, that the variance of training speed up is a function of the data distribution D at each client. If uniform selection/sampling of data across clients is performed and the data is non-i.i.d., the training loss as a function of training time will be different for each of the clients: the training losses will all decrease, but at different rates. Therefore, uniform sampling where non-i.i.d. data is concerned does not address an overall goal to minimize the expected loss across all clients in a uniform manner. Some embodiments in this Section N contemplate, in addition to clients sending compute rates and communication rates, the clients sending training losses to the MEC server. [0304] If we define federated learning as a stochastic process that gradually reduces the mean of the overall training loss, then we can redefine the goal of federated learning as minimizing E[ψ_w(D_n)], where D_n is the dataset (of a certain client n) sampled from the client pool. Therefore, it follows that for speed up, we sample, at each epoch t, the client n* that minimizes E[ψ_w(D_n)]. 
This can be easily extended for the case where the MEC server selects k clients for each epoch, where n* is given by Equation (BR3): n* = argmin_n E[ψ_w(D_n)] (BR3). [0305] Heuristic Approach for Client Selection: [0306] In order to avoid solving the above optimization at each federated learning epoch for scheduling clients, some embodiments for this Section N use a simple heuristic that can empirically show higher training accuracy over other state-of-the-art approaches, particularly for the challenging case of non-i.i.d. data. We define our heuristic approach as shown by the flow 1900 of Fig. 19, which shows a joint loss and compute/communication time based client selection process. [0307] Approach 1 - Loss-based Client Selection: [0308] According to Approach 1 of some embodiments in this Section N: 1) the MEC server 1904 may disseminate the global model to the clients 1902; 2) during the capability exchange phase, the clients 1902 may respond with their communication rates and an estimate of their compute rates; in addition, each client i may report a scalar quantity indicating its training loss ψ_i(t) corresponding to the model using the global weights at epoch t; 3) the MEC server 1904 during each epoch may rank order the clients based on their training loss on the global model at time t; 4) the MEC server may perform scheduling of clients as follows: a. in one embodiment (not shown), the server may schedule the top K out of the total N clients based on their training loss, where K is a design choice; b. in another embodiment, as shown in flow 1900, the MEC server may first select a first subset N_1 (K <= N_1 < N) with the highest losses for consideration; as in the previous case a. immediately above, N_1 can be a design choice that can determine the tradeoff between the accuracy and convergence time; in a second step, the server may select K out of the N_1 clients based on client upload times, which can minimize the total upload time from clients; 5) the scheduled K clients may run E local epochs before reporting their updated model weights; 6) for the next training epoch with the MEC server, the clients may again report their loss values with respect to the most recent global weights to aid with client selection; during each epoch, the K clients can simply piggyback the loss values along with their model updates. [0309] In an ideal setting, the MEC server requires the loss values ψ_t from each client at each training round to schedule the next set of clients. Alternatively, it is possible that the MEC server can utilize the past training loss values from clients. Through experiments, we have observed that utilizing ψ_{t−1} instead of ψ_t does not degrade the training performance by any significant margin. [0310] It is also possible that the clients only report their loss information periodically (not every round); this can lead to longer convergence time, however. [0311] The MEC server keeps a record of the client model/capability update time stamp (T_last,n). If a client has exceeded the threshold for update, i.e., T_last,n > T_threshold, the server initiates a client capability update request command to the clients. The clients respond with the parameters T_up and r_c. If necessary, the MEC server can re-run the client selection algorithm. [0312] According to this approach, each client has the model provided by the server for each training round. The client may compute the loss and share it with the MEC server. 
The server therefore has the loss across all the clients at each training round of the model to be trained. The MEC server may then select only clients with larger losses for the next round of iterations/training. More iterations for larger loss clients can bring the losses of those clients lower, and then the MEC server can move on to the next set of larger loss clients. [0313] Approach 2 - Data-distribution based Client Selection: [0314] Since clients can have different training data distributions with respect to one another as noted above, according to one embodiment, we utilize this information to determine the clients in the training rounds. According to this approach, Approach 2: a) the MEC server may provide the target data distribution to the clients; the target data distribution may correspond to the cumulative distribution function P(y<=i) or the probability mass function P(y=i) for all i; and b) the data distribution at client k (i.e., the local data distribution at client k) may be given by Q_k(y=i); the client may use the target distribution from the MEC server and the local data distribution to compute the distance between the two distributions; this can be done in many ways, for example as below: a. the Kullback-Leibler (KL) divergence, which measures the directed divergence between two distributions and is given by Equation (BR4): D_KL,k = Σ_i P(i) log(P(i)/Q_k(i)) (BR4); or b. the L2 distance of the probability distributions, given by Equation (BR5): D_L2,k = ||P(i) − Q_k(i)||_2 (BR5). [0315] Each of the clients may report their distances D_k to the MEC server at the beginning of training. [0316] The MEC server may perform weighted round-robin client selection where higher weights are given to clients with larger D_k, to allow the MEC server to train more for the more highly skewed clients. [0317] This approach represents a weighted scheduling of training for clients by the server, with higher priority to clients with a larger distance of data distribution to the target distribution at the MEC server. [0318] Approach 3 - Gradient norm based Client Selection: [0319] It is also possible, and even more reliable, to utilize the gradient norm as an indicator of the "importance" of each client during the client selection process. We have found that clients with the largest norm of their gradients carry higher importance compared to the clients with a smaller norm. [0320] Computing the norm of the gradient for each client on the model is expensive and involves both forward and backward propagation at the clients before the actual training. Instead, the paper A. Katharopoulos, F. Fleuret, "Not All Samples Are Created Equal: Deep Learning with Importance Sampling", ICML 2018, arXiv:1803.00942 (hereinafter "DL with IS") derives an upper bound for the gradient norm as the gradient of the loss with respect to the pre-activation outputs of a neural network (NN). This upper bound limits the gradient computation to only the last layer and avoids computing the gradients for each of the NN layers. [0321] The client selection flow according to an embodiment of the third approach, Approach 3, may be as follows: a) the MEC server may send the global model to the clients; b) each client may compute its gradient with respect to its pre-activation outputs, indicated by ĝ_i(t), and may return this to the MEC server; and c) the server may determine the clients with the K largest ĝ_i(t) as the clients for federated training, with K being implementation-based. 
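As a concrete illustration of Approaches 1-3 above, the following Python sketch scores and selects clients using the training loss plus upload time (the two-stage selection of flow 1900), the distribution distances of Equations (BR4) and (BR5), or the last-layer gradient norm. The array shapes, the epsilon smoothing in the KL computation, and the toy numbers at the end are assumptions made for illustration only.

```python
import numpy as np


def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Equation (BR4): directed divergence of target label distribution p from client distribution q."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))


def l2_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Equation (BR5): L2 distance between the two label distributions."""
    return float(np.linalg.norm(p - q))


def select_by_loss_then_upload(losses, upload_times, n1, k):
    """Approach 1 (flow 1900): keep the N_1 highest-loss clients, then pick the K fastest uploaders among them."""
    by_loss = np.argsort(losses)[::-1][:n1]                 # highest training loss first
    by_time = sorted(by_loss, key=lambda i: upload_times[i])  # then fastest upload time
    return list(by_time[:k])


def select_by_gradient_norm(grad_norms, k):
    """Approach 3: clients with the K largest last-layer (pre-activation) gradient norms."""
    return list(np.argsort(grad_norms)[::-1][:k])


# Illustrative use with made-up numbers for five clients.
losses = np.array([0.9, 0.2, 1.4, 0.7, 1.1])
upload_times = np.array([3.0, 1.0, 5.0, 2.0, 2.5])
print(select_by_loss_then_upload(losses, upload_times, n1=3, k=2))  # -> [4, 0]
```

For Approach 2, the server would assign each client a scheduling weight proportional to its reported distance D_k and run weighted round robin over those weights, as described in paragraph [0316].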
[0322] The above method may incur additional overhead due to the computation of partial gradients at the clients compared, for example, to Approach 1 or Approach 2, because here the client is to calculate not only the loss function, but also the gradient for training before the training even begins for a given training round. However, the gradient calculation may be performed only at the last layer of the NN, and not for all layers of the NN. [0323] Experiments and Performance Evaluation: [0324] Approach 1: [0325] We have demonstrated the performance gain of our client set selection Approach 1 described above for a federated training problem on the MNIST (Modified National Institute of Standards and Technology database) dataset for handwritten digit recognition. We are training a deep neural network (DNN) with 2 hidden layers with 200 nodes each. Our environment contains N = 100 clients with K = 10 clients participating in each global training epoch. We split the training data into i.i.d. and non-i.i.d. sets and assigned a batch of 600 examples to each client. In our setup, 20% of the clients have i.i.d. data while the other 80% of the clients have non-i.i.d. datasets where each client has examples from 1-6 labels. In each epoch, each of the K clients runs 5 local training iterations. [0326] The computational rates (r_c) and communication rates (r_comm) of the clients are modeled using shifted exponential distributions. We compared the performance of the proposed heuristic against the approach in Reference [3], where a weight regularization component representing the L2 distance of the local weights from the global weights is utilized in the local loss objective. Table TBR1 [0327] Referring to some embodiments described in Section M above, where the data is non-i.i.d. as between the given client sets and the round robin selection approach is used to train the model, such a solution could lead to model divergence. This is because each gradient for the model is computed from clients with non-i.i.d. data, and therefore the updates are biased with respect to the target data, and the MEC server may as a result be left with a bad model, even where clients of differing upload times are given equal opportunity for training purposes. [0328] Some embodiments in this Section N are still aware of the compute and communication heterogeneity as between clients, while also addressing non-i.i.d. data issues.
O. IMPORTANCE SAMPLING METHODS FOR COMMUNICATION-EFFICIENT LEARNING IN FEDERATED LEARNING [0329] In this Section O, we provide new methods, beyond our earlier proposed solutions such as those in Sections M and N above, on accounting for data importance awareness when selecting clients that participate in federated learning. [0330] Some embodiments in this Section O may be applied to a case where federated learning is performed over clients that have the following characteristics: a. clients have wireless connections and may face heterogeneity in terms of the bandwidths available to them; b. clients may have different compute rates and therefore may complete their local training at different times with respect to each other; and c. clients may have statistically differing datasets, i.e., data that is not independent and identically distributed (non-i.i.d.). 
[0331] Our earlier solutions as set forth in Section M above proposed a set of methods to perform client selection, some embodiments of which include: a) prioritizing clients with large weighted local losses on their data; b) prioritizing clients with larger compute and communication times; c) weighted round-robin scheduling based on the divergence of a client's data distribution from the overall distribution, such as the KL divergence of its data distribution; and/or d) prioritizing clients based on the norm of the local gradients computed with respect to the last layer. [0332] In other work, for example in "FL with Non-i.i.d Data," in order to deal with some clients having highly skewed data, it was proposed to share a small amount of training data with the central server and train a warm-up global model before clients participate in federated learning. [0333] The earlier proposed heuristic methods sample clients non-uniformly from the original distribution of data, and hence may introduce biased training. For loss-based client selection, it is possible that the final trained global model may be skewed towards achieving a fair performance across clients at the expense of overall accuracy. [0334] Some embodiments draw on principles from importance sampling theory to perform training by collecting gradient computations from a data distribution q that differs from the original distribution p, but with the ability to apply a gradient correction to remove the bias. [0335] Some embodiments according to this Section O may include the following: a) some embodiments propose two importance sampling distributions that target accelerated training by focusing on importance regions in the client set based on statistically important data and on the clients' communication rates and compute rates; b) some embodiments propose correct ways to implement importance sampling and bias correction to remove bias in the gradient computations due to importance sampling; and/or c) some embodiments propose a protocol to efficiently communicate this information between clients and the MEC server to achieve a) and b). [0336] Some advantages of the proposed embodiments according to this Section O are provided below: a) new distributions to sample clients more efficiently, to achieve higher model accuracy, and to improve convergence time; and/or b) methods to unbias the gradient estimates for federated learning when clients can run multiple local training epochs. [0337] To compute the importance sampling distribution, the clients may report their weighted loss periodically to the MEC server, which reporting can be intercepted. Similarly, the MEC server, after computing the importance sampling distribution q, may send it to the clients to allow the clients to apply the bias correction. This may also be part of the protocol according to some embodiments, and may either be standardized or intercepted. [0338] The clients may also send their compute times and communication times to the server in order to compute the importance sampling distribution. [0339] The cross-entropy loss F(x(i), y(i); w) for data (x(i), y(i)) using model weight w provides a measure of the dissimilarity of the classifier from the true label y and hence is utilized in approaches to accelerate learning. Specifically, sampling a training example based on the loss distribution of all examples has been shown to provide training speed-up in "PER" referred to above.
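The following Python sketch, not part of the original disclosure, illustrates the general idea developed in this Section O (and formalized in Equations (BS1)-(BS4) below): client selection probabilities are formed from loss-weighted values, clients are sampled from that distribution q rather than the original distribution p, and each local gradient step is rescaled by p_k/q_k to remove the sampling bias. The array shapes, learning rate, and helper names are assumptions.

```python
import numpy as np

def sampling_distribution(n_data, losses):
    """q_k proportional to the weighted loss n_k * F_k(w_t) of each client."""
    weighted = np.asarray(n_data, dtype=float) * np.asarray(losses, dtype=float)
    return weighted / weighted.sum()

def original_distribution(n_data):
    """p_k proportional to the amount of data at each client."""
    n = np.asarray(n_data, dtype=float)
    return n / n.sum()

def local_update(w, grad_fn, p_k, q_k, lr=0.01, local_epochs=5):
    """Bias-corrected local training: w <- w - lr * g * p_k / q_k at every local step."""
    for _ in range(local_epochs):
        g = grad_fn(w)                      # local stochastic gradient at the client
        w = w - lr * g * (p_k / q_k)        # importance-sampling bias correction
    return w

# Toy example with 4 clients and a quadratic loss surrogate.
n_data = [600, 600, 600, 600]
losses = [0.9, 0.1, 0.5, 0.4]               # n_k * F_k(w_t) values reported to the server
q = sampling_distribution(n_data, losses)
p = original_distribution(n_data)
rng = np.random.default_rng(0)
selected = rng.choice(len(n_data), size=2, replace=False, p=q)   # K = 2 clients
w_global = np.zeros(5)
updates = [local_update(w_global, lambda w: w - 1.0, p[k], q[k]) for k in selected]
w_global = np.mean(updates, axis=0)         # server averages the returned weights
```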
[0340] Some embodiments according to this Section O extend the above approach to the case of federated learning where, instead of looking at the loss over each example, we define a probability distribution q over the weighted loss function over the clients and sample clients from this distribution. The calculation at Equation (BS1) may be performed by each client k, and the calculation at Equation (BS2) may be performed at each client k or at the MEC server: L̄_k = n_k F_k(w_t) (BS1) q_k = L̄_k / Σ_{j=1..N} L̄_j (BS2) where n_k refers to the amount of data at the client, F_k refers to the loss function averaged across all data for the client, and w_t is the weight parameter/matrix of the NN at the client at global epoch t. [0341] However, the above introduces a sampling bias in the gradient estimates g_k from client k. Some embodiments propose correcting the gradient estimate for each client k before updating the weights, to remove the bias. This may be performed by computing the importance sampling estimate according to one embodiment as set forth in Equation (BS3): g_{k,t+1} ∗ p_k / q_k (BS3) where p_k is the original sampling distribution of client k based on a total of N clients, and q is the importance distribution (here, stochastic loss based). Although this approach adjusts the bias for the weight after 1 round, when the number of local computations or local rounds τ is greater than 1, the gradient estimate at each client after each local update is biased. Therefore, some embodiments propose to utilize stochastic loss based sampling and correction using the bias factor at the client after each local update. In other words, when each client performs τ local updates before reporting the weight updates to the server, each client updates its local weight matrix as set forth in Equation (BS4): w_{k,t+τ} = w_{k,t+τ−1} − η ∗ g_{k,t+τ} ∗ p_k / q_k (BS4) where η is the learning rate. The local weight matrix of each client is therefore adjusted, or de-biased, based on the corrected gradient estimate for each client k. [0342] After the server receives K updates from the K selected clients out of the total N clients, the server computes the global weight as the average of these weights as set forth in Equation (BS5): w_{t+τ} = (1/K) Σ_{k=1..K} w_{k,t+τ} (BS5) [0343] However, this approach alone cannot result in accelerated training, as the K clients sampled using q_k described above may have widely differing upload times, given as set forth in Equation (BS6): T_k^up = T_k^comm + T_k^comp (BS6) [0344] We define the compute model and communication model of the clients with respect to a wireless communication model and a compute model, by way of example, as described below. [0345] Wireless Communication Model: [0346] The clients may be wirelessly connected to their base stations (assuming no additional latency from base stations to the MEC server). The uplink data rates of the clients may be obtained with the help of the 5G deployment and channel model described in the ITU-R M.2412 report and as used in T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, V. Smith, "Federated Optimization for Heterogeneous Networks", Conference on Machine Learning and Systems (MLSys), 2020 (hereinafter "FO for HetNets"). Based on this, the average data rate r_k for user k may be computed, which determines the communication time for client k as set forth in Equation (BS7): T_k^comm = D_size / r_k (BS7) where D_size can be computed as the size of all the weight matrices for the machine learning model and is the same for all clients for federated learning.
For example, for the case of a fully connected neural network, D_size indicates the sum of the sizes of all the weight matrices of all the layers of the neural network. [0347] Compute Model: [0348] The compute time for each client is the time needed to compute τ rounds of local gradient updates and is given by Equation (BS8): T_k^comp = τ ∗ numMAC_k ∗ T_k^MAC (BS8) where τ is the number of local epochs required at the clients, numMAC_k is the number of multiply-and-add (MAC) operations required at client k, and T_k^MAC is the average time required by client k to complete 1 MAC operation. [0349] Importance Sampling with Compute-Communication Awareness: [0350] The importance sampling distribution q_k is computed as the ratio of the normalized weighted losses to the normalized upload times and is given by Equation (BS9): q_k = L̂_k / T̂_k^up (BS9) where L̂_k is the weighted loss of client k normalized across clients and T̂_k^up is the upload time of client k normalized across clients. [0351] This is due to the fact that the MEC server is interested in computing weight updates not only from the clients with large losses but also from the ones with small upload times. The importance sampling bias correction may then be applied as g_{k,t+1} ∗ p_k / q_k for each local gradient step at each client. [0352] The proposed algorithm is summarized in Algorithm 1 below. [0353] Algorithm 1: Importance Sampling based Federated Learning. S: MEC server, C: Clients
1: Initialize N, K, τ, η, w_1, original client sampling distribution p
2: Clients report a) Communication Time T_k^comm and b) Compute Time T_k^comp, to allow computation of the upload time T_k^up for each client
3: for all global epochs t = 1, τ, …, τT do
   S: Broadcasts global weight w_t
   C: Share scalar weighted loss information n_k F_k(w_t) with S (in the absence of F_k(w_t), S utilizes the most recently reported F_k(w))
   S: Update T_k^comm and receive any updated T_k^comp from the clients
   S: Compute q_k for all N clients using Equation (BS2) or Equation (BS9)
   S: Select K clients from the N clients using q_k for the weight update
   for each client k = 1, 2, ..., K do
      for each local epoch e = 1, 2, ..., τ do
         Compute w_{k,t+e} = w_{k,t+e−1} − η ∗ g_{k,t+e} ∗ p_k/q_k
      end for
      Report w_{k,t+τ} and the scalar n_k F_k(w_{t+τ}) to the server
   end for
   S: Compute the global weight at the server as w_{t+τ} = (1/K) Σ_{k=1..K} w_{k,t+τ} per Equation (BS5)
4: end for
[0354] Experiments and Performance Evaluation: [0355] We have demonstrated the performance gain of our importance sampling based federated learning approach on the MNIST dataset for handwritten digit recognition. We are training a DNN with 2 hidden layers with 200 nodes each. Our environment contains N = 100 clients, with K = 10 clients participating in each global training epoch. [0356] In our evaluation, all clients have one of the 10 labels. Hence, the client data is statistically very different and non-i.i.d. The number of local epochs is 5. [0357] The computational rates (r_c) are modeled using a shifted exponential distribution. We compare the performance of the proposed heuristic against the approach in "FO for HetNets", where a weight regularization component representing the L2 distance of the local weights from the global weights is utilized in the local loss objective. The results are averaged over 3 simulation random seeds as shown in Table TBS1 below:
Table TBS1 [0358] In Section M above, reports of the losses from clients were to be sent to the MEC server. In this Section O, some embodiments concern creating a new distribution for sampling the clients. If p is taken as the distribution of data at a given client, it will be proportional to the amount of data that client has. If every client has the same amount of data, then the distribution as between clients is the same (p) for all clients. If not, p will be different for each client. We know that the loss is a function of the data at a given client, parameterized by the model weight at time t. Some embodiments in this Section O aim to form a distribution L̄_k (Equation (BS1)) based on the loss function at each client k, basically skewing the client's data distribution as a function of its loss function within a given training period. The probability of selecting each client would be q_k (Equation (BS2)). According to some embodiments, the MEC server may compute q_k using the L̄_k of all the clients k. After sampling based on the new distribution q_k within a training round, some embodiments contemplate applying a bias correction for the computed gradients, which may be done by each client at each local training round before sending updated weights to the MEC server. The bias correction corresponds to the multiplication by the p_k/q_k factor in Equation (BS4). P. DIFFERENTIAL PRIVACY GUARANTEES IN CODED FEDERATED LEARNING [0359] In heterogeneous computing environments, the client devices need to compute their partial gradients and communicate those partial gradients to a controller node. However, the wait time for each epoch at the controller node is dominated by the time needed to receive the partial gradients from computing nodes with relatively slow computational capabilities and/or with weak or low-quality links. For example, the wait time at the controller node for one or more training epochs may be prolonged by computing nodes with weak or low-quality links, which may require multiple retransmissions to overcome radio link failures. Computing nodes for which the controller node has to wait due to, for example, low-quality links or slow processing capabilities may be referred to as "stragglers." The issue of straggler nodes is especially relevant for computations that are distributed over wireless networks, where dynamic variations in wireless link quality can lead to loss of data. Accounting for such variations in distributed computing tasks is not well addressed by existing solutions. [0360] Accordingly, embodiments of the present disclosure may incorporate coding mechanisms to address the issue of stragglers in heterogeneous environments. The coding mechanisms may allow for data to be duplicated across computing nodes, e.g., for data located at client edge computing nodes (e.g., client computing nodes 1202 of Fig. 12) to be shared with a central server (e.g., central server 1208 of Fig. 12). However, when utilizing client device data, duplicating and sharing the data may raise user privacy issues. Ensuring user privacy protections for users who collaborate in distributed ML model computations may be quite important, since some of these users may want to keep their raw data secure on their device.
Failure to account for privacy concerns may result in underutilization of the processing capabilities of edge networks, including MEC networks, since some users may not opt in to allowing their data to be used for collaborative learning unless these concerns can be alleviated. [0361] Thus, the client data may be coded according to one or more techniques, such as those described herein, prior to sharing with the central server or with other computing nodes of the computing environment. The coded data set may be referred to as a parity data set or parity data, and may include a version of the client device data sets that has been coded in a way to obfuscate or partially obfuscate the actual raw data. The coding mechanism may be known only to the client device, and is not shared with the central server, maintaining some privacy in the coded data that is shared. In this way, the coded data sets may help to protect the privacy of the client data sets. [0362] A coding redundancy c that indicates a number of coded data points to compute at each client device may be determined at the central server, and may be broadcast by the central server to each of the client devices. The coding redundancy c may depend on the heterogeneity in computing, communication and/or power budgets observed across the client devices and the central server. In some instances, the client data may be weighted using a weight matrix W_i that probabilistically punctures the raw training data. The weighting values of the weight matrix may, in some embodiments, be determined based on how often data is received at the central server from a client device. For example, the weighting values may be higher (i.e., greater weight given) for devices with a poor connection to the central server, and may be lower for devices with a good connection to the central server. This may ensure that data sets from devices with poor connectivity are considered in the gradient computations in a similar manner to those with better connectivity. In some cases, the computation of the coding redundancy c and the weight matrix W_i may be performed as described in U.S. Patent Application Publication No. US 2019/0138934, which is hereby incorporated by reference in its entirety. [0363] The client devices may use the coding redundancy c and/or the weight matrix W_i to generate the coded data. For example, in some embodiments, a linear random coding technique may be utilized and the coded data set may be constructed as follows. The i-th device may use a random generator matrix G_i of dimension c × ℓ_i, where ℓ_i is the number of raw data points at the i-th device, with elements drawn independently from a distribution (e.g., a standard normal distribution or a Bernoulli(1/2) distribution), and may apply the random generator matrix to a weighted raw data set to obtain a coded training data set such that X̃_i = G_i W_i X_i and ỹ_i = G_i W_i y_i, where the weight matrix W_i is an ℓ_i × ℓ_i diagonal matrix that probabilistically punctures the raw training data. The coded training data (X̃_i, ỹ_i) may then be transmitted to the central server before training is performed, while the generator coefficients are kept private by each client device. At the central server, the locally coded training data may be combined to obtain a composite coded data set given by X̃ = Σ_i X̃_i, ỹ = Σ_i ỹ_i.
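The following is a minimal NumPy sketch, not part of the original disclosure, of the linear random coding step described in [0363]: a private generator matrix G_i and a diagonal puncturing matrix W_i are applied to the raw data before anything leaves the device, and the server simply sums the received coded sets. The keep-probability used to build W_i and the function names are illustrative assumptions.

```python
import numpy as np

def generate_coded_data(X_i, y_i, c, keep_prob=0.8, seed=None):
    """Produce (X_tilde_i, y_tilde_i) = (G_i W_i X_i, G_i W_i y_i) for one client.

    G_i : c x l_i generator with i.i.d. standard-normal entries (kept private).
    W_i : l_i x l_i diagonal matrix that probabilistically punctures the raw data.
    """
    rng = np.random.default_rng(seed)
    l_i = X_i.shape[0]
    G_i = rng.standard_normal((c, l_i))                   # private generator matrix
    W_i = np.diag(rng.binomial(1, keep_prob, size=l_i))   # probabilistic puncturing
    X_tilde = G_i @ W_i @ X_i
    y_tilde = G_i @ W_i @ y_i
    return X_tilde, y_tilde                               # only these are sent to the server

def compose(coded_sets):
    """Server side: composite coded data X_tilde = sum_i X_tilde_i, y_tilde = sum_i y_tilde_i."""
    Xs, ys = zip(*coded_sets)
    return sum(Xs), sum(ys)
```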
[0364] The central server may compute model updates (e.g., partial gradients in a gradient descent technique) based on the coded data shared by the client devices, and may use the computed updates in addition to or in lieu of the model updates performed by the client devices using the corresponding uncoded data. For instance, in some embodiments, an overall gradient for the global model may be computed at the central server based on two sets of gradients: (1) partial gradients computed at the client devices using their uncoded data sets; and (2) a partial gradient computed by the MEC server using a coded data set (which is based on the client data sets). In certain embodiments, during every training epoch, the central server computes partial gradients from the c composite coded data set points, and waits for the partial gradients corresponding to the first arriving (m − c) uncoded data points sent by the client devices, where m is the total number of data points across all clients. Therefore, the central server might not have to wait for the partial gradients from straggler nodes and/or links. This technique may be referred to herein as coded federated learning (CFL). [0365] In many instances, distributed learning methods such as CFL may allow for the exchange of aggregate statistics/information without revealing the underlying "raw" data. This may be of great importance in areas where the datasets used hold private information of individuals, e.g., healthcare or other types of personal or private information. Embodiments of the present disclosure include techniques, which may be referred to herein as Differentially Private Coded Federated Learning (DP-CFL), that provide formal end-to-end Differential Privacy (DP) guarantees to all participating client computing nodes. In particular, embodiments of the present disclosure may generate the coded data in a differentially private way at each client computing node, while preserving the utility of the coded data for training the global model. [0366] Coded data may be generated by compressing the raw data privately at each client device (e.g., client computing nodes 1202). The amount of compression may be known at the central server, or at any other device participating in the CFL. The second-order statistics of the raw data can be estimated from the coded data, and using this public information, differentiating attacks on the coded data can be designed to reveal the raw data of any device. [0367] Accordingly, embodiments may include differential privacy constraints during coding redundancy computations, which can guarantee privacy of the generated coded data at each client computing node (e.g., the underlying raw data upon which the coded data is based may not be obtainable, or may not be obtainable without extraordinary work). Furthermore, embodiments of the present disclosure may utilize distortion of a covariance matrix and injection of additive noise in combination with data compression to achieve any privacy requirement, while preserving the utility of the coded data in the learning algorithm(s). These techniques may provide one or more advantages. For instance, in some cases, privacy may be included directly during the data encoding process along with heterogeneity in computing power, communication links, and data quality across devices. In addition, in some cases, each device can define its own privacy requirement. Further, in some cases, the utility of the coded data may be preserved, retaining faster convergence of the CFL-based model learning.
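As a rough illustration of the CFL epoch described in [0364], the sketch below, which is not part of the original disclosure and assumes a linear regression model, combines a gradient computed at the server from the composite coded data with partial gradients from whichever clients respond before the epoch deadline; the deadline handling, normalization, and variable names are assumptions.

```python
import numpy as np

def coded_gradient(X_tilde, y_tilde, w):
    """Partial gradient computed at the server from the composite coded data."""
    return X_tilde.T @ (X_tilde @ w - y_tilde)

def cfl_epoch(w, client_grads_received, X_tilde, y_tilde, m, lr=0.01):
    """One CFL epoch: aggregate the coded gradient with the uncoded partial
    gradients that arrived in time, then take a gradient-descent step.

    client_grads_received : list of partial gradients from the non-straggling clients.
    m                     : total number of training data points across clients.
    """
    g = coded_gradient(X_tilde, y_tilde, w)
    for g_k in client_grads_received:
        g = g + g_k                         # add uncoded partial gradients as they arrive
    return w - lr * g / m                   # normalize by the total data size
```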
[0368] The coding mechanism used in CFL may be (ε_i, δ_i)-differentially private when: f(X̃_i | X_i) ≤ e^{ε_i} f(X̃_i | X'_i) + δ_i [0369] where X'_i is a row-perturbed version of the data matrix X_i (i.e., X_i with one row perturbed), and f(X̃_i | X_i) is the probability density function of observing X̃_i under data matrix X_i. The ε_i parameter may indicate a metric of the privacy loss based on the perturbation of the data matrix. In certain instances, CFL embodiments may be designed with a coding mechanism that guarantees that the above inequality is true, based on a certain selected value for ε_i. In particular, some embodiments may utilize one of the following example methods to ensure that the coding mechanism used for CFL is (ε_i, δ_i)-differentially private. It will be noted that the i-th device has ℓ_i raw data points. In order to maintain (ε_i, δ_i)-differential privacy, the i-th device can generate only up to c_i coded data points. With this in mind, the following example processes may be implemented to ensure (ε_i, δ_i)-differential privacy. The example processes below may be implemented in software, firmware, hardware, or a combination thereof. For example, in some embodiments, operations in the example processes shown may be performed by one or more components of edge computing nodes, such as processors of client computing nodes similar to client computing nodes 1202 of Fig. 12 and central servers similar to central server 1208 of Fig. 12. In some embodiments, one or more computer-readable media may be encoded with instructions that implement one or more of the operations in the example processes below when executed by a machine (e.g., a processor of a computing node). The example processes may include additional or different operations, and the operations may be performed in the order shown or in another order. In some cases, one or more of the operations shown in the corresponding Figs. are implemented as processes that include multiple operations, sub-processes, or other types of routines. In some cases, operations can be combined, performed in another order, performed in parallel, iterated, or otherwise repeated or performed in another manner. [0370] Fig. 20 is a flow diagram showing an example process 2000 of ensuring differential privacy in accordance with certain embodiments. At 2002, each i-th client computing node 2050 computes a maximum coding redundancy value c_i, which represents the maximum number of coded data points the client computing node can generate, and sends the value c_i to the central server 2060. At 2004, the central server 2060 determines a coding redundancy value to be used in each epoch based on the c_i values received from the client computing nodes. In some cases, this may be the minimum or the maximum of the c_i values received, according to (respectively): c_up = min_i c_i # (CQ2) c_up = max_i c_i # (CQ3) [0371] At 2006, the central server 2060 determines, using the new constraint given by c_up, an optimal coding redundancy c, an optimal epoch time t*, and an optimal number of data points to be processed locally at each device ℓ_i*(t*). In some embodiments, these values may be calculated using a load balancing algorithm, such as the one described in U.S. Patent Application Publication No. US 2019/0138934, which is hereby incorporated by reference in its entirety.
The central server 2060 sends the values to each client computing node, and at 2008, the client computing nodes generate coded data sets based on the values (e.g., based on the coding redundancy c). The coded data sets may be sent thereafter to the central server 2060. [0372] At 2008, the client computing nodes 2050 compute model updates based on their raw data sets (e.g., based on the epoch time t* and the number of data points to be processed ℓ_i*(t*)), and transmit the model updates to the central server 2060. The central server 2060, at 2010, computes model updates based on the coded data sets sent by the client computing nodes 2050, and at 2010, the central server 2060 aggregates the model updates. [0373] Fig. 21 is a flow diagram showing an example process 2100 of ensuring differential privacy at a client computing node in accordance with certain embodiments. Aspects of the process 2100 may be combined with the process 2000 above. At 2102, the client computing node receives the values c_up, ℓ_i*(t*), and ℓ_i from a central server, and at 2104, determines whether the value c_up is greater than its maximum coding redundancy value c_i. If not, then the client computing node generates the coded data set based on the value c_up at 2105. [0374] However, if the value c_up is greater than its maximum coding redundancy value c_i, then the client computing node computes, at 2106, a differential privacy metric ε_i^(0) that would be achieved freely through coding based on the coding redundancy value c_up selected by the central server. At 2108, it computes its residual differential privacy requirement (e.g., the portion of the overall requirement ε_i not already achieved by ε_i^(0)). [0375] The residual privacy may refer to an amount of privacy that is "leaked" over the overall requirement ε_i due to the selection of c_up as the maximum of all the c_i values. This residual privacy may then be mitigated during the generation of coded data at 2110 by performing one or more of the following: (1) distorting the raw data before encoding, e.g., by deleting one or more principal components of the raw data; (2) distorting the coded data before sharing it with the central server, e.g., by deleting one or more principal components of the coded data; (3) injecting additive noise into the raw data before encoding; and (4) injecting additive noise into the coded data before sharing it with the central server. [0376] Fig. 22 is a flow diagram showing an example process 2200 of ensuring differential privacy using a trusted server in accordance with certain embodiments. Aspects of the process 2200 may be combined with the process 2000 above. Although described below with respect to a trusted server 2255, aspects of the process 2200 may be used with a trusted execution environment in other embodiments. At 2202, after a secure channel is established between the client computing node 2250 and the trusted server 2255, the client computing node 2250 encrypts the coded data it has generated (e.g., as described above) and sends the encrypted coded data to the trusted server 2255. At 2204, the trusted server 2255 decrypts the coded data and, at 2206, aggregates the decrypted coded data sets from each of the client computing nodes 2250. The trusted server 2255 then distorts the aggregated coded data at 2208. The distortion may be performed in one or more of the ways described above with respect to 2110 of Fig. 21.
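The distortion options listed at 2110 can be illustrated with the short NumPy sketch below; it is not part of the original disclosure, and the choice of SVD-based principal-component removal and the Gaussian noise scale are assumptions made only for illustration.

```python
import numpy as np

def drop_principal_components(data, num_components=1):
    """Distort data by removing its strongest principal components (options 1 and 2)."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    s[:num_components] = 0.0                 # delete the leading singular values
    return (U * s) @ Vt

def add_gaussian_noise(data, sigma, seed=None):
    """Distort data by injecting zero-mean additive noise (options 3 and 4)."""
    rng = np.random.default_rng(seed)
    return data + rng.normal(0.0, sigma, size=data.shape)
```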
It should be noted that, to meet the same level of privacy, the amount of distortion performed on the aggregated coded data in the embodiment shown may be smaller than in embodiments where the data is distorted at each client computing node. The trusted server 2255 then sends the distorted aggregated data to the central server 2260, which then computes model updates based on the distorted aggregated data (e.g., in the same manner as 2010 of Fig. 20) at 2210. The central server may then perform other steps in the CFL technique, e.g., other aspects of the process 2000 of Fig. 20. [0377] Fig. 23 is a flow diagram showing another example process 2300 of ensuring differential privacy using a trusted server or execution environment in accordance with certain embodiments. Aspects of the process 2300 may be combined with the process 2000 above. At 2302, after a secure channel is established between the client computing nodes 2350 and the trusted server 2355, the trusted server 2355 generates pairwise-complementary (i.e., opposite or cancelling) high-power noise matrices (N^(j), −N^(j)), where the dimension of N^(j) is c × d, for j = 0, 1, …, and sends the paired matrices to a pair of randomly selected client computing nodes 2350 (i.e., with N^(j) going to a first client computing node and −N^(j) going to a second client computing node). At 2304, each client computing node 2350 injects (e.g., adds) the noise matrix into its coded data set (X̃_i + N_i, where N_i is the noise matrix received by the i-th device) or into its raw data before generation of the coded data. At 2306, each client computing node further distorts the coded data X̃_i, e.g., as described above with respect to 2110 of Fig. 21. The client computing nodes then send their coded data to the central server 2360, which then computes model updates based on the distorted aggregated data (e.g., in the same manner as 2010 of Fig. 20) at 2308. The central server may then perform other steps in the CFL technique, e.g., other aspects of the process 2000 of Fig. 20. Q. CODED AND UNCODED PRIVACY BUDGETS TO GUARANTEE DIFFERENTIAL PRIVACY IN CODED FEDERATED LEARNING [0378] In certain embodiments, privacy budgets may be partitioned between generating the coded data and the uncoded gradient updates in federated learning. In particular, client compute devices may inject noise into one or both of the coded data that is sent to the central server and the model updates that are sent to the central server, to ensure differential privacy. The client compute devices may adjust the amount of noise injected in either case based on an overall privacy guarantee set by the central server. Because the central server performs weighted aggregation of the model updates performed by the client computing nodes using raw data and the model updates performed by the central server using the coded data, the amount of noise (and thus privacy) injected into either process can be adjusted so that data is considered/aggregated appropriately. For example, a client computing node with a poor connection to the central server may choose to place more of its privacy budget on the coded side versus the uncoded side (i.e., will inject more noise into the coded data than into the model updates), as the central server may weight the server's updates performed using the coded data higher than the client's model updates performed using raw data, due to the poor connection.
The opposite may be true for a client with a good connection, i.e., the client may place more of its privacy budget on the uncoded side versus the coded side. [0379] Fig. 24 is a flow diagram showing an example process 2400 of partitioning privacy budgets in accordance with certain embodiments. The example process may be implemented in software, firmware, hardware, or a combination thereof. For example, in some embodiments, operations in the example process shown may be performed by one or more components of an edge computing node, such as processor(s) of a client device similar to client computing nodes 1202 of Fig. 12. In some embodiments, one or more computer-readable media may be encoded with instructions that implement one or more of the operations in the example process below when executed by a machine (e.g., a processor of a computing node). The example process may include additional or different operations, and the operations may be performed in the order shown or in another order. In some cases, one or more of the operations shown in Fig. 24 are implemented as processes that include multiple operations, sub-processes, or other types of routines. In some cases, operations can be combined, performed in another order, performed in parallel, iterated, or otherwise repeated or performed in another manner. [0380] After the central server 2460 has sent a differential privacy guarantee (ε_total, δ_total) to the client computing nodes 2450 (e.g., by broadcasting the guarantee to the nodes) and a request for the client computing nodes 2450 to compute c coded data points, at 2402, each client computing node 2450 splits its privacy budget into coded (ε_coded, δ_coded) and uncoded (ε_uncoded, δ_uncoded) portions based on its communication and compute capabilities, ensuring that (ε_coded, δ_coded) + (ε_uncoded, δ_uncoded) = (ε_total, δ_total). The privacy splits determined/made by the client computing nodes 2450 are not shared with the central server 2460. [0381] At 2404, the client computing nodes 2450 generate coded data based on the coded privacy budget determined at 2402. In particular, each client computing node 2450 generates and transmits coded data X̃_i = G_i X_i + N_i and ỹ_i = G_i y_i + N_i′, where each entry of G_i is sampled from the standard normal distribution N(0,1), and N_i and N_i′ are sampled independently from a zero-mean Gaussian distribution with variance σ_i². The matrices G_i, N_i, and N_i′ are private and not shared with the central server 2460. The noise power σ_i² that is added is also private and not shared with the central server. The noise matrices N_i and N_i′ may be carefully calibrated such that the privacy leakage in generating the coded data is (ε_coded, δ_coded). The coded data is then sent to the central server 2460. [0382] Thereafter, the central server 2460 sends a current model or current model parameter w(t) to the client computing nodes 2450. At 2406, the client computing nodes 2450 compute model updates using their raw, uncoded data. This may be done as described above. Then, at 2406, the client computing nodes 2450 perturb the model updates with noise based on their uncoded privacy budget.
This may be done as follows: g_i(t) = X_i^T (X_i w(t) − y_i) + n_i(t) # (CR1) where n_i(t) in epoch t is also zero-mean Gaussian noise, carefully calibrated by the standard Gaussian mechanism for DP to ensure that the privacy leakage over T epochs is bounded by (ε_uncoded, δ_uncoded). The client computing nodes 2450 may then send their model updates to the central server 2460, which, at 2408, aggregates the model updates. In some instances, the central server 2460 may wait for a fixed duration T_epoch and collect all the model updates which it has received by that time. It assumes that all other model updates from the client computing nodes are lost. The central server may aggregate or combine the model updates as follows: g(t) = a_0 ∗ g_coded(t) + Σ_i a_i ∗ b_i(t) ∗ g_i(t), where the first term is the gradient generated from the coded data, the second term comprises the gradient updates from the uncoded data, and b_i(t) is one if client i successfully transmits its gradient in round t to the MEC server, and 0 otherwise. The a_i values may be chosen in such a manner that g(t) is an unbiased estimator, or so as to minimize the variance of the gradient. R. SCALABLE METHODS FOR PERFORMING IMPORTANCE SAMPLING AND CODED FEDERATED LEARNING: [0383] In federated learning, a goal is to learn a global model using data, as well as computation power, from other client computing nodes (or clients), while the data of the clients is not shared. See for example H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data", In Conference on Artificial Intelligence and Statistics, 2017 (hereinafter "Communication-Efficient Learning"). [0384] In federated learning, we might have a large number of clients, yet some may have more important data than others. Hence, some embodiments involve performing Importance Sampling (see for example Balakrishnan, R., Akdeniz, M., Dhakal, S., & Himayat, N. (2020, May), Resource Management and Fairness for Federated Learning over Wireless Edge Networks, In 2020 IEEE 21st International Workshop on Signal Processing Advances in Wireless Communications (SPAWC) (pp. 1-5), IEEE (hereinafter "Resource Management and Fairness for FL")), where clients are sampled based on the importance of their data. Some embodiments further include adding a layer of reliability to the above, as well as hastening the learning process. For the latter, some embodiments implement a Coded Federated Learning approach. See for example Prakash, S., Dhakal, S., Akdeniz, M., Avestimehr, A. S., & Himayat, N. (2020), Coded Computing for Federated Learning at the Edge, arXiv preprint arXiv:2007.03273 (hereinafter "Coded Computing for FL"). [0385] In this Section, some embodiments provide new techniques based on Importance Sampling and Coded Federated Learning. Some embodiments benefit from both solutions, while adding a few extra advantages, such as at least one of client scalability, reduction of computation complexity, or a smoother learning process. [0386] "Resource Management and Fairness for FL" showed how to perform Importance Sampling, while "Coded Computing for FL" dealt with the coding part, which mitigates the effects of straggling clients. However, neither the solution in "Resource Management and Fairness for FL" nor the one in "Coded Computing for FL" deals with the issues presented in the concurrent implementation of Importance Sampling and the mitigation of stragglers through Coded Federated Learning.
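As a preview of the two-step sampling approach developed in this Section R (see [0387] and the discussion around Fig. 36 below), the following sketch, which is not part of the original disclosure, shows how a server might periodically draw M of the N clients from an importance distribution and then, at each epoch, draw K of those M uniformly; the function and variable names are assumptions.

```python
import numpy as np

def two_step_sampling(importance, n_clients, M, K, E, E_prime, seed=0):
    """Every E_prime epochs, resample M clients from the importance distribution;
    at every epoch, pick K of those M uniformly for coded federated learning."""
    rng = np.random.default_rng(seed)
    schedule = []
    subsampled = None
    for t in range(E):
        if t % E_prime == 0:
            subsampled = rng.choice(n_clients, size=M, replace=False, p=importance)
        selected = rng.choice(subsampled, size=K, replace=False)  # uniform within the subset
        schedule.append(selected)
    return schedule

# Example: N = 100 clients, M = 20 resampled every E' = 10 epochs, K = 10 per epoch.
p_importance = np.full(100, 0.01)
rounds = two_step_sampling(p_importance, 100, M=20, K=10, E=50, E_prime=10)
```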
[0387] Some embodiments provide a novel two-step sampling approach that allows for feasible sampling out of a large number of clients. Some embodiments provide Bernoulli encoding of the data, which reduces the computation associated with encoding the clients' data. Some further embodiments provide for per-client coding, which uses client-specific coded data instead of a sum of coded data. [0388] Advantageously, combining Importance Sampling with Coded Federated Learning allows for faster learning convergence, while reducing the learning time as well as the wall-clock time associated with the learning process. In addition, a novel sampling process according to some embodiments advantageously allows a feasible scaling of the number of clients involved in updating the global model. Some embodiments further advantageously reduce the complexity of encoding by reducing the number of floating-point multiplications needed to update a model. In addition, embodiments involving the use of client-specific coded data advantageously reduce the learning gradient's variance. [0389] Some embodiments involve the use of key message exchanges between clients and a MEC server, which can be intercepted using tools such as WireShark, etc. Some message exchanges according to embodiments may include: 1. clients i reporting the number of their data points l_i to the MEC server; 2. clients i periodically sharing the evaluated loss of their current model L_i; 3. the MEC server selecting both a subsampled set of M clients, as well as K clients (out of M) for each training epoch; this subsampling is different from typical random sampling, and may, according to some embodiments, be observed from analyzing message exchanges between the MEC server and certain clients; and/or 4. some clients sharing their coded data with the MEC server periodically, where current approaches either do not share any coded data or share from the entire set of clients at the beginning of training. [0390] Existing approaches utilize a fixed global epoch duration, and simply drop the client updates that are received after the fixed global epoch duration has elapsed. However, some embodiments implement a variable global epoch duration which may remain fixed for E' epochs and then be recomputed. The same may hold for the number of training data points, which may also change every E' epochs. Such a periodic variation of the epoch duration can be tracked and attributed to a load balancing algorithm such as the one utilized according to some embodiments. Load balancing refers to deciding how many data samples (training examples) each client in the selected set is going to process (and for how long) at every epoch t, at every E' epoch interval. [0391] Embodiments advantageously eliminate the need for storing the coded data from all N clients that may participate in learning with the MEC server. Hence, a smaller amount of coded data (from M clients with M < N) may be stored at any time. [0392] Some embodiments combine Importance Sampling and Coded Federated Learning. Embodiments encompass the two approaches to learn a linear regression model. That is, embodiments combine Importance Sampling (IS) and Coded Federated Learning (CFL), with one goal being to arrive at a model β that minimizes a loss expressed as ‖Xβ − y‖_F², corresponding to the Mean Squared Error (MSE), where X, y are the data points and their labels, respectively, and where ‖ ‖_F denotes the Frobenius norm.
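Since the loss in [0392] is a linear-regression MSE, the federated gradient-descent step described next (Equations (DQ1)-(DQ2) below) can be sketched as follows. This code is not from the original disclosure, and the learning-rate value and function names are assumptions.

```python
import numpy as np

def local_gradient(X_i, y_i, beta):
    """Per-client contribution to the MSE gradient: X_i^T (X_i beta - y_i)."""
    return X_i.T @ (X_i @ beta - y_i)

def gda_step(beta, client_data, lr=0.01):
    """One federated GDA step: the server sums the clients' local gradients,
    normalizes by the total number of data points m, and updates the model."""
    m = sum(X_i.shape[0] for X_i, _ in client_data)
    grad = sum(local_gradient(X_i, y_i, beta) for X_i, y_i in client_data) / m
    return beta - lr * grad
```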
Some embodiments focus on learning the model using a Gradient Descent Algorithm (GDA) (see Ruder, S. (2016), An overview of gradient descent optimization algorithms, arXiv preprint arXiv:1609.04747 (hereinafter "An Overview of GDOA")), a commonly used algorithm for optimization problems, and for learning linear regression models in particular. In GDA, a model is learned by taking small steps in a direction opposite to the gradient, a method which leads to convergence to a local minimum. In other words, in GDA, the update to the model is given by Equation (DQ1) below: β^(t+1) = β^(t) − μ^(t+1) g^(t+1) Eq. (DQ1) where μ^(t+1) and g^(t+1) are the gradient step and the gradient at time t + 1, respectively. This algorithm is especially suitable for federated learning, as different clients can compute their local gradients, and the MEC server may then aggregate all local gradients to produce the globally updated model, since the gradient of the loss function over all clients' datasets may be given by Equation (DQ2) below: g = (1/m) Σ_i X_i^T (X_i β − y_i) Eq. (DQ2) where X_i, y_i are the data points and the labels available at client i, respectively, and m is the total number of data points (across all clients). [0393] In Importance Sampling, the clients evaluate the loss of the currently learned model based on the data they have, namely using Equation (DQ3) below: L_i = (1/l_i) ‖X^(i) β − y^(i)‖² Eq. (DQ3) where X^(i), y^(i) are the data points and labels at the i-th client, l_i is the number of data points client i has, and β is the model or the global weight. [0394] In linear regression, the loss function is the Mean Square Error (MSE) between the input and the predicted output. It was shown in "Resource Management and Fairness for FL" that clients should be sampled according to the distribution given by Equation (DQ4): q_i = l_i L_i / Σ_j l_j L_j Eq. (DQ4) and that the bias in the gradient update should later be corrected by multiplying the gradient update by p_i / q_i, where p_i is given by Equation (DQ5): p_i = l_i / Σ_j l_j Eq. (DQ5) See for example Goodfellow, I., Bengio, Y., Courville, A., & Bengio, Y. (2016), Deep learning (Vol. 1), Cambridge: MIT Press (hereinafter "Deep Learning"). In Equations (DQ4) and (DQ5) above, both j and i denote a client, with the difference being that j disappears due to the noted summations. Thus, q_i is a normalized value of l_i L_i across clients, and p_i is a normalized value of l_i across clients. [0395] "Coded Computing for FL" showed how to learn a linear regression model where all clients participate. According to some embodiments, instead of waiting for all clients, the MEC server may determine and implement a load-balancing policy, for example by deciding how much time is given for all clients to perform a GDA step (this time being referred to as t*), and how many data points l_i* will be used by each client i. See "Coded Computing for FL." The rationale behind the above approach is that, since some clients have better connectivity and computation power than others, a mechanism ought to be used to avoid waiting until the slowest client (the "straggler") of all N clients finishes its calculation, delaying the learning process. By sending coded data, i.e., when each client i sends to the MEC server coded data (G_i W_i X_i, G_i W_i Y_i), noting that G_i is the encoding matrix and W_i the scaling matrix, prior to the learning process, the MEC server can compensate for straggling clients and missing data points, and arrive at an unbiased estimate of the true gradient at each step. The above mechanism can result in faster and more reliable learning.
Note that G_i, the encoding matrix, and W_i, the scaling matrix, are, according to some embodiments, not to be shared with the MEC server. Furthermore, the above coding mechanism may also work for arriving at a kernel transformation, to allow fitting linear models to data which is not linearly separable. [0396] Some embodiments aim to benefit from Importance Sampling, as well as from Coded Federated Learning. [0397] According to some embodiments, a baseline approach may involve having all N clients send their coded data beforehand, and at each learning epoch, the MEC server would sample K clients based on Importance Sampling. The learning epoch would then execute as a Coded Federated Learning epoch on those K clients. That is, the MEC server would perform load-balancing, gather gradients from non-straggling clients, and compensate for the stragglers using the coded data. This baseline approach is illustrated in Fig. 25. [0398] Fig. 35 shows an edge learning environment 3500 including clients 3502 and a MEC server 3504. In Fig. 35, the MEC server 3504 may sample K clients 3502 based on Importance Sampling, and the learning epoch would then execute as a Coded Federated Learning epoch on the K clients. [0399] The baseline approach, an example of which is shown in Fig. 35, may present scalability problems. When the number of clients N ≫ K, that is, when there is a large number of clients N as compared with the sampled clients K, which is a common assumption in learning environments, the baseline approach as outlined above may suffer from significant overheads, such as in computation, communication and storage. [0400] To overcome the above issues with overhead, some embodiments propose a novel approach, based on subsampling, as depicted in Fig. 36. [0401] Fig. 36 shows an edge learning environment 3600 similar to that of Fig. 35, with N clients 3602 and a MEC server 3604, where Importance Sampling and Coded Federated Learning are performed together with subsampling. In particular, some embodiments include sampling M clients, where K < M ≪ N, periodically (every E' epochs), according to the Importance Sampling distribution. Then, at each learning epoch, the MEC server may sample K clients (out of the M subsampled clients) uniformly. It has been shown, both analytically and empirically, that the sampling strategy described in the context of Fig. 36 achieves similar results as the baseline, while significantly reducing the overheads associated with Coded Federated Learning. [0402] According to some further embodiments, a reduction in computation complexity may be achieved by encoding the data at the clients using a Bernoulli encoding matrix rather than a Gaussian one. [0403] It was shown in "Coded Computing for FL" that clients should encode their data using an encoding matrix whose elements are drawn in an independent and identically distributed (i.i.d.) manner from a distribution with a zero mean and a variance of 1/c, with c representing the number of coded data points. When the encoding matrix elements are drawn from a Gaussian distribution, a large number of floating-point multiplications is necessary, and these are known to be costly. Instead, embodiments suggest drawing the elements of the matrix from a distribution that picks {−1, 1} with probability one half each, and scaling the elements by multiplying the matrices by √(1/c) later, at the server side. [0404] Some further embodiments include using per-client coded data.
In current mechanisms for Coded Federated Learning, the server sums the coded data from all clients, and uses the sum to compensate for missing data. It has been shown in "Coded Computing for FL" that the above mechanism results in an unbiased estimate of the gradient. However, the variance of the estimated gradient can pose problems, such as with respect to the learning rate. Some embodiments propose using coded data only for the missing data points, to decrease such variance, advantageously resulting in a faster learning process. Per Client Coded Data: [0405] The proposed algorithm for achieving per-client coded data is summarized below in the context of Algorithm 1 and Figs. 27-32. [0406] Referring to Figs. 27-32, these figures are similar to Figs. 25 and 26 in that they show an edge learning environment 2700, with N clients 2702 and a MEC server 2704. [0407] A process according to Algorithm 1 may include the following operations: 1. the server initializes K, N, M, μ^(0), g^(0), E, E' (see Fig. 27) where: a. K is the subsampling number representing the number of clients to undergo training during each epoch; b. N is the total number of clients; c. M is the sampling number representing the number of clients to be sampled from which K clients are to be subsequently subsampled; d. μ^(0) is the gradient step at time 0; e. g^(0) is the gradient at time 0; f. E is the maximum number of epochs the algorithm is running; g. E' is the periodicity at which the operations 4.a.i to 4.a.v immediately below are performed, and thus operations 4.a.i to 4.a.v are not done at every epoch but once in E' epochs; 2. the clients share l_1, …, l_N with the server (see Fig. 27), where l_i corresponds to the number of data points at client i; 3. the server calculates p_i = l_i / Σ_j l_j, ∀ i = 1, …, N where: a. l_i is the number of data points at client i; b. l_j is the number of data points at client j, with the understanding that it will disappear by virtue of the summation in the calculation of p_i; and c. p_i is the ratio of the number of data points at client i over the number of data points at all clients. 4. For all epochs (i.e. number of a specific epoch) t = 0, …, E − 1 do: a. If mod(t, E') == 0 do: i. the server broadcasts the global weight β^(t), where β^(t) is the global weight at epoch number t; ii. the clients share their evaluated losses L_1, …, L_N with the server, for use for example in Equation (DQ4) (see Fig. 28); iii. the server selects M clients out of all N clients using the q⃗ distribution (see Fig. 29) – q⃗ is similar to a look-up and contains the probability for each client to be included in the selection of the subset that is to be polled in the next E' epochs; it is also updated every E' epochs at ii. above; the clients calculate l_i L_i and send it to the server, and the server normalizes these values with the summation of l_j L_j over all clients, per Equation (DQ4), to obtain the q⃗ entry for each client; iv. the server performs load-balancing by determining t*, l_i* ∀ i, where t* denotes the time for all clients to perform a gradient descent algorithm step and l_i* is the number of data points to be used by client i (see Fig. 28) – t* is the duration of an epoch (versus t, which denotes an epoch number), and refers to the duration of time, in seconds, that the server is going to wait for clients to return their local gradients, corresponding to a time duration to be used by the clients i to perform a gradient update operation as expected by the server.
It is optimized once every E’ epochs and used for the next E’ epochs; it may pertain to all clients N or a sampling or subsampling of N; v. the M clients share coded data ^^ ^^̂ = ^^ ^^ ^^ ^^ ^^ ^^ , ^^ ^^̂ = ^^ ^^ ^^ ^^ ^^ ^^ with the server, where G i is the encoding matrix, W i the scaling matrix, X i the datapoints at client i, and Y i the labels at client i (see Fig.30); b. End if 5. the server selects ^^ clients out of ^^ uniformly (such as without consideration of importance sampling or other sampling criteria) (See Fig.31); 6. the server sends the K clients (see Fig.32); 7. the K clients send ^ to the ) server where ^ is the gradient at client i at epoch number t (see Fig.32); since t* is the time in seconds that the server is going to wait for this gradient update (a time duration to be used by clients i to perform a gradient update operation as expected by the server) and ^^ ^ ∗ ^ is the total number of datapoints client i is going to use to calculate this update. Therefore, ( ^^ ^ ∗ ^, ^^ ^^ ∗ ) is a subset of data set with dimension ^^ ^ ∗ ^, sampled from the larger dataset ( ^^ ^^ , ^^ ^^ ). 8. the server calculates is the global gradient at epoch number t+1 (see Fig.32); 9. The server updates the global weight at epoch number t+1 ^^ ( ^^+1) = − ^^ ( ^^+1) ^^ ( ^^+1) (see Fig.32); and 10. End for. [0408] One challenge in the above algorithm is the sub-optimality of the calculation of optimal epoch duration (t*) and optimal number of datapoints l i * used by each client to calculate the local gradients, if K < M. This is because t*, l i * are calculated assuming all M clients will participate in federated learning in step 7 and that the expected return is maximized. When K < M, the optimal t*, l i * do not hold anymore. In such a case, we argue that if M is partitioned into subsets of size K such that each subset is statistically identical to the set M, then the previously chosen t*, l i * for the E’ epochs could still perform well. [0409] To this end, we propose to utilize a partitioning algorithm that can group the clients i nto clusters where each cluster contains “similar” clients, and where the similarity is determined by the clients’ data distribution, average compute time and communication times. For example, according to this embodiment, each client may be indicated by a point in an M- dimensional space where M represents the feature space for the clustering algorithm. The features may include the data dimension, number of data points at client, compute time and communication time. Then, a clustering algorithm can be utilized to partition the space into K clusters by grouping nearby clients i nto a cluster. The complexity of such an algorithm is typically dependent on the number of clients M (e.g., O(M2)). However, since M << N, this is not a major concern. [0410] According to this embodiment, in each training round, within E’, the MEC server may sample a client randomly from each of the K clusters to participate in federated learning with the pre-determined epoch duration t* and number of training data points l i *. Experiments and Performance Evaluation: [0411] We simulated the performance of our Importance Sampling and Coded Federated Learning approach on a MNIST dataset. See LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324 (hereinafter “Gradient-Based Learning”). 
The MNIST dataset contains hand-written digits, and is commonly used for benchmarking in the area of machine learning. We learned a regression model with N = 100 clients, where every E' epochs, M = 20 clients are selected according to Importance Sampling, and every epoch, K = 10 clients are selected uniformly out of the group of M subsampled clients. The learning rate is set with μ^(0) = 1 or μ^(0) = 0.1. The encoding matrix is either Gaussian or Bernoulli. In order to simulate a non-i.i.d. case, each client contains images of (mostly) a single digit, e.g., some clients have (almost) only images of the digit 0, some of the digit 1, and so forth. [0412] We compared the subsampling approach with the baseline approach, as well as with Coded Federated Learning without Importance Sampling. The results are shown in Figs. 33 and 34. [0413] As shown in Fig. 33, graph 3300 plots the number of epochs E against the Mean Square Error (MSE). The graph 3300 shows, among other things, that while Importance Sampling has a measurable effect on reducing the MSE for the same number of training epochs, combining Importance Sampling with Coded Federated Learning and using subsampling, for example as noted in Algorithm 1 above, has an even greater effect in reducing the MSE for the same number of epochs, while, for the same sample size M, reducing the number of cycles E' of epoch duration change does not have a significant effect in reducing the MSE. [0414] As shown in Fig. 34, graph 3400 plots the number of epochs E against percent accuracy. The graph 3400 shows, among other things, that Importance Sampling and subsampling according, for example, to Algorithm 1 are roughly on par with one another in terms of accuracy, and that they provide a significant improvement over the case of uniform sampling. In addition, reducing the number of cycles E' of epoch duration appears to provide enhanced accuracy. Generalized Algorithm: [0415] In previous sections we have proposed and investigated an algorithm and a framework for a federated learning system that performs Importance Sampling in order to reduce communication and compute overhead and power for clients, along with coding in order to mitigate stragglers. The performance of the algorithm depends on the selection of the hyper-parameter E', which correlates with the periodicity (an integer number of epochs) of updating the subsampled clients. A smaller E' therefore means more frequent updating of the subsampled set of clients, yielding better performance at the cost of more communication overhead for coded data exchange. However, we may tackle this issue by keeping the number of clients we are replacing at each E' epoch small. In other words, instead of deciding on a fresh set of M clients at each E' epoch, we can discard L out of M clients from that set and add a new set of L clients out of the remaining set of N − M, resulting in M − L + L clients to form a new set of M clients on which we perform coded learning. Discarding and resampling of these L clients may depend on the importance distribution q⃗. Note that this algorithm is the same as Algorithm 1 when L = M. [0416] Even though we did not investigate the performance of this generalized algorithm, Algorithm 2 below, we do provide the framework as follows. [0417] A process according to Algorithm 2 may include the following operations: 1. the server initializes K, N, M, μ^(0), g^(0), E, E' (see Fig. 27) where: a. K is the subsampling number representing the number of clients to undergo training during each epoch; b. N is the total number of clients; c.
c. M is the sampling number representing the number of clients to be sampled from which K clients are to be subsequently subsampled; d. μ^(0) is the gradient step at time 0; e. g^(0) is the gradient at time 0; f. E is the maximum number of epochs the algorithm is running; g. E′ is the periodicity at which the operations 4.a.i to 4.a.vi immediately below are performed, and thus operations 4.a.i to 4.a.vi are not done at every epoch but once every E′ epochs; 2. the clients share n_1, …, n_N with the server (see Fig.27), where n_i corresponds to the number of data points at client i; 3. the server calculates p_i = n_i / Σ_j n_j for i = 1, …, N, where: a. n_i is the number of data points at client i; b. n_j is the number of data points at client j, with the understanding that the index j disappears by virtue of the summation in the calculation of p_i; and c. p_i is the ratio of the number of data points at client i over the number of data points at all clients; 4. for all epochs (i.e., for each specific epoch number) t = 0, …, E − 1, do: a. if mod(t, E′) == 0, coded data is shared with the server from a subsampling of the N clients: i. the server broadcasts the global weight β^(t), where β^(t) is the global weight at epoch number t; ii. the clients share their per-client values with the server, for example using Equation (DQ4) (see Fig.28); iii. the server selects L clients out of the participating M clients (out of all N clients) using a function of the p⃗ distribution or using a uniform distribution, and discards them from the set of M clients; iv. the server selects L clients out of the remaining N − M, or out of the remaining N − M + L (including the L that were discarded), using a function of the p⃗ distribution and adds them to the M − L clients that remain; v. the server performs load balancing for the newly determined L clients by determining t*, l_i* ∀ i, where t* denotes the time for all clients in L to perform a gradient descent algorithm step and l_i* is the number of data points to be used by client i; vi. the L clients share coded data X̂_i = G_i W_i X_i, Ŷ_i = G_i W_i Y_i with the server, where G_i is the encoding matrix, W_i the scaling matrix, X_i the datapoints at client i, and Y_i the labels at client i; b. End if 5. the server selects K clients uniformly out of the M clients remaining from operation 4 above; 6. the server sends the K clients β^(t), the global weight at epoch number t; 7. the K clients send g_i^(t) to the server, where g_i^(t) is the gradient at client i at epoch number t; 8. the server calculates g^(t+1), the global gradient at epoch number t+1; 9. the server updates the global weight at epoch number t+1 as β^(t+1) = β^(t) − μ^(t+1) g^(t+1); and 10. End for.
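A compact sketch of the client-replacement step of Algorithm 2 (operations 4.a.iii and 4.a.iv) follows. This is an illustrative reading under stated assumptions: the importance distribution p⃗ is given as a probability vector over all N clients, L of the M currently sampled clients are discarded (uniformly here, although the disclosure also allows importance-based discarding), and L replacements are drawn from the remaining clients in proportion to p⃗. The function names are hypothetical.

```python
import numpy as np

def refresh_sampled_set(current_set, p, L, rng):
    """Replace L of the M currently sampled clients (Algorithm 2, steps 4.a.iii-iv).

    current_set: array of M client indices currently used for coded learning.
    p: importance distribution over all N clients (length N, sums to 1).
    L: number of clients to discard and re-sample every E' epochs.
    Returns a new array of M client indices.
    """
    current_set = np.asarray(current_set)
    n_clients = len(p)

    # Discard L clients from the current set (uniformly at random in this sketch).
    discard = rng.choice(current_set, size=L, replace=False)
    kept = np.setdiff1d(current_set, discard)

    # Draw L replacements from the clients not currently kept (this includes the
    # discarded L, i.e. the N - M + L option), proportionally to their importance.
    candidates = np.setdiff1d(np.arange(n_clients), kept)
    q = p[candidates] / p[candidates].sum()
    added = rng.choice(candidates, size=L, replace=False, p=q)

    return np.concatenate([kept, added])   # M - L + L = M clients

# Example: N = 100 clients, M = 20 sampled, L = 5 replaced every E' epochs.
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(100))
sampled = rng.choice(100, size=20, replace=False)
sampled = refresh_sampled_set(sampled, p, L=5, rng=rng)
```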
S. SYSTEM AND METHODS FOR CODED FEDERATED LEARNING WITH DIFFERENT CODING REDUNDANCIES FOR CLIENTS [0418] As noted previously, federated learning environments involve clients collaboratively learning an underlying model by periodically exchanging model parameters with a central server, such as a MEC server. The convergence behavior of federated learning algorithms suffers from heterogeneities in computing power, communication links and data distribution across client devices. To mitigate the impact of these heterogeneities, some embodiments propose a coded computing method that enables each client to generate an optimal number of encoded data at a per-client redundancy level, and to share that encoded data with the MEC server. [0419] In one previous coded computing solution, a MEC server collects communication and compute parameters from clients and performs a global load balancing action that instructs all clients to share an equal number of encoded data with the server. See U.S. Patent Publication US-2019-0138934. In the latter solution, the server combines the received encoded data to form a composite encoded data. During training, the server computes a gradient from the composite encoded data in each epoch to probabilistically compensate for gradients sent by the clients that fail to arrive within an optimized epoch time, achieving model convergence up to four times faster than classical FL, while maintaining a certain level of privacy for clients by not exposing the raw data. [0420] Some prior solutions, however, require all clients to generate an equal number of encoded data, and hence the global coding redundancy is dominated by the straggler clients. Such solutions therefore unfairly require reliable clients to also generate and transmit a large number of encoded data. This is an inefficient use of bandwidth and computation resources. [0421] Some embodiments propose a coded federated learning framework, where the MEC performs load balancing to calculate: 1. an optimal partitioning of each client's data into raw data (for client-side processing) and encoded data (for server-side processing), and 2. an optimal epoch time global to all clients. Under this framework, some embodiments propose two coded computing schemes for encoded data generation at a per-client redundancy, and for gradient computation. [0422] Embodiments in this section advantageously enable per-client coding redundancy based on the network and compute parameters of each client. This enables each client to make its own trade-off between privacy and performance. Embodiments outperform uncoded federated learning in heterogeneous regimes, and do not require decoding overhead for gradient computation. [0423] Some embodiments in this section involve an exchange of encoded data between clients and the server in a systematic way, which may likely be standardized for Mobile Edge Computing. In such a case, standard compliance can be invoked to detect infringement. Controlled experiments in a laboratory can also be performed to create different levels of compute, communication and data heterogeneities to accurately predict the behavior of the proposed algorithms herein. Background on Federated Learning for Linear Regression Workloads: [0424] Raw data is located at edge devices. More specifically, the data matrix X^(i) and the associated label vector y^(i) are located at the i-th device (i.e., client), where ℓ_i is the number of training data points available. Note that the dimension of X^(i) is ℓ_i × d, where d is the dimension of the feature space. [0425] In federated learning, each client locally computes its partial gradient ∇f_i(β^(r)) in each epoch, say the r-th epoch, as shown in Equation (DR3), where β^(r) is the estimate of the global model. The partial gradient is communicated to the master server for aggregation. The global gradient is given by Equation (DR4): ∇f(β^(r)) = Σ_{i=1}^{n} ∇f_i(β^(r)) Eq. (DR4), and the model is updated as shown by way of Equation (DR5), where ℓ = Σ_i ℓ_i is the totality of training data points and μ ≥ 0 is the learning rate.
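To make the background of paragraphs [0424]-[0425] concrete, here is a minimal sketch of the uncoded federated linear-regression baseline. It assumes the partial gradient of Equation (DR3) is the usual least-squares gradient (X^(i))^T(X^(i)β − y^(i)) and that the update of Equation (DR5) scales the aggregated gradient by μ/ℓ; both assumptions fill in equation bodies that are not reproduced above.

```python
import numpy as np

def partial_gradient(X_i, y_i, beta):
    """Client-side partial gradient for linear regression (cf. Eq. (DR3), assumed form)."""
    return X_i.T @ (X_i @ beta - y_i)

def federated_round(clients, beta, mu):
    """One epoch: clients compute partial gradients, server aggregates and updates (Eqs. (DR4)-(DR5))."""
    grad = sum(partial_gradient(X_i, y_i, beta) for X_i, y_i in clients)   # Eq. (DR4)
    ell = sum(X_i.shape[0] for X_i, _ in clients)                          # total number of training points
    return beta - (mu / ell) * grad                                        # Eq. (DR5), assumed scaling

# Example with synthetic data: 4 clients, d = 5 features, 30 points each.
rng = np.random.default_rng(0)
beta_true = rng.normal(size=5)
clients = []
for _ in range(4):
    X = rng.normal(size=(30, 5))
    y = X @ beta_true + 0.1 * rng.normal(size=30)
    clients.append((X, y))

beta = np.zeros(5)
for _ in range(200):
    beta = federated_round(clients, beta, mu=0.5)
```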
[0426] Some embodiments propose coded federated learning with a per-client redundancy algorithm comprising the features noted below: 1. load balancing, 2. encoded data generation, and 3. gradient computation. [0427] The first two features may be performed once at the beginning of the training process. The third step may be performed every epoch. Each step is described below in detail. [0428] Load balancing: [0429] In order to describe a load balancing module according to some embodiments, let us introduce a return metric R_i(t; l) that represents the event that a gradient computed over l ≥ 0 data points by the i-th client is available at the server within the deadline time t. More precisely, the return metric R_i may be given by Equation (DR6), where 1_A is an indicator function for event A and T_i is the time taken by the i-th client to download a model, compute a gradient and upload the computed gradient to the server. Similarly, a return metric of the MEC server is defined as given in Equation (DR7), where R_MEC(t; c̃) represents the event that a gradient computed over c̃ ≥ 0 data points by the server is available at the server within the deadline time t, and c̃ ≥ 0 is the number of encoded data points used for gradient computation by the MEC server. [0430] Optimization Parameters: [0431] Definition 1: Deadline time t*: [0432] The deadline time t* is the smallest epoch time window within which the MEC server and the i-th client can jointly compute a gradient over, on average, ℓ_i number of data points for i = 1, …, n clients. Here ℓ_i is the number of raw data points available at the i-th client. [0433] Definition 2: Optimal load for the client l_i*(t): [0434] The optimal load l_i*(t) ≤ ℓ_i for an epoch time t is the number of raw data points used by the client to maximize its average return metric for the epoch time. More precisely, as set forth in Equation (DR8): l_i*(t) = argmax_{0 ≤ l ≤ ℓ_i} E[R_i(t; l)] Eq. (DR8) [0435] Observation 1: [0436] Based on Definitions 1 and 2 above, the MEC server computes gradients from (ℓ_i − E[R_i(l_i*(t*))]) number of data points for i = 1, …, n clients. Therefore, each client must generate (ℓ_i − E[R_i(l_i*(t*))])^+ number of encoded data once at the beginning of training and share it with the MEC server. [0437] We assume that the MEC server has a storage constraint of c_s number of data points. [0438] Definition 3: Optimal load for the MEC server c*(t): [0439] The optimal load c*(t) ≤ c_s for an epoch time t is the number of encoded data points used by the MEC server to maximize its average return metric for the epoch time. More precisely, c*(t) may be given by Equation (DR9): c*(t) = argmax_{0 ≤ c̃ ≤ c_s} E[R_MEC(t; c̃)] Eq. (DR9)
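A small numerical sketch of the per-client load optimization in Equations (DR8)-(DR9) follows. It assumes (i) that the expected return E[R_i(t; l)] equals l times the probability that the client's total round time with l data points falls within the deadline t, and (ii) a shifted-exponential compute-time model in the spirit of "Coded Computing for FL"; both the exact return definition (Equation (DR6)) and the delay parameters are illustrative assumptions, and a simple grid search stands in for a convex solver.

```python
import numpy as np

def expected_return(l, t, a_i, mu_i, comm_i):
    """Assumed form of E[R_i(t; l)]: l * Pr{T_i(l) <= t}.

    Compute time is modeled as a shifted exponential, a_i * l + Exp(rate = mu_i / l),
    plus a fixed communication time comm_i (all illustrative assumptions).
    """
    if l == 0:
        return 0.0
    slack = t - comm_i - a_i * l
    prob_on_time = 1.0 - np.exp(-(mu_i / l) * slack) if slack > 0 else 0.0
    return l * prob_on_time

def optimal_load(t, n_raw, a_i, mu_i, comm_i):
    """Grid-search version of Eq. (DR8): argmax over 0 <= l <= n_raw of E[R_i(t; l)]."""
    loads = np.arange(n_raw + 1)
    returns = [expected_return(l, t, a_i, mu_i, comm_i) for l in loads]
    best = int(np.argmax(returns))
    return best, returns[best]

# Example: one client with 360 raw points, evaluated at a candidate epoch time t.
l_star, exp_ret = optimal_load(t=2.0, n_raw=360, a_i=0.004, mu_i=50.0, comm_i=0.2)
coded_points_needed = max(360 - exp_ret, 0.0)   # Observation 1: (l_i - E[R_i(l_i*(t*))])^+
```

The same grid search, applied with the server's delay statistics over 0 ≤ c̃ ≤ c_s, plays the role of Equation (DR9).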
[0440] Coded Federated Learning Schemes: [0441] Based on Definitions 1-3 and Observation 1 above, we propose the following two embodiments herein. [0442] A protocol according to a first embodiment may involve the following: 1. as a one-stage procedure, each client generates (ℓ_i − E[R_i(l_i*(t*))])^+ number of coded data and uploads the data to the server; 2. the following operations may then be repeated every epoch: a. each client downloads the current model from the server, computes its local gradient from l_i*(t*) raw data points and uploads the local gradient to the server; b. the server computes a gradient from the coded data of each client separately, and combines the computed per-client gradients across all clients to obtain a global gradient; c. the server receives local gradients until the deadline time t*, and combines all the received local gradients into the global gradient; and d. the server updates the model. [0443] A protocol according to a second embodiment may involve the following: 1. as a one-stage procedure: a. each client generates (ℓ_i − E[R_i(l_i*(t*))])^+ number of coded data and uploads the data to the server; and b. the server oversamples the coded data of each client and combines the oversampled coded data to create a composite coded data. 2. the following operations may then be repeated every epoch: a. each client downloads the current model from the server, computes its local gradient from l_i*(t*) raw data points and uploads the local gradient to the server; b. the server computes a gradient from the composite coded data, which serves as the coded contribution to the global gradient; c. the server receives local gradients until the deadline time t*, and combines all the received local gradients into the global gradient; and d. the server updates the model. [0444] Both of the protocols of embodiments one and two above are implemented by finding t*, l_i*(t*), and c*(t*) that satisfy the inequality provided by Equation (DR10). [0445] Before the training starts, in either embodiment, each client may provide statistics of its compute and communication delays to the server. The server may perform load balancing to solve the inequality in Equation (DR10) using a two-stage optimization method outlined below: 1. initialize time t = 0; 2. operation 1: find an optimal load allocation l_i*(t), c*(t) by solving Eq. (DR8) and Eq. (DR9), respectively (note: it has been shown that, for a shifted exponential distribution for compute delays and a geometric distribution for communication delays, the average return metrics in Eq. (DR8) and Eq. (DR9) are piece-wise concave functions of l and c̃; see "Coded Computing for FL"); therefore, operation 1 may be solved with a standard convex optimization tool; 3. operation 2: check whether the condition of Equation (DR10) is satisfied for the current t; 4. if the condition in operation 2 is satisfied, the deadline time t* = t, and the end of the optimization method is reached; 5. but, if the condition in operation 2 is NOT satisfied, then set t = t + Δt, go back to operation 1, and repeat. [0446] Remark 1: [0447] In Equation (DR10), the storage constraint at the MEC is applied on the total amount of coded data shared by all the clients. In another implementation, per-client storage constraints may be used by finding t*, l_i*(t*), and c*(t*) that satisfy the set of inequalities set forth in Equations (DR11a), (DR11b) and (DR11c), where c_s^(i) is the storage constraint at the MEC for the i-th client. [0448] Encoded data generation: [0449] According to some embodiments, random linear coding may be utilized by each client to encode the data. The encoded data at the i-th client may then be given by Equation (DR12), where G_i is a generator matrix of dimension c_i* × ℓ_i, with elements drawn independently from a normal distribution with a mean of 0 and a variance of 1/c_i*, where, as per Equation (DR13) below: c_i* = (ℓ_i − E[R_i(l_i*(t*))])^+ Eq. (DR13), c_i* is the number of coded data at each client, and the scaling w_i, defined in Equation (DR14), is the probability that the raw gradients from the i-th client fail to arrive within the deadline time t*. [0450] The encoded data (X̃^(i), ỹ^(i)) may be transmitted to the server, while the generator matrix G_i is kept private.
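The per-client encoding step of paragraphs [0448]-[0450] can be sketched as follows. The generator-matrix dimensions follow Equation (DR13); the 1/c_i* variance and the use of a single scalar weight w_i (the deadline-miss probability of Equation (DR14)) as the scaling are stated here as assumptions, since the full equation bodies are not reproduced above.

```python
import numpy as np

def encode_client_data(X_i, y_i, c_star_i, w_i, rng):
    """Random linear coding of one client's raw data (cf. Eqs. (DR12)-(DR14), assumed forms).

    X_i: raw data matrix of shape (n_i, d); y_i: labels of shape (n_i,).
    c_star_i: number of coded points this client must generate, (n_i - E[R_i])^+.
    w_i: scaling, taken here as the probability that the client's raw gradient
         misses the deadline (an assumption about the exact form of the weighting).
    Returns the coded pair (X_tilde, y_tilde); the generator matrix G_i stays local.
    """
    n_i = X_i.shape[0]
    # Generator matrix with i.i.d. zero-mean Gaussian entries; the 1/c_star_i
    # variance is an assumed normalization.
    G_i = rng.normal(loc=0.0, scale=np.sqrt(1.0 / max(c_star_i, 1)), size=(c_star_i, n_i))
    X_tilde = G_i @ (w_i * X_i)
    y_tilde = G_i @ (w_i * y_i)
    return X_tilde, y_tilde          # G_i is kept private at the client

# Example: a client with 360 raw points that must share 120 coded points.
rng = np.random.default_rng(0)
X_i = rng.normal(size=(360, 500))
y_i = rng.normal(size=360)
X_tilde, y_tilde = encode_client_data(X_i, y_i, c_star_i=120, w_i=0.3, rng=rng)
```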
At the server, the encoded data may be stored in at least two different ways as set forth below. [0451] Operations according to a first protocol scheme may involve storing the encoded data (X̃^(i), ỹ^(i)) from each client separately at the server. [0452] Operations according to a second protocol scheme may involve the following: 1. composite encoded data is formed as follows: let Equation (DR15) below hold; 2. for each client, use a discrete Fourier transform (DFT) matrix F_i, with c_i* columns and a number of rows set by the oversampling of Equation (DR15), to construct over-sampled encoded data given by Equations (DR16a) and (DR16b) as: X̃_F^(i) = F_i X̃^(i) Eq. (DR16a) and ỹ_F^(i) = F_i ỹ^(i) Eq. (DR16b); 3. form a composite encoded data (X̃, ỹ) as per Equations (DR17a) and (DR17b). [0453] Gradient Computation: [0454] In each r-th epoch, there may be two types of gradients computed: a client-gradient or local gradient, and a server-gradient or global gradient, which may, according to some embodiments, be used together to update the global model β^(r). [0455] Client-gradient: [0456] Each client may compute a local gradient from l_i*(t*) number of raw data points. Therefore, in each epoch, each client randomly picks l_i*(t*) raw data points out of the ℓ_i raw data points available at the client, such that each data point has an equal likelihood of selection given by l_i*(t*)/ℓ_i. The client computes its local gradient and uploads it to the server. The expected value of the total local gradients received by the MEC server within the deadline time t* is given by Equation (DR18), where the weighting has been defined in Eq. (DR14) above. [0457] According to a first scheme, Scheme 1, a server-gradient protocol is provided as described below. According to Scheme 1, in each epoch, the server computes a gradient from the encoded data. The server may compute this gradient from the encoded data of each client separately and add the results together. The expected value of the server-computed gradient in Scheme 1 may be given by Equation (DR19), which carries the factor Pr{T_MEC ≤ t*}. [0458] Assuming the server has much more computing power than the clients, we can approximate the probability Pr{T_MEC ≤ t*} ≈ 1 to obtain the expected value of the server-computed gradient in Scheme 1 as set forth in Equation (DR20). [0459] The server may add the two types of gradients to obtain the overall gradient given by Equation (DR21). [0460] Based on Equations (DR18) and (DR20), it is straightforward to note that this overall gradient is an unbiased estimate of the gradient computed from the entire ℓ data points distributed across all the clients.
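A sketch of the per-epoch gradient combination under the Scheme-1 protocol follows, assuming the linear-regression setting of Equations (DR3)-(DR5): the server computes one gradient from each client's stored coded data, sums them, adds whatever raw-data gradients arrived before the deadline, and applies the model update. The deadline handling and the μ/ℓ update scaling are illustrative assumptions, not a reproduction of Equations (DR19)-(DR21).

```python
import numpy as np

def server_epoch_scheme1(beta, coded_sets, arrived_client_grads, mu, total_points):
    """One Scheme-1 epoch at the server.

    coded_sets: list of (X_tilde_i, y_tilde_i) pairs stored per client.
    arrived_client_grads: raw-data gradients received before the deadline t*.
    """
    # Server-gradient: least-squares gradient from each client's coded data, summed.
    server_grad = np.zeros_like(beta)
    for X_t, y_t in coded_sets:
        server_grad += X_t.T @ (X_t @ beta - y_t)

    # Client-gradients: whatever local gradients made it within the deadline.
    client_grad = sum(arrived_client_grads) if arrived_client_grads else np.zeros_like(beta)

    overall_grad = server_grad + client_grad           # cf. Eq. (DR21)
    return beta - (mu / total_points) * overall_grad   # assumed update scaling
```

In the Scheme-2 variant discussed next, the loop over per-client coded sets would be replaced by a single gradient computed over the composite coded pair (X̃, ỹ).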
[0461] According to a second scheme, Scheme 2, the server computes a gradient from the composite encoded data (X̃, ỹ). The expected value of the server-computed gradient in the Scheme 2 protocol, assuming that Pr{T_MEC ≤ t*} ≈ 1, is given by Equation (DR22), with the quantities involved set forth in Equations (DR23a), (DR23b) and (DR23c), and noting the identity in Equation (DR23d). [0462] Further, we invoke the weak law of large numbers to apply the approximations noted in Equations (DR24) and (DR25). [0463] Using the above identities, we obtain D as provided in Equation (DR26). [0464] Therefore, the global gradient in the Scheme-2 protocol may be given by Equation (DR27). [0465] The server adds the two types of gradients to obtain the overall gradient given by Equation (DR28). [0466] Remark 2: [0467] Based on Equations (DR18) and (DR22), it is straightforward to note that this overall gradient is approximately an unbiased estimate of the gradient computed from the entire ℓ data points distributed across all the clients. The approximation comes from the fact that the matrix D will have small off-diagonal values, which vanish only asymptotically with the [0468] number of coded data points. [0469] Numerical Results: [0470] We now present numerical results for a MEC setup consisting of 24 client nodes and 1 server node. [0471] We use an LTE network, where each client is assigned 3 resource blocks. To model heterogeneity in communication links, link throughput coefficients are generated using {1, ratio1, ratio1^2, …, ratio1^23} scaled by a base rate of 0.4 bits per channel use, where ratio1 = 0.8, and a random permutation of these coefficients is assigned to the clients' communication links. Both the uplink and downlink channels are modeled with a 10% erasure probability. [0472] Similarly, to model heterogeneity in computation power, the average processing rates are generated using {1, ratio2, ratio2^2, …, ratio2^23} scaled by a base rate of 1536 Kilo Multiply-Accumulate operations per second, where ratio2 = 0.8, and a random permutation of these coefficients is assigned to the clients' computation power. A shifted exponential statistical model for computation time, as described in "Coded Computing for FL", is used. [0473] Each client has ℓ_i = 360 training examples. The feature dimension is d = 500. The total storage constraint at the MEC server is c_s = 720.
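The heterogeneous link and compute profiles of paragraphs [0471]-[0472] can be reproduced with a few lines; the geometric-decay recipe, the 0.8 ratio, and the base rates are taken from the text, while treating the resulting coefficients simply as per-client multipliers is an assumption about how they are applied in the simulation.

```python
import numpy as np

def heterogeneity_profile(n_clients, ratio, base, rng):
    """Coefficients {1, ratio, ratio^2, ..., ratio^(n-1)} * base, randomly permuted across clients."""
    coeffs = base * ratio ** np.arange(n_clients)
    return rng.permutation(coeffs)

rng = np.random.default_rng(0)
link_rates = heterogeneity_profile(24, ratio=0.8, base=0.4, rng=rng)       # bits per channel use
proc_rates = heterogeneity_profile(24, ratio=0.8, base=1536.0, rng=rng)    # kMAC operations per second
```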
[0474] The non-i.i.d. training examples at each client are generated using the linear model provided by Equation (DR29): y^(i) = X^(i) β_model + n^(i) Eq. (DR29), where the elements of X^(i) are independently distributed, the elements of the additive noise n^(i) are independently distributed with a variance set such that SNR = 0 dB, and each element of the model β_model is independent and distributed as ~N(0, 1). [0475] We have utilized the normalized mean squared error (NMSE) as a performance metric, defined in the r-th epoch as set forth in Equation (DR30). [0476] The numerical results provided herein compare the following algorithms: 1. the Scheme-1 protocol; 2. the Scheme-2 protocol; 3. Coded Federated Learning (CFL) with global coding redundancy; 4. baseline Federated Learning; and 5. greedy Federated Learning. [0477] In Fig.25, the NMSE performance is shown by way of graph 340 with respect to the total wall clock time. Clearly, the proposed protocols 1 and 2, that is, Scheme 1 and Scheme 2, respectively, converge faster than the currently known global-redundancy-based Coded Federated Learning scheme. This is primarily due to the smaller initial cost incurred in encoding data at each client. Proposed protocol 2 has a slightly higher error floor due to the coupling of coded data across the clients during composite data formation. Nonetheless, in proposed protocol 2 the MEC server computes a gradient only once in each epoch, whereas in proposed protocol 1 the MEC server needs to compute a gradient from the coded data of each client separately. Finally, all the methods converge much faster compared to the baseline federated learning solution that does not use any coding. [0478] Next, in Fig.252, we compare by way of graph 25200 the convergence rate of the different algorithms in terms of the number of epochs. To this end, we consider a greedy federated learning scheme that waits for the gradients from clients to arrive until the deadline time t* and then updates the model. For reference, we have also shown an ideal federated learning scheme that waits for all the clients in each epoch. Therefore, all schemes except the ideal FL scheme consume the same amount of wall-clock time in each epoch. The ideal FL scheme will have variable epoch times based on the variability of straggler effects in each epoch. Both of the proposed protocols perform very close to the ideal federated learning scheme, and significantly outperform the greedy federated learning algorithm. In addition, the proposed embodiments converge faster than the global-redundancy-based coded federated learning scheme. T. EXAMPLE EDGE COMPUTING IMPLEMENTATIONS [0479] Additional examples of the presently described method, system, and device embodiments include the following, non-limiting implementations. Each of the following non-limiting examples may stand on its own or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure. [0480] As referred to below, an "apparatus of" an edge computing node is meant to refer to a "component" of a "node," such as of a central node, central server, server, client node, client computing node, client device, client or user, as the component is defined above. A client, client node, or client compute/computing node may refer to an edge computing node that is serving as a client device and, in the examples below, may perform training of a global model using local data, which the client may wish to keep private (e.g., from other nodes). The "apparatus" as referred to herein may refer, for example, to a processor such as processor 852 of edge computing node 950 of Fig.9, or to the processor 852 of Fig.9 along with any other components of the edge computing node 950 of Fig.9, or, for example, to circuitry corresponding to a computing node 515 or 523 with virtualized processing capabilities as described in Fig.5. [0481] Examples: [0482] Example AM1 includes a method to be performed at an apparatus of a client computing node in an edge computing environment to provide for coded federated learning (CFL) of a global machine learning (ML) model, the method comprising: accessing a local raw training data set of the client computing node and a raw label set corresponding to the local raw training data set; computing kernel coefficients based on a kernel function, the local raw training data set, and the raw label set; generating a coded training data set from the raw training data set; generating a coded label set based on the kernel coefficients, the kernel function, and the raw label set; and transmitting the coded training data set and coded label set to a central server of the edge computing environment. [0483] Example AM2 includes the subject matter of Example AM1 and/or other Example(s) herein and, optionally, further comprising performing gradient descent to compute the kernel coefficients. [0484] Example AM3 includes the subject matter of Example AM1 or AM2 and/or other Example(s) herein and, optionally, wherein the kernel function is a Gaussian function.
[0485] Example AM4 includes the subject matter of any one of Examples AM1-AM3 and/or other Example(s) herein and, optionally, wherein generating the coded training data set comprises generating a coding matrix based on a distribution, wherein the coded training data set is based on multiplying a matrix representing the raw training data set with the coding matrix. [0486] Example AM5 includes the subject matter of Example AM4 and/or other Example(s) herein and, optionally, wherein the coding matrix is generated based on one of a standard normal distribution and a Bernoulli (1/2) distribution. [0487] Example AM6 includes the subject matter of any one of Examples AM1-AM5 and/or other Example(s) herein and, optionally, further comprising: obtaining, from the central server, a global mean and standard deviation for training data of client computing nodes of the edge computing environment; and normalizing the raw local training data set based on the global mean and standard deviation. [0488] Example AM7 includes the subject matter of Example AM6 and/or other Example(s) herein and, optionally, wherein the kernel coefficients are computed based on the normalized local training data set. [0489] Example AM8 includes the subject matter of any one of Examples AM1-AM7 and/or other Example(s) herein and, optionally, further comprising: obtaining the global ML model from the central server; computing an update to the global ML model using the raw training data set and the raw label set; and transmitting the update to the global ML model to the central server. [0490] Example AM9 includes the subject matter of Example AM8 and/or other Example(s) herein and, optionally, wherein computing the update to the global ML model comprises computing partial gradients to the global ML model via a backpropagation technique, and transmitting the update comprises transmitting the partial gradients to the central server. [0491] Example AA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: compute kernel coefficients based on a kernel function, a local raw training data set of the client computing node and a raw label set corresponding to the local raw training data set; generate a coded training data set from the raw training data set; generate a coded label set based on the kernel coefficients, the kernel function, and the raw label set; and cause the coded training data set and coded label set to be transmitted a central server of the edge computing environment. [0492] Example AA2 includes the subject matter of Example AA1 and/or other Example(s) herein and, optionally, wherein the processor is to perform gradient descent to compute the kernel coefficients. [0493] Example AA3 includes the subject matter of any one of Examples AA1-AA2 and/or other Example(s) herein and, optionally, wherein the kernel function is a Gaussian function. [0494] Example AA4 includes the subject matter of any one of Examples AA1-AA3 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coded training data set by generating a coding matrix based on a distribution, and wherein the coded training data set is based on multiplying a matrix representing the raw training data set with the coding matrix. 
[0495] Example AA5 includes the subject matter of Example AA4 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coding matrix based on one of a standard normal distribution and a Bernoulli (1/2) distribution. [0496] Example AA6 includes the subject matter of any one of Examples AA1-AA5 and/or other Example(s) herein and, optionally, wherein the processor is further to: obtain, from the central server, a global mean and standard deviation for training data of client computing nodes of the edge computing environment; and normalize the raw local training data set based on the global mean and standard deviation. [0497] Example AA7 includes the subject matter of Example AA6 and/or other Example(s) herein and, optionally, wherein the processor is to compute the kernel coefficients based on the normalized local training data set. [0498] Example AA8 includes the subject matter of any one of Examples AA1-AA7 and/or other Example(s) herein and, optionally, wherein the processor is further to: obtain the global ML model from a central server of the edge computing environment; compute an update to the global ML model using the raw training data set and the raw label set; and cause the update to the global ML model to be transmitted to the central server. [0499] Example AA9 includes the subject matter of Example AA8 and/or other Example(s) herein and, optionally, wherein the processor is to compute the update to the global ML model by computing partial gradients to the global ML model via a backpropagation technique, and transmitting the update comprises transmitting the partial gradients to the central server. [0500] Example AA10 includes the subject matter of any one of Examples AA1-AA9 and/or other Example(s) herein and, optionally, wherein the apparatus further comprises a transceiver to provide wireless communication between the apparatus and other edge computing nodes of a wireless edge network. [0501] AMM1 includes a method comprising: obtaining, at a central server node from each of a set of client compute nodes, a coded training data set and a coded label set; computing a gradient update to a global machine learning (ML) model based on the coded training data set and coded label set; obtaining, from at least a portion of the set of client compute nodes, gradient updates to the global ML model computed by the client compute nodes on local training data; aggregating the gradient updates; and updating the global ML model based on the aggregated gradient updates. [0502] AAA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: obtain, at a central server node from each of a set of client compute nodes, a coded training data set and a coded label set; compute a gradient update to a global machine learning (ML) model based on the coded training data set and coded label set; obtain, from at least a portion of the set of client compute nodes, gradient updates to the global ML model computed by the client compute nodes on local training data; aggregate the gradient updates; and update the global ML model based on the aggregated gradient updates. 
[0503] Example BM1 includes a method to be performed at an apparatus of a client computing node in an edge computing environment to provide for coded federated learning (CFL) of a global machine learning (ML) model, the method comprising: accessing a local training data set of the client computing node; applying a Random Fourier Feature Mapping (RFFM) transform to the training data set to yield a transformed training data set; obtaining the global ML model from a central server of the edge computing environment; and iteratively, until the global ML model converges: computing an update to the global ML model using the transformed data set and a raw label set corresponding to the training data set; and transmitting the global ML model update to the central server. [0504] Example BM2 includes the subject matter of Example BM1 and/or another Example and, optionally, wherein the global ML model update comprises a partial gradient obtained via gradient descent. [0505] Example BM3 includes the subject matter of Example BM2 and/or another Example and, optionally, further comprising transmitting operational parameters of the client computing node to the central server. [0506] Example BM4 includes the subject matter of Example BM3 and/or another Example and, optionally, wherein the operational parameters include one or more of: processing capabilities for the client computing node and link quality for a network connection between the client computing node and the central server. [0507] Example BM5 includes the subject matter of any one of Examples BM1-BM3 and/or another Example and, optionally, wherein applying the RFFM transform comprises applying a cosine function element-wise to the training data set. [0508] Example BM6 includes the subject matter of Example BM5 and/or another Example and, optionally, wherein applying the RFFM transform comprises applying the cosine function according to cos( ^^ ^^ ^^ + ^^), where ^^ ^^ represents the training data set, and ^^ and ^^ comprise entries sampled from distributions. [0509] Example BM7 includes the subject matter of Example BM6 and/or another Example 1 and, optionally, wherein ^^ is sampled from the distribution ^^ (0, (2 ^^ 2 ) ^^ ^^ ) and ^^ is sampled from Uniform(0,2 ^^]. [0510] Example BM8 includes the subject matter of any one of Examples BM1-BM7, further comprising receiving, from the central server node, a random number generator for random feature mapping and random feature dimensions, wherein the RFFM transform is based on the random number generator for random feature mapping and random feature dimensions. [0511] Example BA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: access a local training data set of the client computing node; apply a Random Fourier Feature Mapping (RFFM) transform to the training data set to yield a transformed training data set; obtain the global ML model from a central server of the edge computing environment; and iteratively, until the global ML model converges: compute an update to the global ML model using the transformed data set and a raw label set corresponding to the training data set; and cause the global ML model update to be transmitted to the central server. [0512] Example BA2 includes the subject matter of Example BA1 and/or another Example and, optionally, wherein the global ML model update comprises a partial gradient obtained via gradient descent. 
[0513] Example BA3 includes the subject matter of Example BA2 and/or another Example and, optionally, wherein the processor is further to cause operational parameters of the client computing node to be transmitted to the central server. [0514] Example BA4 includes the subject matter of Example BA3 and/or another Example and, optionally, wherein the operational parameters include one or more of: processing capabilities for the client computing node and link quality for a network connection between the client computing node and the central server. [0515] Example BA5 includes the subject matter of any one of Examples BA1-BA3 and/or another Example and, optionally, wherein the processor is to apply the RFFM transform by applying a cosine function element-wise to the training data set. [0516] Example BA6 includes the subject matter of Example BA5 and/or another Example and, optionally, wherein the processor is to apply the cosine function according to cos ( ^^ ^^ ^^ + ^^ ) , where ^^ ^^ represents the training data set, and ^^ and ^^ comprise entries sampled from distributions. [0517] Example BA7 includes the subject matter of Example BA6 and/or another Example 1 and, optionally, wherein ^^ is sampled from the distribution ^^ (0, (2 ^^ 2 ) ^^ ^^ ) and ^^ is sampled from Uniform(0,2 ^^]. [0518] Example BA8 includes the subject matter of any one of Examples BA1-BA7, wherein the processor is further to receive, from the central server node, a random number generator for random feature mapping and random feature dimensions, wherein the RFFM transform is based on the random number generator for random feature mapping and random feature dimensions. [0519] BMM1 includes a method comprising: obtaining, from a set of client compute nodes, gradient updates to a global ML model computed by the client compute nodes on a dataset transformed via a Random Fourier Feature Mapping (RFFM) transform; aggregating the gradient updates; and updating the global ML model based on the aggregated gradient updates. [0520] BAA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: obtain, from a set of client compute nodes, gradient updates to a global ML model computed by the client compute nodes on a dataset transformed via a Random Fourier Feature Mapping (RFFM) transform; aggregate the gradient updates; and update the global ML model based on the aggregated gradient updates. [0521] Example CM1 includes method to be performed at an apparatus of a client computing node in an edge computing environment to provide for coded federated learning (CFL) of a global machine learning (ML) model, the method comprising: accessing a local training data set of the client computing node and a label set corresponding to the local training data set; applying a Random Fourier Feature Mapping (RFFM) transform to the training data set to yield a transformed training data set; estimating a local machine learning (ML) model based on the transformed training data set and the label set; generating a coded training data set from the transformed training data set; generating a coded label set based on the coded training data set and the estimated local ML model; and transmitting the coded training data set and coded label set to a central server of the edge computing environment. 
[0522] Example CM2 includes the subject matter of Example CM1 and/or another Example and, optionally, wherein estimating the local ML model comprises performing linear regression via gradient descent. [0523] Example CM3 includes the subject matter of Example CM1 or CM2 and/or another Example and, optionally, wherein applying the RFFM transform comprises applying a cosine function element-wise to the training data set. [0524] Example CM4 includes the subject matter of Example CM3 and/or another Example and, optionally, wherein applying the RFFM transform comprises applying the cosine function according to cos ( ^^ ^^ ^^ + ^^ ) , where ^^ ^^ represents the training data set, and ^^ and ^^ comprise entries sampled from distributions. [0525] Example CM5 includes the subject matter of Example CM4 and/or another Example 1 and, optionally, wherein ^^ is sampled from the distribution ^^ (0, ( 2 ^^ 2) ^^ ^^ ) and ^^ is sampled from Uniform(0,2 ^^]. [0526] Example CM6 includes the subject matter of any one of Examples CM1-CM5 and/or another Example and, optionally, wherein generating the coded data set comprises generating a coding matrix based on a distribution, wherein the coded data set is based on multiplying a matrix representing the raw data set with the coding matrix. [0527] Example CM7 includes the subject matter of Example CM6 and/or another Example and, optionally, wherein the coding matrix is generated by sampling a uniform distribution U(0, 1). [0528] Example CM8 includes the subject matter of any one of Examples CM1-CM7 and/or another Example and, optionally, further comprising iteratively, until the global ML model converges: computing an update to the global ML model based on the local training data set; and transmitting the update to the global ML model to the central server. [0529] Example CA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: access a local training data set of the client computing node and a label set corresponding to the local training data set; apply a Random Fourier Feature Mapping (RFFM) transform to the training data set to yield a transformed training data set; estimate a local machine learning (ML) model based on the transformed training data set and the label set; generate a coded training data set from the transformed training data set; generate a coded label set based on the coded training data set and the estimated local ML model; and cause the coded training data set and coded label set to be transmitted to a central server of the edge computing environment. [0530] Example CA2 includes the subject matter of Example CA1 and/or another Example and, optionally, wherein the processor is to estimate the local ML model comprises performing linear regression via gradient descent. [0531] Example CA3 includes the subject matter of Example CA1 or CA2 and/or another Example and, optionally, wherein the processor is to apply a cosine function element-wise to the training data set. [0532] Example CA4 includes the subject matter of Example CA3 and/or another Example and, optionally, wherein the processor is to apply the cosine function according to cos ( ^^ ^^ ^^ + ^^), where ^^ ^^ represents the training data set, and ^^ and ^^ comprise entries sampled from distributions. 
[0533] Example CA5 includes the subject matter of Example CA4 and/or another Example 1 and, optionally, wherein ^^ is sampled from the distribution ^^ (0, (2 ^^ 2 ) ^^ ^^ ) and ^^ is sampled from Uniform(0,2 ^^]. [0534] Example CA6 includes the subject matter of any one of Examples CA1-CA5 and/or another Example and, optionally, wherein the processor is to generate the coded data set by generating a coding matrix based on a distribution, wherein the coded data set is based on multiplying a matrix representing the raw data set with the coding matrix. [0535] Example CA7 includes the subject matter of Example CA6 and/or another Example and, optionally, wherein the coding matrix is generated by sampling a uniform distribution U(0, 1). [0536] Example CA8 includes the subject matter of any one of Examples CA1-CA7 and/or another Example and, optionally, wherein the processor is further to iteratively, until the global ML model converges: compute an update to the global ML model based on the local training data set; and cause the update to the global ML model to be transmitted to the central server. [0537] CMM1 includes a method comprising: obtaining, at a central server node from each of a set of client compute nodes, a coded training data set and a coded label set; computing a gradient update to a global machine learning (ML) model based on the coded training data set and coded label set; obtaining, from at least a portion of the set of client compute nodes, gradient updates to the global ML model computed by the client compute nodes on local training data; aggregating the gradient updates; and updating the global ML model based on the aggregated gradient updates. [0538] CAA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: obtain, at a central server node from each of a set of client compute nodes, a coded training data set and a coded label set; compute a gradient update to a global machine learning (ML) model based on the coded training data set and coded label set; obtain, from at least a portion of the set of client compute nodes, gradient updates to the global ML model computed by the client compute nodes on local training data; aggregate the gradient updates; and update the global ML model based on the aggregated gradient updates. [0539] Example DM1 includes a method to be performed at an apparatus of a client computing node in an edge computing environment to provide for coded federated learning (CFL) of a global machine learning (ML) model, the method comprising: accessing a subset of a local training data set of the client computing node; generating a transformed training data subset based on a Random Fourier Feature Mapping (RFFM) transform and the training data subset; generating a coding matrix based on a distribution; generating a weighting matrix; generating a coded training data mini-batch based on multiplying the transformed training data subset with the coding matrix and the weighting matrix; and transmitting the coded training data mini-batch to the central server; wherein the weighting matrix is based on a probability of whether the coded training data mini-batch will be received at the central server. 
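The pipeline of Examples CM1/CA1 — transform the local data, estimate a local model, encode the transformed data, and derive coded labels from the encoded data and the local model — can be sketched as follows. The U(0, 1) coding matrix follows Example CM7; the least-squares solve stands in for the gradient-descent fit of Example CM2; the shapes and helper names are illustrative.

```python
import numpy as np

def coded_pair_from_local_model(X_raw, y_raw, num_coded, rffm, rng):
    """Examples CM1/CA1, sketched: RFFM-transform, fit a local model, encode, derive coded labels."""
    X_t = rffm(X_raw)                                   # transformed training data
    # Local model estimate (Example CM2 uses gradient descent; a least-squares
    # solve is used here as a stand-in for the same linear fit).
    theta, *_ = np.linalg.lstsq(X_t, y_raw, rcond=None)
    # Coding matrix sampled from U(0, 1) (Example CM7), one row per coded point.
    C = rng.uniform(0.0, 1.0, size=(num_coded, X_t.shape[0]))
    X_coded = C @ X_t                                   # coded training data
    y_coded = X_coded @ theta                           # coded labels from coded data and local model
    return X_coded, y_coded

# Example usage with an RFFM similar to the sketch shown earlier.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 50))
y_raw = rng.normal(size=200)
rffm = lambda X: np.cos(X @ rng.normal(size=(50, 128)) + rng.uniform(0, 2 * np.pi, 128))
X_coded, y_coded = coded_pair_from_local_model(X_raw, y_raw, num_coded=64, rffm=rffm, rng=rng)
```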
[0540] Example DM2 includes the subject matter of Example DM1 and/or another Example and, optionally, further comprising iteratively, until the global ML model converges: computing an update to the global ML model based on the training data subset; and transmitting the global ML model update to the central server. [0541] Example DM3 includes the subject matter of Example DM1 and/or another Example and, optionally, wherein the coding matrix comprises elements drawn independently from a standard normal distribution or from an equi-probable Bernoulli distribution. [0542] Example DM4 includes the subject matter of Example DM1 and/or another Example and, optionally, wherein a size of the coding matrix is based on a coding redundancy parameter obtained from the central server. [0543] Example DM5 includes the subject matter of any one of Examples DM1-DM4 and/or another Example and, optionally, wherein applying the RFFM transform comprises applying a cosine function element-wise to the training data set. [0544] Example DM6 includes the subject matter of Example DM5 and/or another Example and, optionally, wherein applying the RFFM transform comprises applying the cosine function according to + ^^ ) , where ^^ ^^ represents the training data set, ^^ is sampled from the distribution ^^ ( and ^^ is sampled from Uniform(0,2 ^^]. [0545] Example DM7 includes the subject matter of any one of Examples DM1-DM6 and/or another Example and, optionally, wherein the probability is determined based on operational parameters of the client computing node. [0546] Example DM8 includes the subject matter of Example DM7 and/or another Example and, optionally, wherein the operational parameters include one or more of computational capabilities of the client computing node and a communication link quality for the client computing node. [0547] Example DM9 includes the subject matter of any one of Examples DM1-DM8 and/or another Example and, optionally, further comprising iteratively, until the global ML model converges: computing an update to the global ML model based on the local training data set; and transmitting the update to the global ML model to the central server. [0548] Example DA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: access a subset of a local training data set of the client computing node; generate a transformed training data subset based on a Random Fourier Feature Mapping (RFFM) transform and the training data subset; generate a coding matrix based on a distribution; generate a weighting matrix; generate a coded training data mini-batch based on multiplying the transformed training data subset with the coding matrix and the weighting matrix; and cause the coded training data mini-batch to be transmitted to the central server; wherein the weighting matrix is based on a probability of whether the coded training data mini-batch will be received at the central server. [0549] Example DA2 includes the subject matter of Example DA1 and/or another Example and, optionally, wherein the processor is further to iteratively, until the global ML model converges: compute an update to the global ML model based on the training data subset; and cause the global ML model update to be transmitted to the central server. 
[0550] Example DA3 includes the subject matter of Example DA1 and/or another Example and, optionally, wherein the coding matrix comprises elements drawn independently from a standard normal distribution or from an equi-probable Bernoulli distribution. [0551] Example DA4 includes the subject matter of Example DA1 and/or another Example and, optionally, wherein a size of the coding matrix is based on a coding redundancy parameter obtained from the central server. [0552] Example DA5 includes the subject matter of any one of Examples DA1-DA4 and/or another Example and, optionally, wherein the processor is to apply a cosine function element- wise to the training data set. [0553] Example DA6 includes the subject matter of Example DA5 and/or another Example and, optionally, wherein the processor is to apply the cosine function according to cos ( ^^ ^^ ^^ + 1 ^^ ) , where ^^ ^^ represents the training data set, ^^ is sampled from the distribution ^^ (0, (2 ^^ 2 ) and ^^ is sampled from Uniform(0,2 ^^]. [0554] Example DA7 includes the subject matter of any one of Examples DA1-DA6 and/or another Example and, optionally, wherein the processor is to determine the probability based on operational parameters of the client computing node. [0555] Example DA8 includes the subject matter of Example DA7 and/or another Example and, optionally, wherein the operational parameters include one or more of computational capabilities of the client computing node and a communication link quality for the client computing node. [0556] Example DA9 includes the subject matter of any one of Examples DA1-DA8 and/or another Example and, optionally, wherein the processor is further to iteratively, until the global ML model converges: compute an update to the global ML model based on the local training data set; and cause the update to the global ML model to be transmitted to the central server. [0557] Example DA10 includes the subject matter of any one of Examples DA1-DA9 and/or another Example and, optionally, wherein the apparatus further comprises a transceiver to provide wireless communication between the apparatus and other edge computing nodes of a wireless edge network. [0558] DMM1 includes a method comprising: obtaining, at a central server node from each of a set of client compute nodes, a coded training data mini-batch; computing a gradient update to a global machine learning (ML) model based on the coded training data mini-batch; obtaining, from at least a portion of the set of client compute nodes, gradient updates to the global ML model computed by the client compute nodes on local training data; aggregating the gradient updates; and updating the global ML model based on the aggregated gradient updates. [0559] DAA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: obtain, at a central server node from each of a set of client compute nodes, a coded training data mini-batch; compute a gradient update to a global machine learning (ML) model based on the coded training data mini-batch; obtain, from at least a portion of the set of client compute nodes, gradient updates to the global ML model computed by the client compute nodes on local training data; aggregate the gradient updates; and update the global ML model based on the aggregated gradient updates. 
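Examples DM1/DA1 describe a coded mini-batch built by multiplying a transformed data subset with a coding matrix and a weighting matrix, where the weighting reflects the probability that the mini-batch actually reaches the server. A minimal sketch follows; the inverse-probability form of the weighting and the standard-normal coding matrix (one of the two options in Example DM3) are assumptions, as is deriving the number of coded rows from the redundancy parameter of Example DM4.

```python
import numpy as np

def coded_minibatch(X_subset, redundancy, p_received, rng):
    """Build one coded training data mini-batch (Examples DM1/DA1, sketched).

    X_subset: RFFM-transformed subset of the local data, shape (b, f).
    redundancy: coding redundancy parameter obtained from the server (Example DM4).
    p_received: estimated probability that this mini-batch is received at the server,
                e.g. derived from the client's compute and link statistics (Examples DM7-DM8).
    """
    b = X_subset.shape[0]
    num_coded = max(int(round(redundancy * b)), 1)
    G = rng.normal(size=(num_coded, b))        # coding matrix, standard-normal entries (Example DM3)
    W = (1.0 / p_received) * np.eye(b)         # weighting matrix; inverse-probability form is an assumption
    return G @ W @ X_subset

# Example: a 32-point transformed mini-batch with feature dimension 128.
rng = np.random.default_rng(0)
X_subset = rng.normal(size=(32, 128))
batch = coded_minibatch(X_subset, redundancy=0.5, p_received=0.7, rng=rng)
```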
[0560] Example EA1 includes an apparatus of an edge computing node to be operated in an edge computing network, the apparatus including an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to perform rounds of a federated machine learning training within an edge computing network including: grouping clients from a plurality of available clients into respective sets of clients based on compute capabilities and communication capabilities of each of the available clients; selecting a candidate set of clients to participate in said round of the federated machine learning training, the candidate set of clients being from the respective sets of clients and further being selected based on a round robin approach, or based at least on one of a data distribution of each client of the candidate set or a determination that a global model of the machine learning training at the edge computing node has reached a minimum accuracy threshold; causing the global model to be sent to the candidate set of clients; and processing information on updated model weights for the federated machine learning training, the information being from clients of the candidate set of clients; and updating the global model based on processing the information. [0561] Example EA2 includes the subject matter of Example EA1, and optionally, wherein the compute capabilities include a compute rate and the communication capabilities include an uplink communication time to the edge computing node. [0562] Example EA3 includes the subject matter of any one of Examples EA1-EA2, and optionally, the processor further to cause transmission of a client capability request to the available clients prior to grouping, and processing client capability reports from the available clients on the compute capabilities and the communication capabilities of each of the available clients, the client capability reports sent in response to the client capability request, processing client capability reports including computing a total upload time for the available clients. [0563] Example EA4 includes the subject matter of Example EA3, and optionally, wherein the client capability reports further include information on a number of training examples at each of the available clients. [0564] Example EA5 includes the subject matter of any one of Examples EA3-EA4, and optionally, the process to further maintain a time since a last processing of the information from the available clients, cause transmission of a client capability update request to the available clients based on the time since the last processing prior to a next grouping, and process updated client capability reports from the available clients in response to the update request on the compute capabilities and the communication capabilities of each of the available clients, processing client capability reports including computing a total updated upload time for the available clients. [0565] Example EA6 includes the subject matter of any one of Examples EA3-EA4, and optionally, wherein the client capability reports from the available clients are one-time client capability reports, the processor further to estimate current client capabilities of the available clients based on the client capability reports. 
[0566] Example EA7 includes the subject matter of any one of Examples EA1-EA6, and optionally, the processor to further select a candidate set of clients is based on a determination as to whether the data distribution of each client of the candidate set shows non-independent and identically distributed data (non-i.i.d.). [0567] Example EA8 includes the subject matter of any one of Examples EA1-EA7, and optionally, wherein the edge computing node is a server, such as a MEC server. [0568] Example EM1 includes a method to be performed at an apparatus of an edge computing node to be operated in an edge computing network, the method including performing a round of a federated machine learning training within the edge computing network including: grouping clients from a plurality of available clients into respective sets of clients based on compute capabilities and communication capabilities of each of the available clients; selecting a candidate set of clients to participate in said round of the federated machine learning training, the candidate set of clients being from the respective sets of clients and further being selected based on a round robin approach, or based at least on one of a data distribution of each client of the candidate set or a determination that a global model of the machine learning training at the edge computing node has reached a minimum accuracy threshold; causing the global model to be sent to the candidate set of clients; and processing information on updated model weights for the federated machine learning training, the information being from clients of the candidate set of clients; and updating the global model based on processing the information. [0569] Example EM2 includes the subject matter of Example EM1, and optionally, wherein the compute capabilities include a compute rate and the communication capabilities include one of an uplink communication time to the edge computing node or an uplink communication time and a downlink communication time with the edge computing node. [0570] Example EM3 includes the subject matter of any one of Examples EM1-EM2, and optionally, wherein the method further includes causing transmission of a client capability request to the available clients prior to grouping, and processing client capability reports from the available clients on the compute capabilities and the communication capabilities of each of the available clients, the client capability reports sent in response to the client capability request, processing client capability reports including computing a total upload time for the available clients. [0571] Example EM4 includes the subject matter of Example EM3, wherein the client capability reports further include information on a number of training examples at each of the available clients. [0572] Example EM5 includes the subject matter of any one of Examples EM3-EM4, and optionally, further including maintaining a time since a last processing of the information from the available clients, causing transmission of a client capability update request to the available clients based on the time since the last processing prior to a next grouping, and processing updated client capability reports from the available clients in response to the update request on the compute capabilities and the communication capabilities of each of the available clients, processing client capability reports including computing a total updated upload time for the available clients. 
[0573] Example EM6 includes the subject matter of any one of Examples EM3-EM4, wherein the client capability reports from the available clients are one-time client capability reports, the method further including estimating current client capabilities of the available clients based on the client capability reports. [0574] Example EM7 includes the subject matter of any one of Examples EM1-EM6, wherein selecting a candidate set of clients is based on a determination as to whether the data distribution of each client of the candidate set shows non-independent and identically distributed data (non-i.i.d.). [0575] Example EM8 includes the subject matter of any one of Examples EM1-EM7, and/or some example(s) herein, and optionally, wherein the edge computing node is a server, such as a MEC server. [0576] Example EAA1 includes an apparatus of a first edge computing node to be operated in an edge computing network, the apparatus including an interconnect interface to connect the apparatus to one or more components of the first edge computing node, and a processor to perform rounds of federated machine learning training including: processing a capability request from a second edge computing node of the edge computing network; generating a capability report on compute capabilities and communication capabilities of the first edge computing node in response to the capability request; causing transmission of the capability report to the second edge computing node; after causing transmission, processing information on a global model from the second edge computing node to initialize training parameters of a federated machine learning training round with the second edge computing node; and reporting updated model weights based on the global model for the federated machine learning training round. [0577] Example EAA2 includes the subject matter of Example EAA1, and optionally, wherein the compute capabilities include a compute rate and the communication capabilities include one of an uplink communication time to the second edge computing node or both an uplink communication time and a downlink communication time with the second edge computing node. [0578] Example EAA3 includes the subject matter of any one of Examples EAA1-EAA2, and optionally, wherein the capability report further includes information on a number of training examples at the client. [0579] Example EAA4 includes the subject matter of any one of Examples EAA1-EAA3, the processor further to: process a capability update request from the second edge computing node; and generate an updated capability report in response to the update request on the compute capabilities and the communication capabilities of the first edge computing node. [0580] Example EAA5 includes the subject matter of any one of Examples EAA1-EAA4, the processor further to determine the updated model weights using a gradient-based approach. [0581] Example EAA6 includes the subject matter of any one of Examples EAA1-EAA5, and optionally, wherein a data distribution of the first edge computing node corresponds to a non-independent and identically distributed (non-i.i.d.) data distribution. [0582] Example EAA7 includes the subject matter of any one of Examples EAA1-EAA6, and optionally, wherein the first edge computing node is a client, and the second edge computing node is a server such as a MEC server.
[0583] Example EMM1 includes a method to be performed at an apparatus of a first edge computing node in an edge computing network, the method including performing a round of a federated machine learning training within the edge computing network including: processing a capability request from a second edge computing node of the edge computing network; generating a capability report on compute capabilities and communication capabilities of the first edge computing node in response to the capability request; causing transmission of the capability report to the second edge computing node; after causing transmission, processing information on a global model from the second edge computing node to initialize training parameters of a federated machine learning training round with the second edge computing node; and reporting updated model weights based on the global model for the federated machine learning training round. [0584] Example EMM2 includes the subject matter of Example EMM1, and optionally, wherein the compute capabilities include a compute rate and the communication capabilities include an uplink communication time to the second edge computing node. [0585] Example EMM3 includes the subject matter of any one of Examples EMM1-EMM2, and optionally, wherein the capability report further includes information on a number of training examples at the client. [0586] Example EMM4 includes the subject matter of any one of Examples EMM1-EMM3, further including: processing a capability update request from the second edge computing node; and generating an updated capability report in response to the update request on the compute capabilities and the communication capabilities of the first edge computing node. [0587] Example EMM5 includes the subject matter of any one of Examples EMM1-EMM4, further including determining the updated model weights using a gradient-based approach. [0588] Example EMM6 includes the subject matter of any one of Examples EMM1-EMM5, and optionally, wherein a data distribution of the first edge computing node corresponds to a non-independent and identically distributed (non-i.i.d.) data distribution. [0589] Example EMM7 includes the subject matter of any one of Examples EMM1-EMM6, and optionally, wherein the first edge computing node is a client, and the second edge computing node is a server such as a MEC server.
[0590] Example FA1 includes an apparatus of an edge computing node to be operated in an edge computing network, the apparatus including an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to perform rounds of federated machine learning training including: processing capability reports from a plurality of clients of the edge computing network, the reports including at least one of information based on training losses of respective ones of said clients for an epoch of the training, or information based on gradients of respective ones of said clients with respect to a pre-activation output of respective ones of said clients; rank ordering the clients based on one of their training losses or their gradients; selecting a candidate set of clients from the plurality of clients for a next epoch of the federated machine learning training, selecting the candidate set including selecting a number of clients with highest training losses or highest gradients as the candidate set; causing an updated global model to be sent to the candidate set of clients for the next epoch; and performing the federated machine learning training on the candidate set of clients. [0591] Example FA2 includes the subject matter of Example FA1, the processor to further, prior to processing the capability reports, cause dissemination, to the plurality of clients of the edge computing network, of a global model corresponding to an epoch of a federated machine learning training. [0592] Example FA3 includes the subject matter of any one of Examples FA1-FA2, and optionally, wherein selecting a number of clients with highest training losses or highest gradients as the candidate set includes selecting a first number of clients with highest training losses or highest gradients as an intermediate set, and selecting a second number of clients from the intermediate set based on respective upload times of the first number of clients, the second number of clients corresponding to the candidate set. [0593] Example FA4 includes the subject matter of any one of Examples FA1-FA3, and optionally, wherein the capability reports further include compute rates and uplink communication times. [0594] Example FA5 includes the subject matter of any one of Examples FA1-FA4, the processor to further cause transmission of a capability request to the available clients. [0595] Example FA6 includes the subject matter of any one of Examples FA1-FA5, and optionally, wherein the capability reports further include information on a number of training examples at each of the available clients. [0596] Example FA7 includes the subject matter of any one of Examples FA1-FA6, the processor to further maintain a time since a last processing of the information from the available clients, cause transmission of a capability update request to the available clients based on the time since the last processing prior to a next grouping, and process updated capability reports from the available clients in response to the update request on the compute capabilities and the communication capabilities of each of the available clients, processing capability reports including computing a total updated upload time for the available clients. [0597] Example FA8 includes the subject matter of any one of Examples FA1-FA7, and optionally, wherein the capability reports from the available clients are one-time capability reports, the processor to further estimate current client capabilities of the available clients based on the capability reports.
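As a rough illustration of the loss-ranked selection in Examples FA1 and FA3, the sketch below ranks clients by reported training loss, keeps the highest-loss candidates as an intermediate set, and then filters that set by upload time. The report tuple layout and the thresholds are assumptions for illustration, not part of the examples.

# Sketch of loss-ranked candidate selection (Examples FA1/FA3); field layout is assumed.
from typing import List, Tuple

def select_by_loss(reports: List[Tuple[int, float, float]],
                   first_n: int, final_k: int) -> List[int]:
    """reports: (client_id, training_loss, upload_time).
    Keep the first_n highest-loss clients, then the final_k fastest uploaders among them."""
    by_loss = sorted(reports, key=lambda r: r[1], reverse=True)[:first_n]   # intermediate set
    by_upload = sorted(by_loss, key=lambda r: r[2])[:final_k]               # upload-time filter
    return [cid for cid, _, _ in by_upload]

# Example: 5 clients, keep top-3 by loss, then the 2 fastest of those.
# select_by_loss([(0, 1.2, 5.0), (1, 0.4, 1.0), (2, 2.0, 9.0), (3, 1.7, 2.0), (4, 0.9, 3.0)], 3, 2)
# -> [3, 0]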
[0598] Example FA9 includes the subject matter of any one of Examples FA1-FA8, and optionally, wherein selecting a candidate set of clients is based on a determination as to whether the data distribution of each client of the candidate set shows non-independent and identically distributed data (non-i.i.d.). [0599] Example FA10 includes the subject matter of any one of Examples FA1-FA9, the processor to further select a candidate set of clients from the plurality of clients for one or more epochs subsequent to said next epoch based on the training losses. [0600] Example FA11 includes the subject matter of any one of Examples FA1-FA10, and optionally, wherein the edge computing node includes a server, such as a MEC server. [0601] Example FM1 includes a method to perform federated machine learning training at an apparatus of an edge computing node to be operated in an edge computing network, the method including: processing capability reports from a plurality of clients of the edge computing network, the reports including at least one of information based on training losses of respective ones of said clients for an epoch of the training, or information based on gradients of respective ones of said clients with respect to a pre-activation output of respective ones of said clients; rank ordering the clients based on one of their training losses or their gradients; selecting a candidate set of clients from the plurality of clients for a next epoch of the federated machine learning training, selecting the candidate set including selecting a number of clients with highest training losses or highest gradients as the candidate set; causing an updated global model to be sent to the candidate set of clients for the next epoch; and performing the federated machine learning training on the candidate set of clients. [0602] Example FM2 includes the subject matter of Example FM1, further including, prior to processing the capability reports, causing dissemination, to the plurality of clients of the edge computing network, of a global model corresponding to an epoch of a federated machine learning training. [0603] Example FM3 includes the subject matter of any one of Examples FM1-FM2, and optionally, wherein selecting a number of clients with highest training losses or highest gradients as the candidate set includes selecting a first number of clients with highest training losses or highest gradients as an intermediate set, and selecting a second number of clients from the intermediate set based on respective upload times of the first number of clients, the second number of clients corresponding to the candidate set. [0604] Example FM4 includes the subject matter of any one of Examples FM1-FM3, and optionally, wherein the capability reports further include compute rates and uplink communication times. [0605] Example FM5 includes the subject matter of any one of Examples FM1-FM4, and optionally, wherein the method further includes causing transmission of a capability request to the available clients. [0606] Example FM6 includes the subject matter of any one of Examples FM1-FM5, and optionally, wherein the capability reports further include information on a number of training examples at each of the available clients.
[0607] Example FM7 includes the subject matter of any one of Examples FM1-FM6, further including maintaining a time since a last processing of the information from the available clients, causing transmission of a capability update request to the available clients based on the time since the last processing prior to a next grouping, and processing updated capability reports from the available clients in response to the update request on the compute capabilities and the communication capabilities of each of the available clients, processing capability reports including computing a total updated upload time for the available clients. [0608] Example FM8 includes the subject matter of any one of Examples FM1-FM7, and optionally, wherein the capability reports from the available clients are one-time capability reports, the method further including estimating current client capabilities of the available clients based on the capability reports. [0609] Example FM9 includes the subject matter of any one of Examples FM1-FM8, and optionally, wherein selecting a candidate set of clients is based on a determination as to whether the data distribution of each client of the candidate set shows non-independent and identically distributed data (non-i.i.d.). [0610] Example FM10 includes the subject matter of any one of Examples FM1-FM9, further including selecting a candidate set of clients from the plurality of clients for one or more epochs subsequent to said next epoch based on the training losses. [0611] Example FM11 includes the subject matter of any one of Examples FM1-FM10, and optionally, wherein the edge computing node includes a server, such as a MEC server. [0612] Example FAA1 includes an apparatus of a first edge computing node to be operated in an edge computing network, including an interconnect interface to connect the apparatus to one or more components of the first edge computing node, and a processor to: encode, for transmission to a second edge computing node of the edge computing network, a capability report, the second edge computing node to perform rounds of federated machine learning training, the report including at least one of information based on a training loss of the first edge computing node for an epoch of the training, or information based on a gradient of the first edge computing node with respect to a pre-activation output of the first edge computing node; cause transmission of the capability report; and for a next epoch of the federated machine learning training, decode an updated global model from the second edge computing node. [0613] Example FAA2 includes the subject matter of Example FAA1, the processor to further, prior to causing transmission of the capability report, decode a global model corresponding to an epoch of the federated machine learning training. [0614] Example FAA3 includes the subject matter of any one of Examples FAA1-FAA2, wherein the capability report further includes a compute rate, and at least one of an uplink communication time or a downlink communication time for communication with the second edge computing node. [0615] Example FAA4 includes the subject matter of any one of Examples FAA1-FAA3, the processor to decode a capability request from the second edge computing node prior to encoding the capability report. [0616] Example FAA5 includes the subject matter of any one of Examples FAA1-FAA4, wherein the capability report further includes information on a number of training examples at the first edge computing node.
[0617] Example FAA6 includes the subject matter of any one of Examples FAA1-FAA5, the processor to further: decode a capability update request from the second edge computing node; encode for transmission to the second edge computing node an updated capability report in response to the update request, the updated capability report including information on compute capabilities and communication capabilities of the first edge computing node; and cause transmission of the updated capability report to the second edge computing node. [0618] Example FAA7 includes the subject matter of any one of Examples FAA1-FAA6, wherein the first edge computing node includes a mobile client computing node. [0619] Example FMM1 includes a method to be performed at an apparatus of a first edge computing node in an edge computing network, including: encoding, for transmission to a second edge computing node of the edge computing network, a capability report, the second edge computing node to perform rounds of federated machine learning training, the report including at least one of information based on a training loss of the first edge computing node for an epoch of the training, or information based on a gradient of the first edge computing node with respect to a pre-activation output of the first edge computing node; causing transmission of the capability report; and for a next epoch of the federated machine learning training, decoding an updated global model from the second edge computing node. [0620] Example FMM2 includes the subject matter of Example FMM1, including, prior to causing transmission of the capability report, decoding a global model corresponding to an epoch of the federated machine learning training. [0621] Example FMM3 includes the subject matter of any one of Examples FMM1-FMM2, wherein the capability report further includes a compute rate, and at least one of an uplink communication time or a downlink communication time for communication with the second edge computing node. [0622] Example FMM4 includes the subject matter of any one of Examples FMM1-FMM3, further including decoding a capability request from the second edge computing node prior to encoding the capability report. [0623] Example FMM5 includes the subject matter of any one of Examples FMM1-FMM4, wherein the capability report further includes information on a number of training examples at the first edge computing node. [0624] Example FMM6 includes the subject matter of any one of Examples FMM1-FMM5, further including: decoding a capability update request from the second edge computing node; encoding for transmission to the second edge computing node an updated capability report in response to the update request, the updated capability report including information on compute capabilities and communication capabilities of the first edge computing node; and causing transmission of the updated capability report to the second edge computing node. [0625] Example FMM7 includes the subject matter of any one of Examples FMM1-FMM6, wherein the first edge computing node includes a mobile client computing node.
[0626] Example GA1 includes an apparatus of an edge computing node to be operated in an edge computing network, the apparatus including an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to perform rounds of federated machine learning training including: causing dissemination, to a plurality of clients of the edge computing network, of a target data distribution at the edge computing node for federated machine learning training; processing client reports from the clients, each of the respective reports being based on a divergence between a local data distribution of a respective one of the clients and the target data distribution; assigning a respective weight to each respective divergence based on a size of the divergence, with higher divergences having higher weights; selecting a candidate set of clients from the plurality of clients for an epoch of the federated machine learning training, selecting the candidate set including using a round robin approach based on the weights; causing a global model to be sent to the candidate set of clients; and performing the federated machine learning training on the candidate set of clients. [0627] Example GA2 includes the subject matter of Example GA1, and optionally, wherein the divergence corresponding to each client is based on one of a Kullback-Leibler divergence or a probability distribution distance between a local data distribution of said each client and the target data distribution. [0628] Example GA3 includes the subject matter of any one of Examples GA1-GA2, and optionally, wherein selecting a candidate set of clients is based on a determination as to whether the data distribution of each client of the candidate set shows non-independent and identically distributed data (non-i.i.d.). [0629] Example GA4 includes the subject matter of any one of Examples GA1-GA3, and optionally, wherein the edge computing node is a server such as a MEC server.
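A minimal sketch of the divergence-weighted selection in Examples GA1-GA2 follows: each client's local label distribution is compared to the target distribution with a KL divergence, larger divergences get larger weights, and candidates are drawn in proportion to those weights. The smoothing constant and the use of weighted sampling in place of the weighted round-robin named in the example are simplifying assumptions.

# Sketch of KL-divergence weighting for client selection (Examples GA1-GA2).
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL(p || q) between a client's local label distribution p and the target q."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def weighted_candidates(local_dists, target_dist, num_select, rng=None):
    """Higher weight for clients whose data diverges more from the target; sample
    the candidate set with probability proportional to those weights (stand-in for
    the example's weighted round-robin)."""
    rng = rng or np.random.default_rng()
    weights = np.array([kl_divergence(d, target_dist) for d in local_dists]) + 1e-12
    probs = weights / weights.sum()
    return rng.choice(len(local_dists), size=num_select, replace=False, p=probs)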
[0630] Example GAA1 includes an apparatus of an edge computing node to be operated in an edge computing network, the apparatus including an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to, for each global epoch of a federated machine learning training within the edge computing network: cause dissemination, to a plurality of clients of the edge computing network, of a global model; process weighted loss information from each of the clients; determine a probability distribution q for each of the clients based on the weighted loss information from corresponding ones of said each of the clients, and optionally, wherein q is further based on an amount of data at said each of the clients, and a weight matrix for the federated machine learning training for said each of the clients for the global epoch; select a candidate set of clients from the plurality of clients for a next global epoch of the federated machine learning training, selecting the candidate set including selecting a number of clients with the highest probability distribution q as the candidate set; process, from each of the clients, for each local epoch of the global epoch, a local weight update w_{k,t+e} = w_{k,t+e-1} - η*g_{k,t+e}*p_k/q_k, where w_{k,t+e} corresponds to a weight update of client k at global epoch t and local epoch e, η is a learning rate, g_{k,t+e} corresponds to a gradient estimate for client k at global epoch t and local epoch e, p_k corresponds to an original sampling distribution of client k, and q_k is the probability distribution for client k; and determine a global weight based on local weight updates from each of the clients. [0631] Example GAA2 includes the subject matter of Example GAA1, further including processing capability reports from the clients, the reports including at least one of communication time or compute time for the epoch, and determining update times for each of the clients based on respective ones of the reports. [0632] Example GAA3 includes the subject matter of any one of Examples GAA1-GAA2, and optionally, wherein the edge computing node includes a server, such as a MEC server. [0633] Example GM1 includes a method to be performed at an apparatus of an edge computing node in an edge computing network, the method including performing rounds of federated machine learning training including: causing dissemination, to a plurality of clients of the edge computing network, of a target data distribution at the edge computing node for federated machine learning training; processing client reports from the clients, each of the respective reports being based on a divergence between a local data distribution of a respective one of the clients and the target data distribution; assigning a respective weight to each respective divergence based on a size of the divergence, with higher divergences having higher weights; selecting a candidate set of clients from the plurality of clients for an epoch of the federated machine learning training, selecting the candidate set including using a round robin approach based on the weights; causing a global model to be sent to the candidate set of clients; and performing the federated machine learning training on the candidate set of clients.
[0634] Example GM2 includes the subject matter of Example GM1, and optionally, wherein the divergence corresponding to each client is based on one of a Kullback-Leibler divergence or a probability distribution distance between a local data distribution of said each client and the target data distribution. [0635] Example GM3 includes the subject matter of any one of Examples GM1-GM2, and optionally, wherein selecting a candidate set of clients is based on a determination as to whether the data distribution of each client of the candidate set shows non-independent and identically distributed data (non-i.i.d.). [0636] Example GM4 includes the subject matter of any one of Examples GM1-GM3, and optionally, wherein the edge computing node is a server such as a MEC server. [0637] Example GMM1 includes a method to be performed at an apparatus of an edge computing node to be operated in an edge computing network, the method including, for each global epoch of a federated machine learning training within the edge computing network: causing dissemination, to a plurality of clients of the edge computing network, of a global model; processing weighted loss information from each of the clients; determining a probability distribution q for each of the clients based on the weighted loss information from corresponding ones of said each of the clients, and optionally, wherein q is further based on an amount of data at said each of the clients, and a weight matrix for the federated machine learning training for said each of the clients for the global epoch; selecting a candidate set of clients from the plurality of clients for a next global epoch of the federated machine learning training, selecting the candidate set including selecting a number of clients with the highest probability distribution q as the candidate set; processing, from each of the clients, for each local epoch of the global epoch, a local weight update w_{k,t+e} = w_{k,t+e-1} - η*g_{k,t+e}*p_k/q_k, where w_{k,t+e} corresponds to a weight update of client k at global epoch t and local epoch e, η is a learning rate, g_{k,t+e} corresponds to a gradient estimate for client k at global epoch t and local epoch e, p_k corresponds to an original sampling distribution of client k, and q_k is the probability distribution for client k; and determining a global weight based on local weight updates from each of the clients. [0638] Example GMM2 includes the subject matter of Example GMM1, further including processing capability reports from the clients, the reports including at least one of communication time or compute time for the epoch, and determining update times for each of the clients based on respective ones of the reports. [0639] Example GMM3 includes the subject matter of any one of Examples GMM1-GMM2, and optionally, wherein the edge computing node includes a server, such as a MEC server.
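The local update written out in Examples GAA1 and GMM1, w_{k,t+e} = w_{k,t+e-1} - η*g_{k,t+e}*p_k/q_k, scales each gradient step by the ratio of the client's original sampling probability p_k to its selection probability q_k. A hedged sketch of one global epoch of such local updates follows; the gradient function and tensor shapes are assumptions.

# Sketch of the importance-weighted local update of Examples GAA1/GMM1.
import numpy as np

def local_epoch_update(w_prev: np.ndarray, grad: np.ndarray,
                       lr: float, p_k: float, q_k: float) -> np.ndarray:
    """w_{k,t+e} = w_{k,t+e-1} - lr * g_{k,t+e} * (p_k / q_k).
    p_k: client k's original sampling probability; q_k: its selection probability."""
    return w_prev - lr * grad * (p_k / q_k)

def run_local_epochs(w_global, gradient_fn, lr, p_k, q_k, num_local_epochs):
    """Run E local epochs at client k starting from the received global weights
    (gradient_fn is a stand-in for the client's local gradient computation)."""
    w = w_global.copy()
    for _ in range(num_local_epochs):
        w = local_epoch_update(w, gradient_fn(w), lr, p_k, q_k)
    return w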
[0640] Example GAAA1 includes an apparatus of a first edge computing node to be operated in an edge computing network, the apparatus including an interconnect interface to connect the apparatus to one or more components of the first edge computing node, and a processor to: decode a first message from a second edge computing node, the first message including information on a target data distribution for federated machine learning training; determine the target data distribution from the first message; encode a client report for transmission to the second edge computing node based on a divergence between a local data distribution of the first edge computing node and the target data distribution; cause transmission of the client report to the second edge computing node; decode a second message from the second edge computing node, the second message including information on a global model for the federated machine learning training; and update a local gradient at the first edge computing node based on the global model. [0641] Example GAAA2 includes the subject matter of Example GAAA1, wherein a data distribution at the apparatus corresponds to non-independent and identically distributed data (non-i.i.d.). [0642] Example GAAA3 includes the subject matter of any one of Examples GAAA1-GAAA2, wherein the first edge computing node is a mobile client computing node. [0643] Example GAAAA1 includes an apparatus of a first edge computing node to be operated in an edge computing network, the apparatus including an interconnect interface to connect the apparatus to one or more components of the first edge computing node, and a processor to: decode a message from a second edge computing node including information on a global model associated with each global epoch of a federated machine learning training performed by the second edge computing node in the edge computing network; encode weighted loss information for transmission to the second edge computing node; cause transmission of the weighted loss information to the second edge computing node; encode for transmission to the second edge computing node a local weight update w_{k,t+e} = w_{k,t+e-1} - η*g_{k,t+e}*p_k/q_k, where w_{k,t+e} corresponds to a weight update of the first edge computing node at global epoch t and local epoch e, η is a learning rate, g_{k,t+e} corresponds to a gradient estimate for the first edge computing node at global epoch t and local epoch e, p_k corresponds to an original sampling distribution of the first edge computing node, and q_k is the probability distribution for the first edge computing node; and cause transmission of the local weight update to the second edge computing node. [0644] Example GAAAA2 includes the subject matter of Example GAAAA1, further including encoding for transmission to the second edge computing node a capability report, the report including at least one of communication time or compute time for the local epoch. [0645] Example GAAAA3 includes the subject matter of any one of Examples GAAAA1-GAAAA2, wherein the first edge computing node is a mobile client computing node.
[0646] Example GMMM1 includes a method to be performed at an apparatus of a first edge computing node in an edge computing network, the method including: decoding a first message from a second edge computing node, the first message including information on a target data distribution for federated machine learning training; determining the target data distribution from the first message; encoding a client report for transmission to the second edge computing node based on a divergence between a local data distribution of the first edge computing node and the target data distribution; causing transmission of the client report to the second edge computing node; decoding a second message from the second edge computing node, the second message including information on a global model for the federated machine learning training; and updating a local gradient at the first edge computing node based on the global model. [0647] Example GMMM2 includes the subject matter of Example GMMM1, wherein a data distribution at the apparatus corresponds to non-independent and identically distributed data (non-i.i.d.). [0648] Example GMMM3 includes the subject matter of any one of Examples GMMM1-GMMM2, wherein the first edge computing node is a mobile client computing node. [0649] Example GMMMM1 includes a method to be performed at an apparatus of a first edge computing node in an edge computing network, the method including: decoding a message from a second edge computing node including information on a global model associated with each global epoch of a federated machine learning training performed by the second edge computing node in the edge computing network; encoding weighted loss information for transmission to the second edge computing node; causing transmission of the weighted loss information to the second edge computing node; encoding for transmission to the second edge computing node a local weight update w_{k,t+e} = w_{k,t+e-1} - η*g_{k,t+e}*p_k/q_k, where w_{k,t+e} corresponds to a weight update of the first edge computing node at global epoch t and local epoch e, η is a learning rate, g_{k,t+e} corresponds to a gradient estimate for the first edge computing node at global epoch t and local epoch e, p_k corresponds to an original sampling distribution of the first edge computing node, and q_k is the probability distribution for the first edge computing node; and [0650] causing transmission of the local weight update to the second edge computing node. [0651] Example GMMMM2 includes the subject matter of Example GMMMM1, further including encoding for transmission to the second edge computing node a capability report, the report including at least one of communication time or compute time for the local epoch. [0652] Example GMMMM3 includes the subject matter of any one of Examples GMMMM1-GMMMM2, wherein the first edge computing node is a mobile client computing node.
[0653] Example HM1 includes a method to be performed at an apparatus of a central server node in an edge computing environment to provide for coded federated learning (CFL) of a global machine learning (ML) model, the method comprising: obtaining, from each of a set of client computing nodes of the edge computing environment, a maximum coding redundancy value for the CFL; determining a coding redundancy value based on the maximum coding redundancy values received from the edge computing devices; determining an epoch time and a number of data points to be processed at each edge computing device during each coded federated learning epoch based on the determined coding redundancy value; and transmitting the determined coding redundancy value, epoch time, and number of data points to be processed at each edge computing device to the set of edge computing devices. [0654] Example HM2 includes the subject matter of Example HM1 and/or other Example(s) herein and, optionally, wherein determining the coding redundancy value comprises selecting a minimum value of the set of maximum coding redundancy values received from the edge computing devices. [0655] Example HM3 includes the subject matter of Example HM1 and/or other Example(s) herein and, optionally, wherein determining the coding redundancy value comprises selecting a maximum value of the set of maximum coding redundancy values received from the edge computing devices. [0656] Example HM4 includes the subject matter of any one of Examples HM1-HM3 and/or other Example(s) herein and, optionally, further comprising: receiving one or more coded data sets based on the transmitted determined epoch time and number of data points to be processed at each edge computing device, the coded data sets based on raw data sets of the edge computing devices; determining a gradient update to a global machine learning model based on the coded data sets; and updating the global ML model based on the gradient update. [0657] Example HM5 includes the subject matter of Example HM4 and/or other Example(s) herein and, optionally, wherein the one or more coded data sets includes coded data sets obtained from the edge computing devices. [0658] Example HM6 includes the subject matter of Example HM4 and/or other Example(s) herein and, optionally, wherein the one or more coded data sets includes a coded data set received from a trusted server of the edge computing environment. [0659] Example HM7 includes the subject matter of any one of Examples HM4-HM6 and/or other Example(s) herein and, optionally, further comprising receiving gradient updates to the global ML model from the edge computing devices, wherein updating the global model is further based on the gradient updates received from the edge computing devices.
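Examples HM1-HM3 have the central server pick a single coding redundancy value from the per-client maxima, either the minimum (most conservative) or the maximum of the reported values, and then derive an epoch time and per-client workload. The sketch below shows that reconciliation step; the workload formula is an illustrative assumption, not one stated in the examples.

# Sketch of the server-side redundancy selection in Examples HM1-HM3 (formula assumed).
from typing import Dict, List

def choose_redundancy(max_redundancies: List[float], conservative: bool = True) -> float:
    """Pick one coding redundancy value for the round from the clients' reported maxima."""
    return min(max_redundancies) if conservative else max(max_redundancies)

def plan_epoch(redundancy: float, compute_rates: Dict[int, float],
               points_per_client: Dict[int, int]) -> Dict[str, object]:
    """Assumed planning rule: the epoch time covers the slowest client processing its
    share at the chosen redundancy, and each client's data-point budget fills that time."""
    times = {cid: redundancy * points_per_client[cid] / compute_rates[cid]
             for cid in compute_rates}
    epoch_time = max(times.values())
    budget = {cid: int(epoch_time * compute_rates[cid] / redundancy) for cid in compute_rates}
    return {"epoch_time": epoch_time, "points_to_process": budget}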
[0660] Example HA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: obtain, from each of a set of client computing nodes of an edge computing environment, a maximum coding redundancy value for a coded federated learning (CFL) cycle to be performed on a global machine learning (ML) model; determine a coding redundancy value based on the maximum coding redundancy values received from the edge computing devices; determine an epoch time and a number of data points to be processed at each edge computing device during each epoch of the CFL cycle based on the determined coding redundancy value; and cause the determined coding redundancy value, epoch time, and number of data points to be processed at each edge computing device to be transmitted to the set of edge computing devices. [0661] Example HA2 includes the subject matter of Example HA1 and/or other Example(s) herein and, optionally, wherein the processor is to determine the coding redundancy value by selecting a minimum value of the set of maximum coding redundancy values received from the edge computing devices. [0662] Example HA3 includes the subject matter of Example HA1 and/or other Example(s) herein and, optionally, wherein the processor is to determine the coding redundancy value by selecting a maximum value of the set of maximum coding redundancy values received from the edge computing devices. [0663] Example HA4 includes the subject matter of any one of Examples HA1-HA3 and/or other Example(s) herein and, optionally, wherein the processor is further to: access one or more coded data sets based on the transmitted determined epoch time and number of data points to be processed at each edge computing device, the coded data sets based on raw data sets of the edge computing devices; determine a gradient update to a global machine learning model based on the coded data sets; and update the global ML model based on the gradient update. [0664] Example HA5 includes the subject matter of Example HA4 and/or other Example(s) herein and, optionally, wherein the one or more coded data sets includes coded data sets obtained from the edge computing devices. [0665] Example HA6 includes the subject matter of Example HA4 and/or other Example(s) herein and, optionally, wherein the one or more coded data sets includes a coded data set received from a trusted server of the edge computing environment. [0666] Example HA7 includes the subject matter of any one of Examples HA4-HA6 and/or other Example(s) herein and, optionally, wherein the processor is further to access gradient updates to the global ML model obtained from the edge computing devices, and update the global model based on the gradient updates received from the edge computing devices. [0667] Example HA8 includes the subject matter of any one of Examples HA1-HA7 and/or other Example(s) herein and, optionally, wherein the apparatus further comprises a transceiver to provide wireless communication between the apparatus and other edge computing nodes of a wireless edge network.
[0668] Example HMM1 includes a method to be performed at an apparatus of a client computing node in an edge computing environment to provide for coded federated learning (CFL) of a global machine learning (ML) model, the method comprising: determining a maximum coding redundancy value for the client computing node; transmitting the maximum coding redundancy value to a central server of the edge computing environment; receiving, from the central server based on transmitting the maximum coding redundancy value, a coding redundancy value for performing the CFL, an epoch time, and a number of data points to be processed in each CFL epoch; determining a differential privacy parameter based on the received coding redundancy value, epoch time, and number of data points to be processed; generating a coded data set from a raw data set of the client computing node based on the determined differential privacy parameter; and transmitting the coded data set to the central server. [0669] Example HMM2 includes the subject matter of Example HMM1 and/or other Example(s) herein and, optionally, wherein transmitting the coded data set to the central server comprises transmitting the coded data set directly to the central server. [0670] Example HMM3 includes the subject matter of Example HMM1 and/or other Example(s) herein and, optionally, wherein transmitting the coded data set to the central server comprises transmitting the coded data set to a trusted server of the edge computing environment. [0671] Example HMM4 includes the subject matter of any one of Examples HMM1-HMM3 and/or other Example(s) herein and, optionally, wherein generating the coded data set from the raw data set comprises: generating a coding matrix based on a distribution; and generating a weighting matrix; wherein the coded data set is based on multiplying a matrix representing the raw data set with the coding matrix and the weighting matrix. [0672] Example HMM5 includes the subject matter of any one of Examples HMM1-HMM4 and/or other Example(s) herein and, optionally, wherein generating the coded data set from the raw data set comprises deleting a portion of the raw data set before generating the coded data set. [0673] Example HMM6 includes the subject matter of any one of Examples HMM1-HMM4 and/or other Example(s) herein and, optionally, wherein generating the coded data set from the raw data set comprises deleting a portion of the coded data set after generation. [0674] Example HMM7 includes the subject matter of any one of Examples HMM1-HMM4 and/or other Example(s) herein and, optionally, wherein generating the coded data set from the raw data set comprises injecting noise into the raw data set before generating the coded data set. [0675] Example HMM8 includes the subject matter of any one of Examples HMM1-HMM4 and/or other Example(s) herein and, optionally, wherein generating the coded data set from the raw data set comprises injecting noise into the coded data set after generation. [0676] Example HMM9 includes the subject matter of any one of Examples HMM7-HMM8 and/or other Example(s) herein and, optionally, wherein the injected noise is based on a noise matrix received from a trusted server of the edge computing environment. [0677] Example HMM10 includes the subject matter of any one of Examples HMM7-HMM8 and/or other Example(s) herein and, optionally, wherein the injected noise is based on a coded privacy budget, the coded privacy budget determined from an overall privacy budget obtained from the central server.
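Example HMM4 describes forming the coded data set by multiplying the raw data matrix by a coding matrix drawn from a distribution and a weighting matrix, and Examples HMM7-HMM8 add noise for privacy. The sketch below, with assumed shapes and a Gaussian coding matrix, is one way such a step could look; it is not the claimed procedure.

# Sketch of coded data generation per Examples HMM4 and HMM7-HMM8 (shapes assumed).
import numpy as np

def encode_dataset(X: np.ndarray, num_coded_rows: int, weights: np.ndarray,
                   noise_std: float = 0.0, rng=None) -> np.ndarray:
    """Return G @ W @ X (plus optional Gaussian noise), where
    X: (n, d) raw data matrix, W: (n, n) weighting matrix,
    G: (num_coded_rows, n) coding matrix sampled from a standard normal distribution."""
    rng = rng or np.random.default_rng()
    n = X.shape[0]
    G = rng.standard_normal((num_coded_rows, n))     # random linear coding matrix
    coded = G @ (weights @ X)
    if noise_std > 0:                                # privacy noise, HMM7/HMM8 style
        coded = coded + rng.normal(0.0, noise_std, size=coded.shape)
    return coded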
[0678] Example HMM11 includes the subject matter of any one of Examples HMM1-HMM10 and/or other Example(s) herein and, optionally, further comprising: determining a gradient update to a global machine learning model based on the raw data set; and transmitting the gradient update to the central server. [0679] Example HMM12 includes the subject matter of Example HMM11 and/or other Example(s) herein and, optionally, further comprising injecting noise into the gradient update, wherein the noise injected into the gradient update is based on an uncoded privacy budget determined from an overall privacy budget obtained from the central server. [0680] Example HAA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: determine a maximum coding redundancy value for the edge computing node; cause the maximum coding redundancy value to be transmitted to a central server of an edge computing environment; obtain, from the central server based on transmitting the maximum coding redundancy value, a coding redundancy value for performing a coded federated learning (CFL) cycle, an epoch time, and a number of data points to be processed in each CFL epoch; determine a differential privacy parameter based on the received coding redundancy value, epoch time, and number of data points to be processed; generate a coded data set from a raw data set of the edge computing node based on the determined differential privacy parameter; and cause the coded data set to be transmitted to the central server. [0681] Example HAA2 includes the subject matter of Example HAA1 and/or other Example(s) herein and, optionally, wherein the processor is to cause the coded data set to be directly transmitted to the central server. [0682] Example HAA3 includes the subject matter of Example HAA1 and/or other Example(s) herein and, optionally, wherein the processor is to cause the coded data set to be transmitted to a trusted server of the edge computing environment. [0683] Example HAA4 includes the subject matter of any one of Examples HAA1-HAA3 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coded data set from the raw data set by: generating a coding matrix based on a distribution; generating a weighting matrix; and multiplying a matrix representing the raw data set with the coding matrix and the weighting matrix. [0684] Example HAA5 includes the subject matter of any one of Examples HAA1-HAA4 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coded data set from the raw data set by deleting a portion of the raw data set before generating the coded data set. [0685] Example HAA6 includes the subject matter of any one of Examples HAA1-HAA4 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coded data set from the raw data set by deleting a portion of the coded data set after generation. [0686] Example HAA7 includes the subject matter of any one of Examples HAA1-HAA4 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coded data set from the raw data set by injecting noise into the raw data set before generating the coded data set.
[0687] Example HAA8 includes the subject matter of any one of Examples HAA1-HAA4 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coded data set from the raw data set by injecting noise into the coded data set after generation. [0688] Example HAA9 includes the subject matter of any one of Examples HAA7-HAA8 and/or other Example(s) herein and, optionally, wherein the processor is to inject a noise matrix received from a trusted server of the edge computing network. [0689] Example HAA10 includes the subject matter of any one of Examples HAA7-HAA8 and/or other Example(s) herein and, optionally, wherein the processor is to inject a noise matrix that is based on a coded privacy budget, the coded privacy budget determined from an overall privacy budget obtained from the central server. [0690] Example HAA11 includes the subject matter of any one of Examples HAA1-HAA10 and/or other Example(s) herein and, optionally, wherein the processor is further to: determine a gradient update to a global machine learning model based on the raw data set; and transmit the gradient update to the central server. [0691] Example HAA12 includes the subject matter of Example HAA11 and/or other Example(s) herein and, optionally, wherein the processor is further to inject noise into the gradient update, wherein the noise injected into the gradient update is based on an uncoded privacy budget determined from an overall privacy budget obtained from the central server. [0692] Example HAA13 includes the subject matter of any one of Examples HAA1-HAA12 and/or other Example(s) herein and, optionally, wherein the apparatus further comprises a transceiver to provide wireless communication between the apparatus and other edge computing nodes of a wireless edge network. [0693] Example HMMM1 includes a method comprising: receiving, at a trusted server of an edge computing environment, raw data sets from each of a plurality of client computing nodes of the edge computing environment; generating a coded data set based on the raw data sets; transmitting the coded data set to a central server of the edge computing environment for use in a coded federated learning epoch. [0694] Example HMMM2 includes the subject matter of Example HMMM1 and/or other Example(s) herein and, optionally, wherein generating the coded data set based on the raw data set comprises: generating a coding matrix based on a distribution; and generating a weighting matrix; wherein the coded data set is based on multiplying a matrix representing the raw data set with the coding matrix and the weighting matrix. [0695] Example HMMM3 includes the subject matter of Example HMMM1 or HMMM2 and/or other Example(s) herein and, optionally, wherein generating the coded data set from the raw data set comprises deleting a portion of the raw data set before generating the coded data set. [0696] Example HMMM4 includes the subject matter of Example HMMM1 or HMMM2 and/or other Example(s) herein and, optionally, wherein generating the coded data set from the raw data set comprises deleting a portion of the coded data set after generation. [0697] Example HMMM5 includes the subject matter of Example HMMM1 or HMMM2 and/or other Example(s) herein and, optionally, wherein generating the coded data set from the raw data set comprises injecting noise into the raw data set before generating the coded data set.
[0698] Example HMMM6 includes the subject matter of Example HMMM1 or HMMM2 and/or other Example(s) herein and, optionally, wherein generating the coded data set from the raw data set comprises injecting noise into the coded data set after generation. [0699] Example HMMM7 includes the subject matter of any one of Examples HMMM1-HMMM6 and/or other Example(s) herein and, optionally, wherein the raw data sets received from the client computing nodes are encrypted, and the method further comprises decrypting the raw data sets before generating the coded data set. [0700] Example HAAA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: obtain raw data sets from each of a plurality of client computing nodes of an edge computing environment; generate a coded data set based on the raw data sets; and cause the coded data set to be transmitted to a central server of the edge computing environment for use in a coded federated learning epoch. [0701] Example HAAA2 includes the subject matter of Example HAAA1 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coded data set based on the raw data set by: generating a coding matrix based on a distribution; generating a weighting matrix; and multiplying a matrix representing the raw data set with the coding matrix and the weighting matrix. [0702] Example HAAA3 includes the subject matter of Example HAAA1 or HAAA2 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coded data set from the raw data set by deleting a portion of the raw data set before generating the coded data set. [0703] Example HAAA4 includes the subject matter of Example HAAA1 or HAAA2 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coded data set from the raw data set by deleting a portion of the coded data set after generation. [0704] Example HAAA5 includes the subject matter of Example HAAA1 or HAAA2 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coded data set from the raw data set by injecting noise into the raw data set before generating the coded data set. [0705] Example HAAA6 includes the subject matter of Example HAAA1 or HAAA2 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coded data set from the raw data set by injecting noise into the coded data set after generation. [0706] Example HAAA7 includes the subject matter of any one of Examples HAAA1-HAAA6 and/or other Example(s) herein and, optionally, wherein the raw data sets received from the client computing nodes are encrypted, and the processor is further to decrypt the raw data sets before generating the coded data set. [0707] Example HAAA8 includes the subject matter of any one of Examples HAAA1-HAAA7 and/or other Example(s) herein and, optionally, wherein the apparatus further comprises a transceiver to provide wireless communication between the apparatus and other edge computing nodes of a wireless edge network.
[0708] Example IM1 includes a method to be performed at an apparatus of a client computing node in an edge computing environment to provide for coded federated learning (CFL) of a global machine learning (ML) model, the method comprising: receiving, from a central server of the edge computing environment, a differential privacy guarantee for a CFL cycle and a global ML model on which to perform the CFL cycle; determining a coded privacy budget and an uncoded privacy budget based on the differential privacy guarantee; generating a coded data set from a raw data set of the client computing node based on the coded privacy budget; transmitting the coded data set to the central server; performing the CFL cycle on the global ML model, wherein performing the CFL cycle comprises: computing an update to the global ML model based on the raw data set and the uncoded privacy budget; and transmitting the update to the global ML model to the central server. [0709] Example IM2 includes the subject matter of Example IM1 and/or other Example(s) herein and, optionally, wherein generating the coded data set comprises injecting noise into the coded data set. [0710] Example IM3 includes the subject matter of Example IM2 and/or other Example(s) herein and, optionally, wherein injecting noise into the coded data set comprises adding a noise matrix to the coded data set, the noise matrix based on the coded privacy budget. [0711] Example IM4 includes the subject matter of Example IM3 and/or other Example(s) herein and, optionally, wherein the noise matrix comprises values sampled independently from a zero mean Gaussian distribution. [0712] Example IM5 includes the subject matter of Example IM1 and/or other Example(s) herein and, optionally, wherein computing the update to the global ML model comprises injecting noise into the update to the global ML model. [0713] Example IM6 includes the subject matter of Example IM5 and/or other Example(s) herein and, optionally, wherein injecting noise into the update to the global ML model comprises adding a noise matrix to the update to the global ML model, the noise matrix based on the uncoded privacy budget. [0714] Example IM7 includes the subject matter of Example IM6 and/or other Example(s) herein and, optionally, wherein the noise matrix comprises values sampled independently from a zero mean Gaussian distribution. [0715] Example IM8 includes the subject matter of any one of Examples IM1-IM7 and/or other Example(s) herein and, optionally, wherein generating the coded data set comprises generating a coding matrix based on a distribution, wherein the coded data set is based on multiplying a matrix representing the raw data set with the coding matrix. [0716] Example IM9 includes the subject matter of Example IM8 and/or other Example(s) herein and, optionally, wherein the coding matrix is generated by sampling a standard normal distribution N(0, 1). [0717] Example IM10 includes the subject matter of any one of Examples IM1-IM9 and/or other Example(s) herein and, optionally, wherein computing the update to the global ML model comprises computing a gradient to the global ML model via gradient descent.
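Examples IM1-IM9 split a differential privacy guarantee into a coded budget, which governs the noise added to the coded data set, and an uncoded budget, which governs the noise added to the model update, with noise matrices sampled independently from zero-mean Gaussian distributions. The sketch below shows that split; the even budget division and the mapping from a budget to a noise standard deviation are stand-in assumptions, not a calibrated differential privacy mechanism.

# Sketch of budget splitting and Gaussian noise injection per Examples IM1-IM9.
import numpy as np

def split_privacy_budget(total_epsilon: float, coded_fraction: float = 0.5):
    """Divide the overall guarantee into coded and uncoded budgets (assumed even split)."""
    coded = total_epsilon * coded_fraction
    return coded, total_epsilon - coded

def add_gaussian_noise(M: np.ndarray, epsilon: float, sensitivity: float = 1.0, rng=None):
    """Add zero-mean Gaussian noise; sigma = sensitivity / epsilon is a placeholder
    calibration only, not the analysis an actual deployment would require."""
    rng = rng or np.random.default_rng()
    sigma = sensitivity / max(epsilon, 1e-9)
    return M + rng.normal(0.0, sigma, size=M.shape)

# Usage idea: noise the coded data with the coded budget and the gradient update
# with the uncoded budget before sending each to the central server.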
[0718] Example IA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: determine a coded privacy budget and an uncoded privacy budget based on a differential privacy guarantee for a coded federated learning (CFL) cycle and a global machine learning (ML) model on which to perform the CFL cycle; generate a coded data set from a raw data set of the edge computing node based on the coded privacy budget; cause the coded data set to be transmitted to a central server; perform the CFL cycle on the global ML model, wherein the processor is to perform the CFL cycle by: computing an update to the global ML model based on the raw data set and the uncoded privacy budget; and transmitting the update to the global ML model to the central server. [0719] Example IA2 includes the subject matter of Example IA1 and/or other Example(s) herein and, optionally, wherein the processor is further to inject noise into the coded data set. [0720] Example IA3 includes the subject matter of Example IA2 and/or other Example(s) herein and, optionally, wherein the processor is to inject noise into the coded data set by adding a noise matrix to the coded data set, the noise matrix based on the coded privacy budget. [0721] Example IA4 includes the subject matter of Example IA3 and/or other Example(s) herein and, optionally, wherein the processor is to generate the noise matrix by independently sampling a zero mean Gaussian distribution. [0722] Example IA5 includes the subject matter of Example IA1 and/or other Example(s) herein and, optionally, wherein the processor is further to inject noise into the update to the global ML model. [0723] Example IA6 includes the subject matter of Example IA5 and/or other Example(s) herein and, optionally, wherein the processor is to inject noise into the update to the global ML model by adding a noise matrix to the update to the global ML model, the noise matrix based on the uncoded privacy budget. [0724] Example IA7 includes the subject matter of Example IA6 and/or other Example(s) herein and, optionally, wherein the processor is to generate the noise matrix by independently sampling a zero mean Gaussian distribution. [0725] Example IA8 includes the subject matter of any one of Examples IA1-IA7 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coded data set by generating a coding matrix based on a distribution, wherein the coded data set is based on multiplying a matrix representing the raw data set with the coding matrix. [0726] Example IA9 includes the subject matter of Example IA8 and/or other Example(s) herein and, optionally, wherein the processor is to generate the coding matrix by sampling a standard normal distribution N(0, 1). [0727] Example IA10 includes the subject matter of any one of Examples IA1-IA9 and/or other Example(s) herein and, optionally, wherein the processor is to compute the update to the global ML model by computing a gradient to the global ML model via gradient descent. [0728] Example IA11 includes the subject matter of any one of Examples IA1-IA10 and/or other Example(s) herein and, optionally, wherein the apparatus further comprises a transceiver to provide wireless communication between the apparatus and other edge computing nodes of a wireless edge network.
[0729] Example IMM1 includes a method comprising: obtaining, at a central server node from each of a set of client compute nodes, a coded data set; computing a gradient update to a global machine learning (ML) model based on the coded data set; obtaining, from at least a portion of the set of client compute nodes, gradient updates to the global ML model computed by the client compute nodes on local training data; aggregating the gradient updates; and updating the global ML model based on the aggregated gradient updates. [0730] Example IAA1 includes an apparatus of an edge computing node, the apparatus comprising an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to: obtain, at a central server node from each of a set of client compute nodes, a coded data set; compute a gradient update to a global machine learning (ML) model based on the coded data set; obtain, from at least a portion of the set of client compute nodes, gradient updates to the global ML model computed by the client compute nodes on local training data; aggregate the gradient updates; and update the global ML model based on the aggregated gradient updates. [0731] Example JA1 includes an apparatus of an edge computing node, the apparatus including an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to perform rounds of a federated machine learning training within an edge computing network, each round including: for a number of cycles E’ of epoch number t: discarding an initial L clients from M clients sampled from N available clients, and optionally, wherein the initial L ≤ M and is based on one of a sampling distribution q i for datapoints at each client i within the M clients, or is based on a uniform sampling; selecting a subsequent L clients from remaining clients N-M or N-M+initial L clients, and optionally, wherein the subsequent L ≤ M and is based on an importance sampling distribution q i for datapoints at each client i of the remaining clients; determining load balancing parameters for the subsequent L clients; receiving coded data from each client i of the subsequent L clients; after the number of cycles E’, calculating a global weight w(t+1) corresponding to epoch number t+1 based on gradient g i (t) for each client i of the K clients at epoch number t and further based on gradient g i (t+1) for each client i of the K clients at epoch number t+1, and optionally, wherein g i (t) is calculated using data points based on the load balancing parameters, and g i (t+1) is calculated using g i (t) and the coded data. [0732] Example JA2 includes the subject matter of Example JA1, and optionally, wherein the load balancing parameters include t* and l i * for all clients i of the subsequent L clients, and optionally, wherein t* is a time duration to be used by clients i to perform a gradient update operation, and l i * is a number of datapoints to be used by each client i in the gradient update operation. [0733] Example JA3 includes the subject matter of any one of Examples JA1-JA2, the processor to further, after the number of cycles E’, cause the edge computing node to send a global weight w(t) corresponding to epoch number t to K clients selected from a last M clients, and optionally, wherein g i (t) is further based on a global weight w(t) corresponding to epoch number t.
[0734] Example JA4 includes the subject matter of any one of Examples JA1-JA3, the processor to further, prior to the number of cycles E’, receive l i from each client i of the N available clients, and optionally, wherein l i corresponds to a number of data points at each client i, calculate p i based on l i , and optionally, wherein p i corresponds to a ratio of the number of data points at each client i divided by a number of data points for all clients i, and calculate g i (t+1) based on p i /q i . [0735] Example JA5 includes the subject matter of any one of Examples JA1-JA4, and optionally, wherein the coded data is based on one of a Gaussian coding matrix or a Bernoulli coding matrix. [0736] Example JA6 includes the subject matter of any one of Examples JA1-JA5, the processor to further one of receive q i from each client i of the N clients prior to selecting the initial L clients, or calculate q i. [0737] Example JA7 includes the subject matter of any one of Examples JA1-JA6, and optionally, wherein the processor is to select K out of M uniformly. [0738] Example JA8 includes the subject matter of any one of Examples JA1-JA7, and optionally, wherein receiving coded data from each client i includes receiving a number of coded data from each client i of the N available clients, the number of coded data based on l i , l i * and t*, and optionally, wherein l i corresponds to the number of raw datapoints at client i, l i * corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric, and t* corresponds to a time duration to be used by clients i to perform a gradient update operation. [0739] Example JA9 includes the subject matter of Example JA8, and optionally, wherein coded data from each client i corresponds to data coded using random linear coding using a Gaussian generator matrix based on a number of coded data c i * at each client i during time duration t*, and optionally, wherein the Gaussian generator matrix is kept private from the apparatus. [0740] Example JA10 includes the subject matter of any one of Examples JA1-JA9, and optionally, wherein the edge network is a wireless edge network.
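A compact sketch of the two-stage client selection and the importance-weighted aggregation described in Examples JA1-JA10 follows. How q i is obtained, the uniform handling of the discard stage, and the plain importance-weighted average in the update are simplifying assumptions made only for illustration.

```python
# Server-side sketch of the two-stage selection in Examples JA1-JA10. The
# construction of q_i and the aggregation rule shown here are assumptions.
import numpy as np

def two_stage_selection(num_available, m, l, q=None, rng=None):
    # Stage 1: sample M clients from the N available clients and discard an
    # initial L of them (uniform here; JA1 also allows sampling based on q_i).
    rng = rng or np.random.default_rng()
    sampled = rng.choice(num_available, size=m, replace=False)
    discarded = sampled[:l]
    # Stage 2: select a subsequent L clients from the remaining N-M clients,
    # using the importance sampling distribution q_i when it is available.
    remaining = np.setdiff1d(np.arange(num_available), sampled)
    if q is not None:
        probs = q[remaining] / q[remaining].sum()
        selected = rng.choice(remaining, size=l, replace=False, p=probs)
    else:
        selected = rng.choice(remaining, size=l, replace=False)
    return discarded, selected

def global_weight_update(w, grads, p, q, lr=0.01):
    # Importance-weighted aggregation in the spirit of JA4: each client
    # gradient is scaled by p_i / q_i; averaging them is an assumption.
    scaled = [(p[i] / q[i]) * g for i, g in grads.items()]
    return w - lr * np.mean(scaled, axis=0)
```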
[0741] Example JM1 includes a method to be performed at an apparatus of an edge computing node to be operated in an edge computing network, the method including performing rounds of a federated machine learning training within the edge computing network, including: for a number of cycles E’ of epoch number t: discarding an initial L clients from M clients sampled from N available clients, and optionally, wherein the initial L ≤ M and is based on one of a sampling distribution q i for datapoints at each client i within the M clients, or is based on a uniform sampling; selecting a subsequent L clients from remaining clients N-M or N-M+initial L clients, and optionally, wherein the subsequent L ≤ M and is based on an importance sampling distribution q i for datapoints at each client i of the remaining clients; determining load balancing parameters for the subsequent L clients; receiving coded data from each client i of the subsequent L clients; after the number of cycles E’, calculating a global weight w(t+1) corresponding to epoch number t+1, based on gradient g i (t) for each client i of the K clients at epoch number t and further based on gradient g i (t+1) for each client i of the K clients at epoch number t+1, and optionally, wherein g i (t) is calculated using data points based on the load balancing parameters, and g i (t+1) is calculated using g i (t) and the coded data. [0742] Example JM2 includes the subject matter of Example JM1, and optionally, wherein the load balancing parameters include t* and l i * for all clients i of the subsequent L clients, and optionally, wherein t* is a time duration to be used by clients i to perform a gradient update operation, and l i * is a number of datapoints to be used by each client i in the gradient update operation. [0743] Example JM3 includes the subject matter of any one of Examples JM1-JM2, further including, after the number of cycles E’, causing the edge computing node to send a global weight w(t) corresponding to epoch number t to K clients selected from a last M clients, and optionally, wherein g i (t) is further based on a global weight w(t) corresponding to epoch number t. [0744] Example JM4 includes the subject matter of any one of Examples JM1-JM3, further including, prior to the number of cycles E’, receiving l i from each client i of the N available clients, and optionally, wherein l i corresponds to a number of data points at each client i, calculating p i based on l i , and optionally, wherein p i corresponds to a ratio of the number of data points at each client i divided by a number of data points for all clients i, and calculating g i (t+1) based on p i /q i . [0745] Example JM5 includes the subject matter of any one of Examples JM1-JM4, and optionally, wherein the coded data is based on one of a Gaussian coding matrix or a Bernoulli coding matrix. [0746] Example JM6 includes the subject matter of any one of Examples JM1-JM5, further including one of receiving q i from each client i of the N clients prior to selecting the initial L clients, or calculating q i. [0747] Example JM7 includes the subject matter of any one of Examples JM1-JM6, further including selecting K out of M uniformly.
[0748] Example JM8 includes the subject matter of any one of Examples JM1-JM7, and optionally, wherein receiving coded data from each client i includes receiving a number of coded data from each client i of the N available clients, the number of coded data based on l i , l i * and t*, and optionally, wherein l i corresponds to the number of raw datapoints at client i, l i * corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric, and t* corresponds to a time duration to be used by clients i to perform a gradient update operation. [0749] Example JM9 includes the subject matter of Example JM8, and optionally, wherein coded data from each client i corresponds to data coded using random linear coding using a Gaussian generator matrix based on a number of coded data c i * at each client i during time duration t*, and optionally, wherein the Gaussian generator matrix is kept private from the apparatus. [0750] Example JM10 includes the subject matter of any one of Examples JM1-JM9, and optionally, wherein the edge network is a wireless edge network. [0751] Example KA1 includes an apparatus of an edge computing node, the apparatus including an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to perform rounds of a federated machine learning training within an edge computing network, each round including: as a one stage operation, receiving a number of coded data from each client i of N available clients or of L clients, wherein L≤N, the number of coded data based on l i , l i * and t*, wherein l i corresponds to the number of raw datapoints at client i, l i * corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric, and t* corresponds to a deadline time duration representing a smallest epoch time window within which the apparatus and the client i can jointly calculate a gradient; for every epoch number t and until time t*: receiving local gradients g i (t) from l i *(t*) raw data points, wherein l i *(t*) corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric during time t*; calculating a global gradient based on a per client gradient from the coded data of each client i; on or after time t*, calculating an updated global gradient from the global gradient calculated at each epoch number t. [0752] Example KA2 includes the subject matter of Example KA1, and optionally, wherein coded data from each client i corresponds to data coded using random linear coding using a Gaussian generator matrix based on a number of coded data c i * at each client i during time duration t*, and optionally, wherein the Gaussian generator matrix is kept private from the apparatus. [0753] Example KA3 includes the subject matter of any one of Examples KA1-KA2, and optionally, wherein calculating the global gradient includes calculating a per client gradient from the coded data of each client i separately, and combining calculated per client gradients for all clients i to obtain a global gradient.
[0754] Example KA4 includes the subject matter of Example KA3, the processor to further oversample the coded data of each client i to obtain oversampled coded data for each client i, and combine the oversampled coded data to compute composite coded data, and optionally, wherein calculating the per client gradient from the coded data of each client i separately includes calculating, for each client i separately, the per client gradient from the composite coded data. [0755] Example KA5 includes the subject matter of any one of Examples KA1-KA2, the processor to further oversample the coded data of each client i to obtain oversampled coded data for each client i, and to combine the oversampled coded data to compute composite coded data, and optionally, wherein calculating the global gradient includes calculating the global gradient from the composite coded data for all clients i. [0756] Example KA6 includes the subject matter of any one of Examples KA4-KA5, and optionally, wherein the processor is to compute the composite coded data by: computing an over-sampled encoded data for each client i using, for each client i: a Fourier transform matrix Fi having a dimension based on a maximum number of coded data from client i; and a coded data c i * at client i; computing the composite coded data by summing the over-sampled encoded data for each client i across all clients i. [0757] Example KA7 includes the subject matter of Example KA1, the processor to further, for a number of cycles E’ of epoch number t: select and discard an initial L clients from M clients sampled from N available clients, and optionally, wherein the initial L ≤ M and is based on one of a sampling distribution q i for datapoints at each client i within the M clients, or is based on a uniform sampling; select a subsequent L clients from remaining clients N-M or N-M+initial L, and optionally, wherein the subsequent L ≤ M and is based on a sampling distribution q i for datapoints at each client i within the remaining clients; determine load balancing parameters for the subsequent L clients; as a one stage operation, receive a number of coded data from each client i of the subsequent L clients, the number of coded data based on l i , l i * and t*.
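The composite-coded-data construction of Examples KA4-KA6 can be sketched as follows. The zero-padding of each client's coded block to a common size and the use of a single square DFT matrix for all clients are assumptions; the examples only specify a Fourier transform matrix Fi sized by the maximum number of coded data and a summation across clients.

```python
# Rough sketch of Examples KA4-KA6: over-sample each client's coded data with
# a Fourier transform matrix and sum the results into composite coded data.
# Zero-padding to the common size c_max is an assumption.
import numpy as np

def composite_coded_data(coded_blocks, c_max):
    # coded_blocks: dict client_id -> (c_i, d) array of coded data rows.
    F = np.fft.fft(np.eye(c_max))  # c_max x c_max DFT matrix (stands in for Fi)
    d = next(iter(coded_blocks.values())).shape[1]
    composite = np.zeros((c_max, d), dtype=complex)
    for block in coded_blocks.values():
        padded = np.zeros((c_max, d))
        padded[:block.shape[0], :] = block   # assumed zero-padding of c_i rows
        composite += F @ padded              # over-sampled encoded data
    return composite                         # summed across all clients i
```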
[0758] Example KM1 includes a method to be performed at an apparatus of an edge computing node to be operated in an edge computing network, the method including performing rounds of a federated machine learning training within the edge computing network, including: as a one stage operation, receiving a number of coded data from each client i of N available clients or of L clients, and optionally, wherein L≤N, the number of coded data based on l i , l i * and t*, and optionally, wherein l i corresponds to the number of raw datapoints at client i, l i * corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric, and t* corresponds to a deadline time duration representing a smallest epoch time window within which the apparatus and the client i can jointly calculate a gradient; for every epoch number t and until time t*: receiving local gradients g i (t) from l i *(t*) raw data points, and optionally, wherein l i *(t*) corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric during time t*; calculating a global gradient based on a per client gradient from the coded data of each client i; on or after time t*, calculating an updated global gradient from the global gradient calculated at each epoch number t. [0759] Example KM2 includes the subject matter of Example KM1, and optionally, wherein coded data from each client i corresponds to data coded using random linear coding using a Gaussian generator matrix based on a number of coded data c i * at each client i during time duration t*, and optionally, wherein the Gaussian generator matrix is kept private from the apparatus. [0760] Example KM3 includes the subject matter of any one of Examples KM1-KM2, wherein calculating the global gradient includes calculating a per client gradient from the coded data of each client i separately, and combining calculated per client gradients for all clients i to obtain a global gradient. [0761] Example KM4 includes the subject matter of Example KM3, further including oversampling the coded data of each client i to obtain oversampled coded data for each client i, and combining the oversampled coded data to compute composite coded data, and optionally, wherein calculating the per client gradient from the coded data of each client i separately includes calculating, for each client i separately, the per client gradient from the composite coded data. [0762] Example KM5 includes the subject matter of any one of Examples KM1-KM2, further including oversampling the coded data of each client i to obtain oversampled coded data for each client i, and combining the oversampled coded data to compute composite coded data, and optionally, wherein calculating the global gradient includes calculating the global gradient from the composite coded data for all clients i. [0763] Example KM6 includes the subject matter of any one of Examples KM4-KM5, and optionally, wherein the composite coded data is computed by: computing an over-sampled encoded data for each client i using, for each client i: a Fourier transform matrix Fi having a dimension based on a maximum number of coded data from client i; and a coded data c i * at client i; computing the composite coded data by summing the over-sampled encoded data for each client i across all clients i.
[0764] Example KM7 includes the subject matter of Example KM1, further including, for a number of cycles E’ of epoch number t: selecting and discarding an initial L clients from M clients sampled from N available clients, and optionally, wherein the initial L ≤ M and is based on one of a sampling distribution q i for datapoints at each client i within the M clients, or is based on a uniform sampling; selecting a subsequent L clients from remaining clients N-M or N-M+initial L, and optionally, wherein the subsequent L ≤ M and is based on a sampling distribution q i for datapoints at each client i within the remaining clients; determining load balancing parameters for the subsequent L clients; as a one stage operation, receiving a number of coded data from each client i of the subsequent L clients, the number of coded data based on l i , l i * and t*. [0765] Example PCA1 includes an apparatus of a first edge computing node to be operated in an edge computing network, the apparatus including an interconnect interface to connect the apparatus to one or more components of the first edge computing node, and a processor to: decode a first message from a second edge computing node, the first message including information on a target data distribution for federated machine learning training; determine the target data distribution from the first message; encode a client report for transmission to the second edge computing node based on a divergence between a local data distribution of the first edge computing node and the target data distribution; cause transmission of the client report to the second computing node; decode a second message from the second edge computing node, the second message including information on a global model associated with a global epoch of the federated machine learning training; and update a local gradient at the first edge computing node based on the global model. [0766] Example PCA2 Includes the subject matter of Example PCA1, the processor further to: encode weighted loss information for transmission to the second edge computing node; cause transmission of the weighted loss information to the second edge computing node; encode for transmission to the second edge computing node a local weight update w i,t+e = w i,t+e−1 − η ∗ g i,t+e ∗ p i / q i , where w i,t+e corresponds to a weight update of the first edge computing node at global epoch t and local epoch e, η is a learning rate, g i,t+e corresponds to a gradient estimate for the first edge computing node at global epoch t and local epoch e, p i corresponds to an original sampling distribution of the first edge computing node, and q i is the probability distribution for the first edge computing node; and cause transmission of the local weight update to the second edge computing node. [0767] Example PCA3 Includes the subject matter of any one of Examples PCA1-PCA2, wherein a data distribution at the first edge computing node corresponds to non-independent and identically distributed data (non-i.i.d.).
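As a small worked illustration of the local weight update recited in Example PCA2, the reconstructed formula can be written directly; treating p i and q i as scalar values for the client is an assumption made for readability.

```python
# One-line sketch of the Example PCA2 update:
# w_{i,t+e} = w_{i,t+e-1} - eta * g_{i,t+e} * p_i / q_i.
def local_weight_update(w_prev, grad, eta, p_i, q_i):
    # p_i: original sampling distribution value for this client (assumed scalar),
    # q_i: its importance sampling probability (assumed scalar).
    return w_prev - eta * grad * (p_i / q_i)
```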
[0768] Example PCA4 Includes the subject matter of any one of Example PCA1-PCA3, the processor to perform rounds of federated machine learning training including: processing a capability request from the second edge computing node; generating a capability report on compute capabilities and communication capabilities of the edge computing node in response to the capability request; causing transmission of the capability report to the second edge computing node; after causing transmission of the capability report, decoding the second message from the second edge computing node to initialize training parameters of a federated machine learning training round with the second edge computing node; and reporting updated model weights based on the global model for the federated machine learning training round. [0769] Example PCA5 Includes the subject matter of Example PCA4, wherein the compute capabilities include a compute rate and the communication capabilities include an uplink communication time to the second edge computing node. [0770] Example PCA6 Includes the subject matter of any one of Examples PCA4-PCA5, wherein the capability report further includes information on a number of training examples at the client. [0771] Example PCA7 Includes the subject matter of any one of Examples PCA1-PCA3, the processor to: encode, for transmission to the second edge computing node, a capability report including at least one of information based on a training loss of the first edge computing node for a global epoch of a federated machine learning training by the second edge computing node, or information based on a gradient of the first edge computing node with respect to a pre-activation output of the first edge computing node; cause transmission of the capability report; and for a next global epoch of the federated machine learning training, decode an updated global model from the second edge computing node. [0772] Example PCA8 Includes the subject matter of Example PCA7, the processor to further decode the second message prior to causing transmission of the capability report. [0773] Example PCA9 Includes the subject matter of any one of Examples PCA7-PCA8, wherein the capability report further includes a compute rate, and at least one of an uplink communication time or a downlink communication time for communication with the second edge computing node. [0774] Example PCA10 Includes the subject matter of Example PCA1, the processor to: compute kernel coefficients based on a kernel function, a local raw training data set of the first edge computing node, and a raw label set corresponding to the local raw training data set; generate a coded training data set from the raw training data set; generate a coded label set based on the kernel coefficients, the kernel function, and the raw label set; and cause the coded training data set and coded label set to be transmitted to the second edge computing node. [0775] Example PCA11 Includes the subject matter of Example PCA1, the processor to further: access a local training data set of the first edge computing node; apply a Random Fourier Feature Mapping (RFFM) transform to the training data set to yield a transformed training data set; after decoding the second message, iteratively, until the global model converges: compute an update to the global model using the transformed training data set and a raw label set corresponding to the training data set to obtain an updated global model; and cause the update to be transmitted to the second edge computing node.
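Example PCA11 above can be pictured with a short client-side sketch: the local data is passed through a Random Fourier Feature Mapping and updates are then computed iteratively on the transformed data. The RBF-kernel feature map, the squared loss, and the convergence test used here are assumptions, not part of the example.

```python
# Sketch of the client-side flow in Example PCA11. The RBF-style random
# Fourier features, the squared-error loss, and the stopping rule are assumed.
import numpy as np

def rffm_transform(X, num_features=256, gamma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    # Standard random Fourier features approximating an RBF kernel (assumed).
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(X.shape[1], num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

def train_on_transformed(Z, y, w_global, lr=0.1, tol=1e-4, max_iter=100):
    # Iteratively compute updates on the transformed data set until the
    # model stops changing (assumed convergence criterion).
    w = w_global.copy()
    for _ in range(max_iter):
        grad = Z.T @ (Z @ w - y) / len(y)
        w_new = w - lr * grad
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```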
[0776] Example PCA12 Includes the subject matter of Example PCA1, the processor to: access a local training data set of the first edge computing node and a label set corresponding to the local training data set; apply a Random Fourier Feature Mapping (RFFM) transform to the training data set to yield a transformed training data set; estimate a local machine learning (ML) model based on the transformed training data set and the label set; generate a coded training data set from the transformed training data set; generate a coded label set based on the coded training data set and the estimated local ML model; and cause the coded training data set and coded label set to be transmitted to the second edge computing node. [0777] Example PCA13 Includes the subject matter of Example PCA1, the processor to: access a subset of a local training data set of the first edge computing node; generate a transformed training data subset based on a Random Fourier Feature Mapping (RFFM) transform and the training data subset; generate a coding matrix based on a distribution; generate a weighting matrix; generate a coded training data mini-batch based on multiplying the transformed training data subset with the coding matrix and the weighting matrix; and cause the coded training data mini-batch to be transmitted to the second edge computing node; wherein the weighting matrix is based on a probability of whether the coded training data mini-batch will be received at the second edge computing node. [0778] Example PCA14 Includes the subject matter of Example PCA1, the processor to: obtain, from each of a set of first edge computing nodes of the edge computing network, a maximum coding redundancy value for a coded federated learning (CFL) cycle to be performed on a global machine learning (ML) model of the federated ML training; determine a coding redundancy value based on the maximum coding redundancy values received from the edge computing devices; determine an epoch time and a number of data points to be processed at each edge computing device during each epoch of the CFL cycle based on the selected coding redundancy value; and cause the determined coding redundancy value, epoch time, and number of data points to be processed at each edge computing device to be transmitted to the set of edge computing devices. [0779] Example PCA15 Includes the subject matter of Example PCA1, the processor to: determine a coded privacy budget and an uncoded privacy budget based on a differential privacy guarantee for a cycle of the federated machine learning; generate a coded data set from a raw data set of the first edge computing node based on the coded privacy budget; cause the coded data set to be transmitted to the second edge computing node; perform a round of the federated machine learning on the global model including: after receiving the second message, computing an update to the global model based on the raw data set and the uncoded privacy budget; and causing transmission of the update to the global model to the second edge computing node. [0780] Example PCA16 Includes the subject matter of any one of Examples PCA1-PCA15, wherein the first edge computing node is a mobile client computing node.
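The coded mini-batch construction of Example PCA13 above admits a brief sketch. The Gaussian coding matrix, the diagonal weighting of 1/p tied to the reception probability, and the order of multiplication are assumptions chosen only to make the structure concrete.

```python
# Sketch of Example PCA13: RFFM-transform a data subset, multiply by a coding
# matrix and a weighting matrix tied to the probability that the coded
# mini-batch is received. The diagonal 1/p weighting is an assumed form.
import numpy as np

def coded_mini_batch(Z_subset, num_coded_rows, receive_prob, rng=None):
    # Z_subset: (b, D) RFFM-transformed training data subset.
    rng = rng or np.random.default_rng()
    G = rng.standard_normal((num_coded_rows, Z_subset.shape[0]))  # coding matrix
    W = np.eye(num_coded_rows) / receive_prob   # weighting matrix (assumed form)
    return W @ G @ Z_subset                     # coded training data mini-batch
```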
[0781] Example PCM1 includes a method to be performed at a first edge computing node in an edge computing network, the method including: decoding a first message from a second edge computing node, the first message including information on a target data distribution for federated machine learning training; determining the target data distribution from the first message; encoding a client report for transmission to the second edge computing node based on a divergence between a local data distribution of the first edge computing node and the target data distribution; causing transmission of the client report to the second computing node; decoding a second message from the second edge computing node, the second message including information on a global model associated with a global epoch of the federated machine learning training; and updating a local gradient at the first edge computing node based on the global model. [0782] Example PCM2 Includes the subject matter of Example PCM1, the method including: [0783] encoding weighted loss information for transmission to the second edge computing node; causing transmission of the weighted loss information to the second edge computing node; encoding for transmission to the second edge computing node a local weight update w i,t+e = w i,t+e−1 − η ∗ g i,t+e ∗ p i / q i , where w i,t+e corresponds to a weight update of the first edge computing node at global epoch t and local epoch e, η is a learning rate, g i,t+e corresponds to a gradient estimate for the first edge computing node at global epoch t and local epoch e, p i corresponds to an original sampling distribution of the first edge computing node, and q i is the probability distribution for the first edge computing node; and causing transmission of the local weight update to the second edge computing node. [0784] Example PCM3 Includes the subject matter of any one of Examples PCM1-PCM2, wherein a data distribution at the first edge computing node corresponds to non-independent and identically distributed data (non-i.i.d.). [0785] Example PCM4 Includes the subject matter of any one of Example PCM1-PCM3, the method including performing rounds of federated machine learning training including: processing a capability request from the second edge computing node; generating a capability report on compute capabilities and communication capabilities of the edge computing node in response to the capability request; causing transmission of the capability report to the second edge computing node; after causing transmission of the capability report, decoding the second message from the second edge computing node to initialize training parameters of a federated machine learning training round with the second edge computing node; and reporting updated model weights based on the global model for the federated machine learning training round. [0786] Example PCM5 Includes the subject matter of Example PCM4, wherein the compute capabilities include a compute rate and the communication capabilities include an uplink communication time to the second edge computing node. [0787] Example PCM6 Includes the subject matter of any one of Examples PCM4-PCM5, wherein the capability report further includes information on a number of training examples at the client.
[0788] Example PCM7 Includes the subject matter of any one of Examples PCM1-PCM3, the method including: encoding, for transmission to the second edge computing node, a capability report including at least one of information based on a training loss of the first edge computing node for a global epoch of a federated machine learning training by the second edge computing node, or information based on a gradient of the first edge computing node with respect to a pre-activation output of the first edge computing node; causing transmission of the capability report; and for a next global epoch of the federated machine learning training, decoding an updated global model from the second edge computing node. [0789] Example PCM8 Includes the subject matter of Example PCM7, the method including decoding the second message prior to causing transmission of the capability report. [0790] Example PCM9 Includes the subject matter of any one of Examples PCM7-PCM8, wherein the capability report further includes a compute rate, and at least one of an uplink communication time or a downlink communication time for communication with the second edge computing node. [0791] Example PCM10 Includes the subject matter of Example PCM1, the method including: computing kernel coefficients based on a kernel function, a local raw training data set of the first edge computing node, and a raw label set corresponding to the local raw training data set; generating a coded training data set from the raw training data set; generating a coded label set based on the kernel coefficients, the kernel function, and the raw label set; and causing the coded training data set and coded label set to be transmitted to the second edge computing node. [0792] Example PCM11 Includes the subject matter of Example PCM1, the method further including: accessing a local training data set of the first edge computing node; applying a Random Fourier Feature Mapping (RFFM) transform to the training data set to yield a transformed training data set; after decoding the second message, iteratively, until the global model converges: computing an update to the global model using the transformed training data set and a raw label set corresponding to the training data set to obtain an updated global model; and causing the update to be transmitted to the second edge computing node. [0793] Example PCM12 Includes the subject matter of Example PCM1, the method including: accessing a local training data set of the first edge computing node and a label set corresponding to the local training data set; applying a Random Fourier Feature Mapping (RFFM) transform to the training data set to yield a transformed training data set; estimating a local machine learning (ML) model based on the transformed training data set and the label set; generating a coded training data set from the transformed training data set; generating a coded label set based on the coded training data set and the estimated local ML model; and causing the coded training data set and coded label set to be transmitted to the second edge computing node.
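Example PCM12 above, in which a client estimates a local model on its RFFM-transformed data and then derives coded labels, can be sketched briefly. The least-squares estimate of the local model and the rule "coded labels = coded data times the estimated model" are assumptions consistent with, but not dictated by, the example's wording.

```python
# Sketch of Example PCM12: estimate a local model on RFFM-transformed data,
# code the transformed data, and derive coded labels from the coded data and
# the estimated model. The lstsq estimate and label rule are assumptions.
import numpy as np

def coded_set_from_local_model(Z, y, num_coded_rows, rng=None):
    # Z: (n, D) RFFM-transformed training data; y: (n,) raw label set.
    rng = rng or np.random.default_rng()
    theta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)  # estimated local ML model
    G = rng.standard_normal((num_coded_rows, Z.shape[0]))
    Z_coded = G @ Z                                    # coded training data set
    y_coded = Z_coded @ theta_hat                      # coded label set
    return Z_coded, y_coded
```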
[0794] Example PCM13 Includes the subject matter of Example PCM1, the method including: accessing a subset of a local training data set of the first edge computing node; generating a transformed training data subset based on a Random Fourier Feature Mapping (RFFM) transform and the training data subset; generating a coding matrix based on a distribution; generating a weighting matrix; generating a coded training data mini-batch based on multiplying the transformed training data subset with the coding matrix and the weighting matrix; and causing the coded training data mini-batch to be transmitted to the second edge computing node; wherein the weighting matrix is based on a probability of whether the coded training data mini-batch will be received at the second edge computing node. [0795] Example PCM14 Includes the subject matter of Example PCM1, the method including: obtaining, from each of a set of first edge computing nodes of the edge computing network, a maximum coding redundancy value for a coded federated learning (CFL) cycle to be performed on a global machine learning (ML) model of the federated ML training; determining a coding redundancy value based on the maximum coding redundancy values received from the edge computing devices; determining an epoch time and a number of data points to be processed at each edge computing device during each epoch of the CFL cycle based on the selected coding redundancy value; and causing the determined coding redundancy value, epoch time, and number of data points to be processed at each edge computing device to be transmitted to the set of edge computing devices. [0796] Example PCM15 Includes the subject matter of Example PCM1, the method including: determining a coded privacy budget and an uncoded privacy budget based on a differential privacy guarantee for a cycle of the federated machine learning; generating a coded data set from a raw data set of the first edge computing node based on the coded privacy budget; causing the coded data set to be transmitted to the second edge computing node; performing a round of the federated machine learning on the global model including: after receiving the second message, computing an update to the global model based on the raw data set and the uncoded privacy budget; and causing transmission of the update to the global model to the second edge computing node. [0797] Example PSA1 relates to an apparatus of an edge computing node to be operated in an edge computing network, the apparatus including an interconnect interface to connect the apparatus to one or more components of the edge computing node, and a processor to perform rounds of federated machine learning training including: processing client reports from a plurality of clients of the edge computing network; selecting a candidate set of clients from the plurality of clients for an epoch of the federated machine learning training; causing a global model to be sent to the candidate set of clients; and performing the federated machine learning training on the candidate set of clients. 
[0798] Example PSA2 includes the subject matter of Example PSA1, wherein the processor is to perform rounds of federated machine learning training further including: causing dissemination, to a plurality of clients of the edge computing network, of a target data distribution at the edge computing node for federated machine learning training, wherein each of the respective reports is based on a divergence between a local data distribution of a respective one of the clients and the target data distribution; assigning a respective weight to each respective divergence based on a size of the divergence, with higher divergences having higher weights; and selecting the candidate set including using a round robin approach based on the weights. [0799] Example PSA3 includes the subject matter of Example PSA2, wherein the divergence corresponding to each client is based on one of a Kullback-Leibler divergence or a distance between a local data distribution of said each client and the target data distribution. [0800] Example PSA4 includes the subject matter of any one of Examples PSA2-PSA3, wherein selecting a candidate set of clients is based on a determination as to whether the data distribution of each client of the candidate set shows non-independent and identically distributed data (non-i.i.d.). [0801] Example PSA5 includes the subject matter of Example PSA1, wherein the processor is to perform rounds of federated machine learning training further including: processing weighted loss information from each of the clients; determining a probability distribution q for each of the clients based on the weighted loss information from corresponding ones of said each of the clients, wherein q is further based on an amount of data at said each of the clients, and a weight matrix for the federated machine learning training for said each of the clients for the global epoch; and selecting the candidate set including selecting a number of clients with highest probability distribution q as the candidate set. [0802] Example PSA6 includes the subject matter of Example PSA1, wherein the reports include at least one of information based on training losses of respective ones of said clients for an epoch of the training, or information based on gradients of respective ones of said clients with respect to a pre-activation output of respective ones of said clients; and the processor is to perform rounds of federated machine learning training further including: rank ordering the clients based on one of their training losses or their gradients; and selecting the candidate set including selecting a number of clients with highest training losses or highest gradients as the candidate set. [0803] Example PSA7 includes the subject matter of Example PSA6, wherein selecting a number of clients with highest training losses or highest gradients as the candidate set includes selecting a first number of clients with highest training losses or highest gradients as an intermediate set, and selecting a second number of clients from the intermediate set based on respective upload times of the first number of clients, the second number of clients corresponding to the candidate set.
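The two-step selection described in Examples PSA6-PSA7 above can be summarized in a few lines. Using training losses rather than gradients, and simple argsort-based tie handling, are choices made only for illustration.

```python
# Sketch of Examples PSA6-PSA7: rank clients by reported training loss, keep
# the k1 with the highest losses as an intermediate set, then keep the k2 of
# those with the shortest upload times as the candidate set.
import numpy as np

def select_candidates(losses, upload_times, k1, k2):
    # losses, upload_times: NumPy arrays indexed by client id.
    by_loss = np.argsort(losses)[::-1][:k1]                      # highest losses
    by_upload = by_loss[np.argsort(upload_times[by_loss])][:k2]  # fastest uploads
    return by_upload                                             # candidate set
```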
[0804] Example PSA8 includes the subject matter of Example PSA1, wherein the processor is to perform rounds of federated machine learning training further including: grouping clients from a plurality of available clients into respective sets of clients based on compute capabilities and communication capabilities of each of the available clients; and selecting a candidate set of clients based on a round robin approach, or based at least on one of a data distribution of each client of the candidate set or a determination that a global model of the machine learning training at the edge computing node has reached a minimum accuracy threshold. [0805] Example PSA9 includes the subject matter of Example PSA8, wherein the compute capabilities include a compute rate and the communication capabilities include an uplink communication time to the edge computing node. [0806] Example PSA10 includes the subject matter of any one of Examples PSA1-PSA9, wherein the processor is further to cause dissemination, to the plurality of clients of the edge computing network, of a global model corresponding to an epoch of a federated machine learning training. [0807] Example PSA11 includes the subject matter of any one of Examples PSA1-PSA10, wherein the processor is further to perform rounds of federated machine learning training including: obtaining coded training data from each of the selected clients; and performing machine learning training on the coded training data. [0808] Example PSA12 includes the subject matter of Example PSA11, wherein the processor is further to: determine a coding redundancy value to use in the machine learning training on the coded training data based on maximum coding redundancy values from each of the clients, the maximum coding redundancy values indicating a maximum number of coded training data points a respective client may provide; determine an epoch time and a number of data points to be processed at each client during each round of federated machine learning based on the selected coding redundancy value; and cause the determined coding redundancy value, epoch time, and number of data points to be processed at each client to be transmitted to the selected clients. [0809] Example PSA13 includes the subject matter of Example PSA1, wherein the processor is further to, for a number of cycles E’ of epoch number t: discard an initial L clients from M clients sampled from N available clients; select a subsequent L clients from remaining clients N-M or N-M+initial L clients; determine load balancing parameters for the subsequent L clients; receive coded data from each client i of the subsequent L clients; after the number of cycles E’, calculate a global weight w(t+1) corresponding to epoch number t+1, based on gradient gi(t) for each client i of the K clients at epoch number t and further based on gradient gi(t+1) for each client i of the K clients at epoch number t+1, wherein gi(t) is calculated using data points based on the load balancing parameters, and gi(t+1) is calculated using gi(t) and the coded data.
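The server-side negotiation of Example PSA12 above can be illustrated with a tiny planning sketch. Taking the minimum of the reported maxima as the coding redundancy, letting the slowest client set the epoch time, and splitting the per-client load proportionally to compute rate are assumptions, not the claimed rule.

```python
# Sketch of Example PSA12: choose a coding redundancy no client can exceed,
# then derive an epoch time and a per-client number of data points.
def plan_cfl_round(max_redundancy, compute_rates, points_per_epoch):
    # max_redundancy, compute_rates: dicts keyed by client id.
    redundancy = min(max_redundancy.values())          # assumed: min of maxima
    # Assumed: the slowest client bounds the epoch time for the target load.
    epoch_time = max(points_per_epoch / r for r in compute_rates.values())
    # Assumed: each client processes what it can finish within the epoch time.
    load = {c: int(r * epoch_time) for c, r in compute_rates.items()}
    return redundancy, epoch_time, load
```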
[0810] Example PSA14 includes the subject matter of Example PSA1, wherein the processor is further to: as a one stage operation, receive a number of coded training data points from each client i of N available clients or of L clients, wherein L≤N, the number of coded data points based on li, li* and t*, wherein li corresponds to the number of raw datapoints at client i, li* corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric, and t* corresponds to a deadline time duration representing a smallest epoch time window within which the apparatus and the client i can jointly calculate a gradient; for every epoch number t and until time t*: receive local gradients gi(t) from li*(t*) raw data points, wherein li*(t*) corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric during time t*; calculate a global gradient based on a per client gradient from the coded data of each client i; on or after time t*, calculate an updated global gradient from the global gradient calculated at each epoch number t. [0811] Example PSM1 relates to a method to perform federated machine learning training at an apparatus of an edge computing node in an edge computing network, the method including: processing client reports from a plurality of clients of the edge computing network; selecting a candidate set of clients from the plurality of clients for an epoch of the federated machine learning training; sending a global model to the candidate set of clients; and performing the federated machine learning training on the candidate set of clients. [0812] Example PSM2 includes the subject matter of Example PSM1, further comprising: disseminating, to a plurality of clients of the edge computing network, a target data distribution at the edge computing node for federated machine learning training, wherein each of the respective reports is based on a divergence between a local data distribution of a respective one of the clients and the target data distribution; assigning a respective weight to each respective divergence based on a size of the divergence, with higher divergences having higher weights; and selecting the candidate set including using a round robin approach based on the weights. [0813] Example PSM3 includes the subject matter of Example PSM2, wherein the divergence corresponding to each client is based on one of a Kullback-Leibler divergence or a distance between a local data distribution of said each client and the target data distribution. [0814] Example PSM4 includes the subject matter of any one of Examples PSM2-PSM3, wherein selecting a candidate set of clients is based on a determination as to whether the data distribution of each client of the candidate set shows non-independent and identically distributed data (non-i.i.d.).
[0815] Example PSM5 includes the subject matter of Example PSM1, further comprising: processing weighted loss information from each of the clients; determining a probability distribution q for each of the clients based on the weighted loss information from corresponding ones of said each of the clients, wherein q is further based on an amount of data at said each of the clients, and a weight matrix for the federated machine learning training for said each of the clients for the global epoch; and selecting the candidate set including selecting a number of clients with highest probability distribution q as the candidate set. [0816] Example PSM6 includes the subject matter of Example PSM1, wherein the reports include at least one of information based on training losses of respective ones of said clients for an epoch of the training, or information based on gradients of respective ones of said clients with respect to a pre-activation output of respective ones of said clients; and the method further comprises: rank ordering the clients based on one of their training losses or their gradients; and selecting the candidate set including selecting a number of clients with highest training losses or highest gradients as the candidate set. [0817] Example PSM7 includes the subject matter of Example PSM6, wherein selecting a number of clients with highest training losses or highest gradients as the candidate set includes selecting a first number of clients with highest training losses or highest gradients as an intermediate set, and selecting a second number of clients from the intermediate set based on respective upload times of the first number of clients, the second number of clients corresponding to the candidate set. [0818] Example PSM8 includes the subject matter of Example PSM1, further comprising: grouping clients from a plurality of available clients into respective sets of clients based on compute capabilities and communication capabilities of each of the available clients; and selecting a candidate set of clients based on a round robin approach, or based at least on one of a data distribution of each client of the candidate set or a determination that a global model of the machine learning training at the edge computing node has reached a minimum accuracy threshold. [0819] Example PSM9 includes the subject matter of Example PSM8, wherein the compute capabilities include a compute rate and the communication capabilities include an uplink communication time to the edge computing node. [0820] Example PSM10 includes the subject matter of Example PSM1, further comprising disseminating, to the plurality of clients of the edge computing network, a global model corresponding to an epoch of a federated machine learning training. [0821] Example PSM11 includes the subject matter of any one of Examples PSM1-PSM10, further comprising: obtaining coded training data from each of the selected clients; and performing machine learning training on the coded training data. 
[0822] Example PSM12 includes the subject matter of Example PSM11, further comprising: determining a coding redundancy value to use in the machine learning training on the coded training data based on maximum coding redundancy values from each of the clients, the maximum coding redundancy values indicating a maximum number of coded training data points a respective client may provide; determining an epoch time and a number of data points to be processed at each client during each round of federated machine learning based on the selected coding redundancy value; and sending the determined coding redundancy value, epoch time, and number of data points to be processed at each client to the selected clients. [0823] Example PSM13 includes the subject matter of Example PSM1, further comprising, for a number of cycles E’ of epoch number t: discarding an initial L clients from M clients sampled from N available clients; selecting a subsequent L clients from remaining clients N-M or N-M+initial L clients; determining load balancing parameters for the subsequent L clients; receiving coded data from each client i of the subsequent L clients; after the number of cycles E’, calculating a global weight w(t+1) corresponding to epoch number t+1 based on gradient gi(t) for each client i of the K clients at epoch number t and further based on gradient gi(t+1) for each client i of the K clients at epoch number t+1, wherein gi(t) is calculated using data points based on the load balancing parameters, and gi(t+1) is calculated using gi(t) and the coded data. [0824] Example PSM14 includes the subject matter of Example PSM1, further comprising: as a one stage operation, receiving a number of coded training data points from each client i of N available clients or of L clients, wherein L≤N, the number of coded data points based on li, li* and t*, wherein li corresponds to the number of raw datapoints at client i, li* corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric, and t* corresponds to a deadline time duration representing a smallest epoch time window within which the apparatus and the client i can jointly calculate a gradient; for every epoch number t and until time t*: receiving local gradients gi(t) from li*(t*) raw data points, wherein li*(t*) corresponds to an optimal load representing a number of raw datapoints used by client i to maximize its average return metric during time t*; calculating a global gradient based on a per client gradient from the coded data of each client i; on or after time t*, calculating an updated global gradient from the global gradient calculated at each epoch number t. [0825] Additional Examples: [0826] Example L1 includes an apparatus comprising means to perform one or more elements of a method of any one of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14.
[0827] Example L2 includes one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method of any one of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14. [0828] Example L3 includes a machine-readable storage including machine-readable instructions which, when executed, implement the method of any one of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14. [0829] Example L4 includes an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method of any one of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14. [0830] Example L5 includes the apparatus of any one of claims ***, further including a transceiver coupled to the processor, and one or more antennas coupled to the transceiver, the antennas to send and receive wireless communications from other edge computing nodes in the edge computing network. [0831] Example L6 includes the apparatus of claim L5, further including a system memory coupled to the processor, the system memory to store instructions, the processor to execute the instructions to perform the training. [0832] Example L7 includes an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14, or any other method or process described herein. [0833] Example L8 includes a method, technique, or process as described in or related to any of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14, or portions or parts thereof. [0834] Example L9 includes an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, technique, or process as described in or related to any of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14, or portions thereof. [0835] Example L10 includes a signal as described in or related to any of the examples herein, or portions or parts thereof.
[0836] Example L11 includes a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of the examples herein, or portions or parts thereof, or otherwise described in the present disclosure. [0837] Example L12 includes a signal encoded with data as described in or related to any of the examples herein, or portions or parts thereof, or otherwise described in the present disclosure. [0838] Example L13 includes a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of the examples herein, or portions or parts thereof, or otherwise described in the present disclosure. [0839] Example L14 includes an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, technique, or process as described in or related to any of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14, or portions thereof. [0840] Example L15 includes a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, technique, or process as described in or related to any of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14, or portions thereof. [0841] Example L15.5 includes a message or communication between a first edge computing node and a second edge computing node, or between a client computing node and a central server, substantially as shown and described herein, wherein the message or communication is to be transmitted/received on an application programming interface (API), or, especially when used to enhance a wireless network, embedded in L1/L2/L3 layers of the protocol stack depending on the application. [0842] Example L15.6 includes a message or communication between a first edge computing node and a second edge computing node, or between a client computing node and a central server, substantially as shown and described herein, wherein the message or communication is to be transmitted/received on a Physical (PHY) layer, or on a Medium Access Control (MAC) layer as set forth in wireless standards, such as the 802.11 family of standards, or the Third Generation Partnership Project (3GPP) Long Term Evolution (LTE) or New Radio (NR or 5G) family of technical specifications. [0843] Example L15.7 includes a message or communication between a first edge computing node and a second edge computing node, or between a client computing node and a central server, substantially as shown and described herein, wherein the message or communication involves a parameter exchange as described above to allow an estimation of wireless spectrum efficiency, and is to be transmitted/received on a L1 layer of a protocol stack.
[0844] Example L15.8 includes a message or communication between a first edge computing node and a second edge computing node, or between a client computing node and a central server, substantially as shown and described herein, wherein the message or communication involves a prediction of edge computing node sleep patterns, and is to be transmitted/received on an L2 layer of a protocol stack.

[0845] Example L15.9 includes a message or communication between a first edge computing node and a second edge computing node, or between a client computing node and a central server, substantially as shown and described herein, wherein the message or communication is to be transmitted or received on a transport network layer, an Internet Protocol (IP) transport layer, a General Packet Radio Service Tunneling Protocol User Plane (GTP-U) layer, a User Datagram Protocol (UDP) layer, an IP layer, a layer of a control plane protocol stack (e.g., NAS, RRC, PDCP, RLC, MAC, and PHY), or a layer of a user plane protocol stack (e.g., SDAP, PDCP, RLC, MAC, and PHY).

[0846] Example L16 includes a signal in a wireless network as shown and described herein.

[0847] Example L17 includes a method of communicating in a wireless network as shown and described herein.

[0848] Example L18 includes a system for providing wireless communication as shown and described herein.

[0849] Example NODE1 includes an edge compute node comprising the apparatus of any one of claims AA1-AA10, BA1-BA7, CA1-CA8, DA1-DA10, EA1-EA8, EAA1-EAA7, FA1-FA11, FAA1-FAA7, GA1-GA4, GAA1-GAA3, GAAA1-GAAA3, GAAAA1-GAAAA3, HA1-HA8, HAA1-HAA13, HAAA1-HAAA8, IA1-IA11, JA1-JA10, KA1-KA7, PCA1-PCA16 and PSA1-PSA14, and further comprising a transceiver coupled to the processor, and one or more antennas coupled to the transceiver, the antennas to send and receive wireless communications from other edge computing nodes in the edge computing network.

[0850] Example NODE2 includes the subject matter of Example NODE1, further comprising a system memory coupled to the processor, the system memory to store instructions, the processor to execute the instructions to perform the training.

[0851] Example NODE3 includes the subject matter of Example NODE1 or NODE2, wherein the apparatus is the apparatus of any one of Examples EA1-EA8, EAA1-EAA7, FA1-FA11, GA1-GA4, GAA1-GAA3, HA1-HA8, HAAA1-HAAA8, JA1-JA10, KA1-KA7, and PSA1-PSA14, and the edge compute node further comprises: a network interface card (NIC) to provide the apparatus wired access to a core network; and a housing that encloses the apparatus, the transceiver, and the NIC.

[0852] Example NODE4 includes the subject matter of Example NODE3, wherein the housing further includes power circuitry to provide power to the apparatus.

[0853] Example NODE5 includes the subject matter of any one of Examples NODE3-NODE4, wherein the housing further includes mounting hardware to enable attachment of the housing to another structure.

[0854] Example NODE6 includes the subject matter of any one of Examples NODE3-NODE5, wherein the housing further includes at least one input device.

[0855] Example NODE7 includes the subject matter of any one of Examples NODE3-NODE6, wherein the housing further includes at least one output device.
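To make the sleep-pattern message of Example L15.8 more concrete, the following Python sketch shows one hypothetical way a sleep probability could be estimated from recent awake/asleep observations and packaged as a payload suitable for carriage in an L2 frame. The predictor, the field names, and the JSON encoding are assumptions made purely for illustration; the disclosure does not define how the prediction is computed or formatted.

# Illustrative sketch only: a toy sleep-pattern predictor and message builder.
# The estimation rule and payload fields are assumptions, not part of the
# claimed subject matter.
import json
from collections import Counter
from typing import Sequence

def predict_sleep_probability(history: Sequence[int], window: int = 8) -> float:
    """Estimate the probability the node is asleep in the next interval as the
    fraction of 'asleep' observations (1 = asleep, 0 = awake) in a recent window."""
    recent = list(history)[-window:]
    if not recent:
        return 0.0
    return Counter(recent)[1] / len(recent)

def build_sleep_prediction_message(node_id: str, history: Sequence[int]) -> bytes:
    """Package the prediction as a payload that could be carried in an L2 frame."""
    body = {"node_id": node_id, "p_sleep_next": predict_sleep_probability(history)}
    return json.dumps(body).encode("utf-8")

if __name__ == "__main__":
    observed = [0, 0, 1, 1, 1, 0, 1, 1, 1, 1]  # recent awake/asleep observations
    print(build_sleep_prediction_message("node-42", observed))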
[0856] An example implementation is an edge computing system, including respective edge processing devices and nodes to invoke or perform the operations of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14, or other subject matter described herein.

[0857] Another example implementation is a client endpoint node, operable to invoke or perform the operations of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14, or other subject matter described herein.

[0858] Another example implementation is an aggregation node, network hub node, gateway node, or core data processing node, within or coupled to an edge computing system, operable to invoke or perform the operations of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14, or other subject matter described herein.

[0859] Another example implementation is an access point, base station, road-side unit, street-side unit, or on-premise unit, within or coupled to an edge computing system, operable to invoke or perform the operations of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14, or other subject matter described herein.

[0860] Another example implementation is an edge provisioning node, service orchestration node, application orchestration node, or multi-tenant management node, within or coupled to an edge computing system, operable to invoke or perform the operations of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14, or other subject matter described herein.

[0861] Another example implementation is an edge node operating an edge provisioning service, application or service orchestration service, virtual machine deployment, container deployment, function deployment, and compute management, within or coupled to an edge computing system, operable to invoke or perform the operations of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14, or other subject matter described herein.

[0862] Another example implementation is an edge computing system operable as an edge mesh, as an edge mesh with side car loading, or with mesh-to-mesh communications, operable to invoke or perform the operations of Examples AM1-AM9, BM1-BM7, CM1-CM8, DM1-DM9, EM1-EM8, EMM1-EMM7, FM1-FM11, FMM1-FMM7, GM1-GM4, GMM1-GMM3, GMMM1-GMMM3, GMMMM1-GMMMM3, HM1-HM7, HMM1-HMM12, HMMM1-HMMM7, IM1-IM10, JM1-JM10, KM1-KM7, PCM1-PCM15 and PSM1-PSM14, or other subject matter described herein.
[0863] Another example implementation is the apparatus of any one of claims AA1-AA10, BA1-BA7, CA1-CA8, DA1-DA10, EA1-EA8, EAA1-EAA7, FA1-FA11, FAA1-FAA7, GA1-GA4, GAA1-GAA3, GAAA1-GAAA3, GAAAA1-GAAAA3, HAA1-HAA13, IA1-IA11, JA1-JA10, KA1-KA7, PCA1-PCA16 and PSA1-PSA14, further including a transceiver coupled to the processor, and one or more antennas coupled to the transceiver, the antennas to send wireless communications to and to receive wireless communications from other edge computing nodes in the edge computing network.

[0864] Another example includes an apparatus substantially as shown and described herein.

[0865] Another example includes a method substantially as shown and described herein.

[0866] Another example implementation is the apparatus of the Example of the paragraph above, further including a system memory coupled to the processor, the system memory to store instructions, the processor to execute the instructions to perform the training.

[0867] Another example implementation is an edge computing system including aspects of network functions, acceleration functions, acceleration hardware, storage hardware, or computation hardware resources, operable to invoke or perform the use cases discussed herein, with use of the examples herein, or other subject matter described herein.

[0868] Another example implementation is an edge computing system adapted for supporting client mobility, vehicle-to-vehicle (V2V), vehicle-to-everything (V2X), or vehicle-to-infrastructure (V2I) scenarios, and optionally operating according to ETSI MEC specifications, operable to invoke or perform the use cases discussed herein, with use of the examples herein, or other subject matter described herein.

[0869] Another example implementation is an edge computing system adapted for mobile wireless communications, including configurations according to 3GPP 4G/LTE or 5G network capabilities, operable to invoke or perform the use cases discussed herein, with use of the Examples above, or other subject matter described herein.

[0870] Any of the above-described Examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. Aspects described herein can also implement a hierarchical application of the scheme, for example by introducing a hierarchical prioritization of usage for different types of users (e.g., low/medium/high priority), based on prioritized access to the spectrum (e.g., with highest priority given to tier-1 users, followed by tier-2 users, then tier-3 users, and so on); a minimal illustrative sketch of such tiered access is provided after the concluding remarks below. Some of the features in the present disclosure are defined for network elements (or network equipment) such as Access Points (APs), eNBs, gNBs, core network elements (or network functions), application servers, application functions, etc. Any embodiment discussed herein as being performed by a network element may additionally or alternatively be performed by a UE, or the UE may take the role of the network element (e.g., some or all features defined for network equipment may be implemented by a UE).

[0871] Although these implementations have been described with reference to specific exemplary aspects, it will be evident that various modifications and changes may be made to these aspects without departing from the broader scope of the present disclosure.
Many of the arrangements and processes described herein can be used in combination or in parallel implementations to provide greater bandwidth/throughput and to support edge services selections that can be made available to the edge systems being serviced. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration, and not of limitation, specific aspects in which the subject matter may be practiced. The aspects illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other aspects may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various aspects is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.

[0872] Such aspects of the inventive subject matter may be referred to herein, individually and/or collectively, merely for convenience and without intending to voluntarily limit the scope of this application to any single aspect or inventive concept if more than one is in fact disclosed. Thus, although specific aspects have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific aspects shown. This disclosure is intended to cover any and all adaptations or variations of various aspects. Combinations of the above aspects and other aspects not specifically described herein will be apparent to those of skill in the art upon reviewing the above description.
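As referenced above in connection with the hierarchical prioritization of spectrum usage, the following is a minimal Python sketch of tiered access to a shared spectrum resource. The request structure (user identifier, tier, requested bandwidth) and the simple sort-by-tier grant policy are illustrative assumptions only and are not part of the claimed subject matter.

# Illustrative sketch only: grant spectrum to requests in tier order
# (1 = highest priority) until the available bandwidth is exhausted.
from dataclasses import dataclass

@dataclass
class SpectrumRequest:
    user_id: str
    tier: int            # 1 = highest priority, larger numbers = lower priority
    requested_hz: float  # bandwidth requested, in Hz

def grant_spectrum(requests: list[SpectrumRequest], available_hz: float) -> dict[str, float]:
    """Grant bandwidth tier by tier until the spectrum budget is used up."""
    grants: dict[str, float] = {}
    for req in sorted(requests, key=lambda r: r.tier):
        if available_hz <= 0:
            break
        granted = min(req.requested_hz, available_hz)
        grants[req.user_id] = granted
        available_hz -= granted
    return grants

if __name__ == "__main__":
    demo = [SpectrumRequest("ue-a", tier=2, requested_hz=10e6),
            SpectrumRequest("ue-b", tier=1, requested_hz=15e6),
            SpectrumRequest("ue-c", tier=3, requested_hz=10e6)]
    print(grant_spectrum(demo, available_hz=20e6))  # tier-1 request is served first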