Title:
CONCEALED LEARNING
Document Type and Number:
WIPO Patent Application WO/2023/146461
Kind Code:
A1
Abstract:
A method (1300) by a first radio node operating as a master trainer for concealed learning includes transmitting (1302), to a second radio node operating as a local trainer, one or more software packages for performing local training of and/or generating a local model. The one or more software packages are transmitted in a concealed format that prevents the second radio node from decrypting the one or more software packages. The first radio node receives (1304) the local model from the second radio node. The local model is received in a concealed format that only the first radio node operating as the master trainer is able to decrypt. Based on at least the local model received from the second radio node, the first radio node generates (1306) a master model and transmits (1308) the master model to the second radio node.

Inventors:
BRUHN PHILIPP (DE)
ROELAND DINAND (SE)
HALL GÖRAN (SE)
Application Number:
PCT/SE2023/050074
Publication Date:
August 03, 2023
Filing Date:
January 27, 2023
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
H04L9/40; G06N3/098; G06N20/00
Domestic Patent References:
WO2021029797A1, 2021-02-18
WO2021056043A1, 2021-04-01
WO2018111270A1, 2018-06-21
Foreign References:
US20210312336A1, 2021-10-07
Other References:
BO LIU ET AL: "When Machine Learning Meets Privacy: A Survey and Outlook", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 24 November 2020 (2020-11-24), XP081821133
QUOC-VIET PHAM ET AL: "A Survey of Multi-Access Edge Computing in 5G and Beyond: Fundamentals, Technology Integration, and State-of-the-Art", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 20 June 2019 (2019-06-20), XP081378326
H. B. MCMAHAN ET AL.: "Communication-Efficient Learning of Deep Networks from Decentralized Data", AISTATS, 2017
B. MCMAHAN ET AL., FEDERATED LEARNING OF DEEP NETWORKS USING MODEL AVERAGING
Attorney, Agent or Firm:
BOU FAICAL, Roger (SE)
Claims:
CLAIMS

1. A method (1300) by a first radio node operating as a master trainer (120) for concealed learning, the method comprising: transmitting (1302), to a second radio node operating as a local trainer (105), one or more software packages for performing local training of and/or generating a local model, the one or more software packages being transmitted in a concealed format that prevents the second radio node from decrypting the one or more software packages; receiving (1304) the local model from the second radio node, wherein the local model is received in a concealed format that only the first radio node operating as the master trainer is able to decrypt; and based on at least the local model received from the second radio node, generating (1306) a master model; and transmitting (1308) the master model to the second radio node.

2. The method of Claim 1, wherein the concealed format is an encrypted format.

3. The method of any one of Claims 1 to 2, wherein the concealed format prevents any third party from decrypting the one or more software packages.

4. The method of any one of Claims 1 to 3, wherein the one or more software packages comprise one or more Open Container Initiative, OCI, packages.

5. The method of any one of Claims 1 to 4, wherein transmitting the master model to the second radio node comprises: transmitting a version of the software package that is updated based on the master model; or transmitting a portion of the software package that is updated based on the master model.

6. The method of any one of Claims 1 to 5, wherein the one or more software packages comprise: at least one policy for performing the local training of and/or generating the local model; and/or at least one policy for performing data pre-processing for generating at least one feature for the local training of and/or generating of the local model.

7. The method of any one of Claims 1 to 5, comprising transmitting information to the second radio node operating as the local trainer, the information transmitted in a concealed format that prevents the second radio node from decrypting the information, the information comprising: at least one training policy for performing the local training of and/or generating the local model; and/or at least one data pre-processing policy for generating at least one feature for the local training of and/or generating of the local model.

8. The method of Claim 7, wherein the information is communicated between the one or more software packages and a platform operating on the one or more other radio nodes via at least one Application Programming Interface.

9. The method of any one of Claims 7 to 8, wherein the platform is unable to decrypt the one or more software packages and/or the information.

10. The method of any one of Claims 7 to 9, wherein the information comprises the master model.

11. The method of any one of Claims 7 to 10, wherein the at least one training policy indicates at least one of: a batch size; a learning rate; an optimization algorithm for minimizing a cost function; a regularization type or method; at least one parameter associated with a regularization type or method; and a maximum depth of a decision tree used by the local model.

12. The method of any one of Claims 7 to 11, wherein the at least one data pre-processing policy indicates at least one of: imputation of missing values, feature scaling and/or encoding, generation of polynomial features, SVD-based transformation, and discretization and/or quantization of continuous values into discrete features.

13. The method of any one of Claims 7 to 12, wherein the information is based on the master model.

14. The method of any one of Claims 1 to 13, wherein receiving the local model comprises receiving an updated version of the one or more software packages that includes the local model.

15. The method of any one of Claims 1 to 14, further comprising: transmitting, to a third radio node operating as an additional local trainer, the one or more software packages for performing local training of and/or generating a local model, the one or more software packages transmitted in the concealed format; receiving, from the third radio node operating as the additional local trainer, a second local model, wherein the second local model is received in the concealed format that only the first radio node operating as the master trainer is able to decrypt; and transmitting, to the third radio node operating as the additional local trainer, the master model, and wherein the master model is generated based on the first local model and the second local model.

16. The method of any one of Claims 14 to 15, wherein the second radio node and the third radio node are located at different locations.

17. The method of any one of Claims 1 to 16, wherein at least one of the software package, the first local model, and the master model comprises an Artificial Intelligence, AI, model.

18. The method of any one of Claims 1 to 17, wherein the first radio node operating as the master trainer is associated with at least one of: an Operation & Maintenance node or system, a Service Management & Orchestration, SMO, node or system, a Non-Real Time RAN Intelligent Controller, Non-RT RIC, a Near Real-Time RAN Intelligent Controller, Near-RT RIC, a Core Network node, a gNodeB, and a gNodeB-Centralized Unit.

19. A method (1400) by a second radio node operating as a local trainer (105) for concealed learning, the method comprising: receiving (1402), from a first radio node operating as a master trainer (120), one or more software packages for performing local training of and/or generating a local model, the one or more software packages received in a concealed format that prevents the second radio node from decrypting the one or more software packages; using (1404) the one or more software packages to perform the local training of and/or generate the local model; transmitting (1406), to the first radio node, the local model, wherein the local model is transmitted in a concealed format that the master trainer is able to decrypt; and receiving (1408), from the first radio node, a master model that is generated based on the local model.

20. The method of Claim 19, wherein the concealed format is an encrypted format.

21. The method of any one of Claims 19 to 20, wherein the concealed format prevents any third party from decrypting the one or more software packages.

22. The method of any one of Claims 19 to 21, wherein the one or more software packages comprise one or more Open Container Initiative, OCI, packages.

23. The method of any one of Claims 19 to 22, wherein receiving the master model from the first radio node comprises: receiving a version of the software package that is updated based on the master model; or receiving a portion of the software package that is updated based on the master model.

24. The method of any one of Claims 19 to 23, wherein the one or more software packages comprise: at least one policy for performing the local training of and/or generating of the local model; and/or at least one policy for performing data pre-processing for generating at least one feature for the local training of and/or generating of the local model.

25. The method of any one of Claims 19 to 24, comprising receiving information from the first radio node operating as the master trainer, the information received in a concealed format that prevents the second radio node from decrypting the information, the information comprising: at least one training policy for performing the local training of and/or generating the local model; and/or at least one data pre-processing policy for generating at least one feature for the local training of and/or generating of the local model.

26. The method of Claim 25, wherein the information is communicated between the one or more software packages and a platform operating on the second radio node via at least one Application Programming Interface.

27. The method of any one of Claims 25 to 26, wherein the platform is unable to decrypt the one or more software packages and/or the information.

28. The method of any one of Claims 25 to 27, wherein the information comprises the master model.

29. The method of any one of Claims 25 to 28, wherein the at least one training policy indicates at least one of: a batch size; a learning rate; an optimization algorithm for minimizing a cost function; a regularization type or method; at least one parameter associated with a regularization type or method; and a maximum depth of a decision tree used by the local model.

30. The method of any one of Claims 25 to 29, wherein the at least one data pre-processing policy indicates at least one of: imputation of missing values, feature scaling and/or encoding, generation of polynomial features, SVD-based transformation, and discretization and/or quantization of continuous values into discrete features.

31. The method of any one of Claims 25 to 30, wherein the information is based on the master model.

32. The method of any one of Claims 19 to 31, wherein transmitting the local model comprises transmitting an updated version of the one or more software packages that includes the local model.

33. The method of any one of Claims 19 to 32, wherein at least one of the one or more software packages, the local model, and the master model comprises an Artificial Intelligence, AI, model.

34. The method of any one of Claims 19 to 33, wherein the second radio node operating as the local trainer is associated with at least one of: a gNodeB, a user equipment, UE, and a Near Real-Time RAN Intelligent Controller, Near-RT RIC.

35. A first radio node operating as a master trainer (120) for concealed learning, the first radio node adapted to: transmit (1302), to a second radio node operating as a local trainer, one or more software packages for performing local training of and/or generating a local model, the one or more software packages being transmitted in a concealed format that prevents the second radio node from decrypting the one or more software packages; receive (1304) the local model from the second radio node, wherein the local model is received in a concealed format that only the first radio node operating as the master trainer is able to decrypt; and based on at least the local model received from the second radio node, generate (1306) a master model; and transmit (1308) the master model to the second radio node.

36. The first radio node of Claim 35, wherein the concealed format is an encrypted format.

37. The first radio node of any one of Claims 35 to 36, wherein the concealed format prevents any third party from decrypting the one or more software packages.

38. The first radio node of any one of Claims 35 to 37, wherein the one or more software packages comprise one or more Open Container Initiative, OCI, packages.

39. The first radio node of any one of Claims 35 to 38, wherein when transmitting the master model to the second radio node, the first radio node is adapted to: transmit a version of the software package that is updated based on the master model; or transmit a portion of the software package that is updated based on the master model.

40. The first radio node of any one of Claims 35 to 39, wherein the one or more software packages comprise: at least one policy for performing the local training of and/or generating of the local model; and/or at least one policy for performing data pre-processing for generating at least one feature for the local training of and/or generating of the local model.

41. The first radio node of any one of Claims 35 to 39, adapted to transmit information to the second radio node operating as the local trainer, the information transmitted in a concealed format that prevents the second radio node from decrypting the information, the information comprising: at least one training policy for performing the local training of and/or generating the local model; and/or at least one data pre-processing policy for generating at least one feature for the local training of and/or generating of the local model.

42. The first radio node of Claim 41, wherein the information is communicated between the one or more software packages and a platform operating on the one or more other radio nodes via at least one Application Programming Interface.

43. The first radio node of any one of Claims 41 to 42, wherein the platform is unable to decrypt the one or more software packages and/or the information.

44. The first radio node of any one of Claims 41 to 43, wherein the information comprises the master model.

45. The first radio node of any one of Claims 41 to 44, wherein the at least one training policy indicates at least one of: a batch size; a learning rate; an optimization algorithm for minimizing a cost function; a regularization type or method; at least one parameter associated with a regularization type or method; and a maximum depth of a decision tree used by the local model.

46. The first radio node of any one of Claims 41 to 45, wherein the at least one data pre-processing policy indicates at least one of: imputation of missing values, feature scaling and/or encoding, generation of polynomial features, SVD-based transformation, and discretization and/or quantization of continuous values into discrete features.

47. The first radio node of any one of Claims 41 to 46, wherein the information is based on the master model.

48. The first radio node of any one of Claims 41 to 47, wherein when receiving the local model the first radio node is adapted to receive an updated version of the one or more software packages that includes the local model or a portion of the software package that is updated based on the master model.

49. The first radio node of any one of Claims 35 to 48, wherein the first radio node is adapted to: transmit, to a third radio node operating as an additional local trainer, the one or more software packages for performing local training of and/or generating a local model, the one or more software packages transmitted in the concealed format; receive, from the third radio node operating as the additional local trainer, a second local model, wherein the second local model is received in the concealed format that only the first radio node operating as the master trainer is able to decrypt; and transmit, to the third radio node operating as the additional local trainer, the master model, and wherein the master model is generated based on the first local model and the second local model.

50. The first radio node of any one of Claims 48 to 49, wherein the second radio node and the third radio node are located at different locations.

51. The first radio node of any one of Claims 35 to 50, wherein at least one of the software package, the first local model, and the master model comprises an Artificial Intelligence, AI, model.

52. The first radio node of any one of Claims 35 to 51, wherein the first radio node operating as the master trainer is associated with at least one of: an Operation & Maintenance node or system, a Service Management & Orchestration, SMO, node or system, a Non-Real Time RAN Intelligent Controller, Non-RT RIC, a Near Real-Time RAN Intelligent Controller, Near-RT RIC, a Core Network node, a gNodeB, and a gNodeB-Centralized Unit.

53. A second radio node operating as a local trainer (105) for concealed learning, the second radio node adapted to: receive (1402), from a first radio node operating as a master trainer (120), one or more software packages for performing local training of and/or generating a local model, the one or more software packages received in a concealed format that prevents the second radio node from decrypting the one or more software packages; use (1404) the one or more software packages to perform the local training of and/or the generating of the local model; transmit (1406), to the first radio node, the local model, wherein the local model is transmitted in a concealed format that the master trainer is able to decrypt; and receive (1408), from the first radio node, a master model that is generated based on the local model.

54. The second radio node of Claim 53, wherein the concealed format is an encrypted format.

55. The second radio node of any one of Claims 53 to 54, wherein the concealed format prevents any third party from decrypting the one or more software packages.

56. The second radio node of any one of Claims 53 to 55, wherein the one or more software packages comprise one or more Open Container Initiative, OCI, packages.

57. The second radio node of any one of Claims 53 to 56, wherein when receiving the master model from the first radio node, the second radio node is adapted to: receive a version of the software package that is updated based on the master model; or receive a portion of the software package that is updated based on the master model.

58. The second radio node of any one of Claims 53 to 57, wherein the one or more software packages comprise: at least one policy for performing the local training of and/or generating the local model; and/or at least one policy for performing data pre-processing for generating at least one feature for the local training of and/or generating of the local model.

59. The second radio node of any one of Claims 53 to 58, adapted to receive information from the first radio node operating as the master trainer, the information received in a concealed format that prevents the second radio node from decrypting the information, the information comprising: at least one training policy for performing the local training of and/or generating the local model; and/or at least one data pre-processing policy for generating at least one feature for the local training of and/or generating of the local model.

60. The second radio node of Claim 59, wherein the information is communicated between the one or more software packages and a platform operating on the second radio node via at least one Application Programming Interface.

61. The second radio node of any one of Claims 59 to 60, wherein the platform is unable to decrypt the one or more software packages and/or the information.

62. The second radio node of any one of Claims 59 to 61, wherein the information comprises the master model.

63. The second radio node of any one of Claims 59 to 62, wherein the at least one training policy indicates at least one of: a batch size; a learning rate; an optimization algorithm for minimizing a cost function; a regularization type or method; at least one parameter associated with a regularization type or method; and a maximum depth of a decision tree used by the local model.

64. The second radio node of any one of Claims 59 to 63, wherein the at least one data pre-processing policy indicates at least one of: imputation of missing values, feature scaling and/or encoding, generation of polynomial features, SVD-based transformation, and discretization and/or quantization of continuous values into discrete features.

65. The second radio node of any one of Claims 59 to 64, wherein the information is based on the master model.

66. The second radio node of any one of Claims 53 to 65, wherein when transmitting the local model, the second radio node is adapted to transmit an updated version of the one or more software packages that includes the local model or a portion of the software package that is updated based on the master model.

67. The second radio node of any one of Claims 53 to 66, wherein at least one of the one or more software packages, the local model, and the master model comprises an Artificial Intelligence, AI, model.

68. The second radio node of any one of Claims 53 to 67, wherein the second radio node operating as the local trainer is associated with at least one of: a gNodeB, a user equipment, UE, and a Near Real-Time RAN Intelligent Controller, Near-RT RIC.

Description:
CONCEALED LEARNING

TECHNICAL FIELD

The present disclosure relates, in general, to wireless communications and, more particularly, systems and methods for concealed learning.

BACKGROUND

In the context of mobile wireless networks, machine learning (ML) may be used in a large scale, distributed, decentralized, cloud/virtualized environment where multiple stakeholders are operating and where performance is important from day one. ML in mobile wireless networks is a highly complex issue where properties such as robustness, scalability, latency, and many other metrics need to be considered.

Current approaches to geographically distributed data are often centralized. In such environments, data privacy is one of the key issues, and server security is vital since big datasets with sensitive information are located on a central server. In order to limit the need for sending and storing sensitive data, a learning approach called Federated Learning (FL) is used. FL is a technique that allows local entities such as, for example, a user equipment (UE), a gNodeB (gNB), or other network nodes, to collectively use the advantages of shared Artificial Intelligence (AI)/ML models trained by multiple local entities, which may be referred to as local learners, without having to exchange sensitive raw data and without storing such sensitive raw data in a central location. In a system adopting FL, each local learner has its own (local) training dataset that does not need to be uploaded to the central server. Instead, each local learner computes an update to the latest global model, and only this update is communicated to the server. See, H. B. McMahan et al., “Communication-Efficient Learning of Deep Networks from Decentralized Data,” AISTATS, 2017. At the server, a central entity, referred to as master trainer, generates a single, average, model from all the local models. The new global model is then sent back to the local learners.

FIGURE 1 illustrates an overview of FL. As illustrated, the global model resides in the top node, and training is done in the access sites or clients, depending on the use case. FL enables users to leverage the benefits of shared AI/ML models trained from a large, distributed dataset without the need to share the data with a central entity. The principal goal of training an AI/ML model, such as a linear regression or logistic regression model, is to minimize a loss function. The AI/ML models used in FL may, for example, be neural networks. A neural network consists of a set of neurons or activations connected by certain rules. The input layer is responsible for receiving the input data. The rightmost layer is called the output layer and produces the output data. The hidden layers in the middle of the multi-layer neural network compute a function of the inputs and are not directly exposed outside of the network. Moreover, there is no connection between neurons in the same layer, and each neuron on the nth layer is connected to all neurons on the (n − 1)th layer. The output of the (n − 1)th layer is the input of the nth layer. Each inter-neuron connection has an associated multiplicative weight, and each neuron in a layer has an associated additive bias term.
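
Purely as an illustrative aid, and not as part of any claimed subject matter, the layer structure described above can be sketched as a small forward pass; the layer sizes, the tanh activation, and the random weights below are arbitrary assumptions.

```python
import numpy as np

def forward(x, weights, biases):
    """Propagate an input vector through a fully connected network.

    Each neuron on layer n is connected to all neurons on layer n-1,
    every connection has a multiplicative weight, and every neuron
    has an additive bias, as described above.
    """
    activation = x
    for W, b in zip(weights, biases):
        # Apply weights, bias, and an (illustrative) tanh activation per layer.
        activation = np.tanh(W @ activation + b)
    return activation  # output of the last (output) layer

# Example: 4 inputs, one hidden layer of 8 units, 2 outputs (arbitrary sizes).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((2, 8))]
biases = [np.zeros(8), np.zeros(2)]
print(forward(rng.standard_normal(4), weights, biases))
```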

FIGURE 2 illustrates an example of a neural network. Combining local Stochastic Gradient Descent (SGD) on each local learner (i.e., local trainer) with communication rounds to a central server (i.e., master trainer) that performs model averaging provides suitable performance while maintaining robustness. See, B. McMahan et al., Federated Learning of Deep Networks using Model Averaging. One method of combining multiple locally trained neural network models is to average the model parameters. This method has shown gains in comparison to using each model separately. See, H. B. McMahan et al., “Communication-Efficient Learning of Deep Networks from Decentralized Data,” AISTATS, 2017.
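
The model-averaging step referenced above can be illustrated with a minimal sketch, assuming each local model is represented as a list of NumPy parameter arrays and each local trainer optionally reports a sample count for weighting; this is a generic illustration of parameter averaging, not the specific implementation of the cited works.

```python
import numpy as np

def federated_average(local_models, sample_counts=None):
    """Combine locally trained models into a single master model by
    (optionally weighted) averaging of their parameters."""
    if sample_counts is None:
        sample_counts = [1] * len(local_models)
    total = float(sum(sample_counts))
    master = []
    for layer_params in zip(*local_models):  # iterate layer by layer
        averaged = sum(w * p for w, p in zip(sample_counts, layer_params)) / total
        master.append(averaged)
    return master

# Two local trainers with one parameter matrix each (toy example).
model_a = [np.array([[1.0, 2.0]])]
model_b = [np.array([[3.0, 6.0]])]
print(federated_average([model_a, model_b], sample_counts=[100, 300]))
```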

FL is not restricted to neural network models and SGD learning. Models may very well be of a different type, like a decision tree or a Markov chain.

FIGURE 3 illustrates a schematic overview of FL. As shown, local models are trained at multiple locations, which are represented as Location A and Location B in FIGURE 3, and sent to some other location, which is represented as Location C in FIGURE 3. At Location C, all local models are combined into a master model. The illustrated example of FIGURE 3 includes two local trainers. In general, however, there are one or more local trainers. The master model is sent back to local trainers and can be used for local inference as well as further local training.

Each training and data pre-processing function may be under the responsibility of a different organization or vendor. For example, vendor X may be responsible for data preprocessing and model training at location A, vendor Y may be responsible for the same at location B, and vendor Z may be responsible for master training at location C.

The master trainer controls the FL by policies for the local sites. At a minimum, these training policies must describe what architecture and model packaging the master trainer expects from the local trainers. The master trainer needs to have enough information to interpret the local models when building the master model. This is indicated in FIGURE 3 by the dashed lines from the master trainer to the local trainers. Optionally, the master trainer may also prescribe how the feature engineering such as, for example, data pre-processing, needs to be performed. This is indicated in FIGURE 3 by the dashed lines from the master trainer to the local data processing blocks performed by the local trainers. In an extreme case, the training and data pre-processing policies sent by the master trainer provide details for how the local training shall be done (this may include all the hyperparameters) and how the feature engineering shall be done. In practice, the provision of these details by the master trainer essentially demotes the local trainers to dumb entities, leaving all the intelligence at the master trainer.

There currently exist certain challenges, however. For example, FL is a very useful technology for certain use cases. As such, it is now being discussed in many standardization fora including 3GPP SA2, 3GPP SA5 and O-RAN. It is expected to come up in other fora as well, like the 3GPP RAN groups. However, when performing FL in a multi-organization setting, data, policies and the details of the AI/ML models of an organization are necessarily exposed to the other organizations.

SUMMARY

To address the foregoing problems with existing solutions, systems and methods are provided for concealed federated or non-federated learning between a master trainer and one or more local trainers, where the local pre-processing, the local training and the resulting model(s) do not need to be shared in a detectable and/or decodable way. In this manner, intellectual property (IP) is protected.

According to certain embodiments, a method by a first radio node operating as a master trainer for concealed learning, includes transmitting, to a second radio node operating as a local trainer, one or more software packages for performing local training of and/or generating a local model, the one or more software packages being transmitted in a concealed format that prevents the second radio node from decrypting the one or more software packages. The first radio node receives the local model from the second radio node in a concealed format that only the first radio node operating as the master trainer is able to decrypt. Based on at least the local model received from the second radio node, the first radio node generates a master model and transmits the master model to the second radio node.

According to certain embodiments, a first radio node operating as a master trainer for concealed learning is adapted to transmit, to a second radio node operating as a local trainer, one or more software packages for performing local training of and/or generating a local model, the one or more software packages being transmitted in a concealed format that prevents the second radio node from decrypting the one or more software packages. The first radio node receives the local model from the second radio node in a concealed format that only the first radio node operating as the master trainer is able to decrypt. Based on at least the local model received from the second radio node, the first radio node generates a master model and transmits the master model to the second radio node.

According to certain embodiments, a method by a second radio node operating as a local trainer for concealed learning includes receiving, from a first radio node operating as a master trainer, one or more software packages for performing local training of and/or generating a local model. The one or more software packages are received in a concealed format that prevents the second radio node from decrypting the one or more software packages. The second radio node uses the one or more software packages to perform the local training of and/or the generating of the local model. The second radio node transmits the local model to the first radio node in a concealed format that the master trainer is able to decrypt. The second radio node receives, from the first radio node, a master model that is generated based on the local model.

According to certain embodiments, a second radio node operating as a local trainer for concealed learning is adapted to receive, from a first radio node operating as a master trainer, one or more software packages for performing local training of and/or generating a local model. The one or more software packages are received in a concealed format that prevents the second radio node from decrypting the one or more software packages. The second radio node is adapted to use the one or more software packages to perform the local training of and/or the generating of the local model. The second radio node is adapted to transmit the local model to the first radio node in a concealed format that the master trainer is able to decrypt. The second radio node is adapted to receive, from the first radio node, a master model that is generated based on the local model.

Certain embodiments of the present disclosure may provide one or more technical advantages. For example, a technical advantage may be that certain embodiments allow a first radio node (e.g., a master trainer located at a master site) to provide a model to a second radio node (e.g., a local trainer located at a local site) in a concealed form that allows the first radio node to maintain full control over the training. Certain embodiments eliminate the necessity for the first radio node to send detailed and complex training policies to the second radio node and/or eliminate the necessity for the first radio node to trust the second radio node that the training policies are applied fully.

As another example, a technical advantage may be that certain embodiments allow the first radio node to maintain full control over the data pre-processing. Accordingly, certain embodiments may eliminate the necessity for the first radio node to send detailed and complex data pre-processing policies to the second node and/or eliminate the necessity for the first radio node to trust the second radio node that the data pre-processing policies are applied fully.

As yet another example, a technical advantage may be that certain embodiments protect IP comprised within the model.

As yet another example, a technical advantage may be that certain embodiments protect IP comprised within the data pre-processing.

As still another example, a technical advantage may be that certain embodiments maintain the advantages of FL such as, for example, the protection of local raw data and possibly saving data transport resources, while at the same time enabling the second radio node to further train the model.

Another technical advantage may be that a first radio node can maintain full control over which data is used to train the local model at a second radio node, without having to send detailed and complex data detection policies to the second radio node and having to trust the second radio node that such data detection policies are applied fully. This enables the first radio node to ensure that certain (e.g., corrupted) data is not used to train the local model and, thus, does not adversely affect the performance of the model. In case of FL, this also ensures that such data does not adversely affect the performance of the master model either.

Other advantages may be readily apparent to one having skill in the art. Certain embodiments may have none, some, or all of the recited advantages.

BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the disclosed embodiments and their features and advantages, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:

FIGURE 1 illustrates an overview of FL;

FIGURE 2 illustrates an example of a neural network;

FIGURE 3 illustrates a schematic overview of FL;

FIGURE 4 illustrates a schematic overview of concealed FL, according to certain embodiments;

FIGURE 5 illustrates a schematic overview of concealed FL as applied in a multivendor context, according to certain embodiments;

FIGURE 6 illustrates example Application Programming Interfaces (APIs) between training and execution and a platform, according to certain embodiments;

FIGURE 7 illustrates example APIs between data pre-processing and a platform, according to certain embodiments;

FIGURE 8 illustrates an example communication system, according to certain embodiments;

FIGURE 9 illustrates an example UE, according to certain embodiments;

FIGURE 10 illustrates an example network node, according to certain embodiments;

FIGURE 11 illustrates a block diagram of a host, according to certain embodiments;

FIGURE 12 illustrates a virtualization environment in which functions implemented by some embodiments may be virtualized, according to certain embodiments;

FIGURE 13 illustrates a host communicating via a network node with a UE over a partially wireless connection, according to certain embodiments;

FIGURE 14 illustrates an example method by a first radio node operating as a master trainer, according to certain embodiments;

FIGURE 15 illustrates an example method by a second radio node operating as a local trainer, according to certain embodiments;

FIGURE 16 illustrates another example method by a first radio node operating as a master trainer, according to certain embodiments; and

FIGURE 17 illustrates another example method by a second radio node operating as a local trainer, according to certain embodiments.

DETAILED DESCRIPTION

Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.

As used herein, ‘node’ or ‘radio node’ can be a network node or a UE.

Examples of network nodes are NodeB, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB (eNB), gNodeB (gNB), Master eNB (MeNB), Secondary eNB (SeNB), integrated access backhaul (IAB) node, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), Central Unit (e.g. in a gNB), Distributed Unit (e.g. in a gNB), Baseband Unit, Centralized Baseband, C-RAN, access point (AP), transmission points, transmission nodes, Remote Radio Unit (RRU), Remote Radio Head (RRH), nodes in distributed antenna system (DAS), core network node (e.g. Mobile Switching Center (MSC), Mobility Management Entity (MME), etc.), Operations & Maintenance (O&M) node or system, Operations Support System (OSS), Self Organizing Network (SON), positioning node (e.g. E-SMLC), etc.

Another example of a node or radio node is user equipment (UE), which is a non-limiting term and refers to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system. Examples of UE are target device, device to device (D2D) UE, vehicle to vehicle (V2V) UE, machine type UE, MTC UE or UE capable of machine to machine (M2M) communication, Personal Digital Assistant (PDA), tablet, mobile terminal, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), Universal Serial Bus (USB) dongles, etc.

The term radio access technology (RAT), may refer to any RAT such as, for example, Universal Terrestrial Radio Access Network (UTRA), Evolved Universal Terrestrial Radio Access Network (E-UTRA), narrow band internet of things (NB-IoT), WiFi, Bluetooth, next generation RAT, NR, 4G, 5G, etc. Any of the equipment denoted by the terms node, network node or radio network node may be capable of supporting a single or multiple RATs.

According to certain embodiments described herein, a network node can also be a RAN node, a Core Network node, an OAM node, a Service Management & Orchestration (SMO) node or system, a Network Management System (NMS), a Non-Real Time RAN Intelligent Controller (Non-RT RIC), a Real-Time RAN Intelligent Controller (RT-RIC), a gNB, eNB, Enhanced-gNB (en-gNB), Next Generation-eNB (ng-eNB), gNB-CU, gNB-Centralized Unit-Control Plane (gNB-CU-CP), gNB-Centralized Unit-User Plane (gNB-CU-UP), eNB-Centralized Unit (eNB-CU), eNB-Centralized Unit-Control Plane (eNB-CU-CP), eNB-Centralized Unit-User Plane (eNB-CU-UP), Integrated Access and Backhaul (IAB) node, IAB-donor Distributed Unit (DU), IAB-donor Centralized Unit (CU), IAB-DU, IAB-Mobile Termination (IAB-MT), O-RAN Centralized Unit (O-RAN CU), O-RAN Centralized Unit-Control Plane (O-CU-CP), O-RAN Centralized Unit-User Plane (O-CU-UP), O-RAN Distributed Unit (O-DU), O-RAN Radio Unit (O-RU), O-RAN eNB (O-eNB), Network Data Analytics Function (NWDAF), Management Data Analytics Function (MDAF) and/or a UE.

As described above, FL is a technique that allows local entities such as, for example, a UE, a gNB, and other network nodes, to collectively use the advantages of shared AI/ML models trained by multiple local entities. The problem that arises, however, is how to support the FL approach in standardized, multi-organization (e.g., multi-vendor) systems, while at the same time allowing for the protection of Intellectual Property (IP) that is included in the details of AI/ML models. This IP includes, amongst others, training techniques and data preprocessing steps. As described in FIGURE 3, in FL, the master trainer may give the local trainers detailed instructions (i.e., policies) on how the local data pre-processing and local training shall be done. The master trainer must provide at least enough instructions so that the different local models can be combined into a master model. By giving those instructions, the master trainer reveals sensitive details about the AI/ML model, such as hyperparameters, and the data pre-processing steps, like feature engineering.

With the current technology, there is no method that allows for FL in multi-organization systems, where a first organization carries out master training and one or more other organizations participate in local training, without the need to openly exchange sensitive information, such as data pre-processing steps, training techniques, or AI/ML models, between the organizations. However, certain embodiments described herein allow FL without compromising IP protection. In other words, the advantages of FL are kept but the disadvantage of revealing IP is avoided. For example, methods and systems are provided for (federated or non-federated) learning between a master trainer and one or more local trainers, where the local pre-processing, the local training, and/or the resulting model(s) do not need to be shared in a detectable and/or decodable way. In this manner, IP that includes sensitive details about the AI/ML model, such as hyperparameters, and the data pre-processing steps is protected. Certain embodiments enable FL to be fully controlled by, and only visible to, one organization but to be done in multiple locations that can, in some embodiments, belong to different organizations. In other words, certain embodiments described herein enable concealed FL for standardized, multi-organization systems. As with ordinary (i.e., non-concealed) FL, local data from the different sites remains local data in the different sites. More specifically, in a particular embodiment, the local data remains on site and is not visible to the organization responsible for training. However, according to certain embodiments described herein, the organization responsible for training (i.e., the master trainer) can now protect its IP.

It is generally recognized, however, that it is not required that all packages, policies and models are transmitted in a concealed way. Certain packages, policies, models, and/or portions thereof may be concealed while other packages, policies, models, or portions thereof may not be concealed, in various embodiments. Alternatively, none of the packages, policies, models, or portions thereof may be concealed, in a particular embodiment.

For example, according to certain embodiments, a master trainer sends a software package to one or more locations. At each location, the software package performs local training, which results in a local model. The trained local models are sent back to the master trainer, where the master trainer generates a single master model. The master trainer may send policies on how to perform training together with the training software package or separately. Optionally, the master model is sent together with those policies (this master model becomes the new local model). Optionally, the master trainer sends a second software package to the locations to perform local data pre-processing, where the pre-processed data is input for the local training. The master trainer may send policies on how to perform the pre-processing together with the pre-processing software package or separately. As is described in more detail below, software packages, training and data pre-processing policies, and models and/or any pieces or portions thereof may be sent in a concealed way, in some embodiments.
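
One way to picture the round trip described above is the following hedged sketch of a master-trainer loop; the helpers conceal, reveal, aggregate, and the site objects are hypothetical placeholders, not functions or interfaces defined by this disclosure.

```python
def master_training_round(master_model, local_sites, training_package,
                          policies, conceal, reveal, aggregate):
    """Hypothetical sketch of one concealed-learning round.

    conceal/reveal stand in for whatever IP-protecting encoding the
    master trainer uses; aggregate stands in for e.g. model averaging.
    """
    # 1. Ship the concealed training package (and, optionally, policies
    #    and the current master model) to every local site.
    for site in local_sites:
        site.receive(conceal(training_package), conceal(policies),
                     conceal(master_model))

    # 2. Each site trains locally inside the package and returns a
    #    concealed local model that only the master trainer can read.
    local_models = [reveal(site.run_local_training()) for site in local_sites]

    # 3. Build a new master model and redistribute it.
    new_master = aggregate(local_models)
    for site in local_sites:
        site.receive(conceal(new_master))
    return new_master
```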

It may be noted that, even though certain embodiments are described as being in the context of FL, the described methods, systems, and techniques may also be applied to non-FL. One example is transfer learning, where a model is initially trained at a first location, and then sent to a second location for further training (including the data pre-processing). The policies would be sent from the first location to the second location using methods and techniques described herein.

According to certain embodiments, data pre-processing and training and execution functions are done using software packages (of any form), which are executed on a provided platform at a local location associated with a local trainer. In particular, and as is described in more detail below, the data pre-processing and training policies are sent as concealed messages from a master training environment associated with the master trainer to the local site of the local trainer, where they are fed into the software packages to configure the data pre-processing and training and execution functions. Where the concealed messages are encrypted, for example, these messages may only be decryptable and comprehensible in the given software and, thus, are not decryptable or otherwise decodable by the platform or, in general, the organization owning or running the platform or the local trainer owning the local data. In general, the models discussed herein may be AI and/or ML models.
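
To make the notion of policies that are only decryptable and comprehensible in the given software concrete, the sketch below uses the Python cryptography package and assumes a symmetric key baked into the concealed software package at build time, so the local platform can only relay opaque bytes; the name PACKAGE_EMBEDDED_KEY and the policy fields are illustrative assumptions.

```python
import json
from cryptography.fernet import Fernet

# Assumption: this key is baked into the concealed training & execution
# package by the master trainer and is never visible to the local platform.
PACKAGE_EMBEDDED_KEY = Fernet.generate_key()

def master_encode_policy(policy: dict) -> bytes:
    """Master trainer side: serialize and encrypt a training policy."""
    return Fernet(PACKAGE_EMBEDDED_KEY).encrypt(json.dumps(policy).encode())

def platform_relay(blob: bytes) -> bytes:
    """Local platform side: forwards the blob unchanged; it has no key,
    so the policy content stays concealed from the platform operator."""
    return blob

def package_apply_policy(blob: bytes) -> dict:
    """Inside the concealed package: decrypt and use the policy."""
    return json.loads(Fernet(PACKAGE_EMBEDDED_KEY).decrypt(blob))

policy = {"batch_size": 32, "learning_rate": 0.01}   # illustrative values
print(package_apply_policy(platform_relay(master_encode_policy(policy))))
```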

It may also be noted that the embodiments described herein can be applied to several domains, e.g., Radio Access Networks (RANs), where gNBs are the local trainers, or Core Networks (CNs), where NWDAFs are the local trainers.

FIGURE 4 illustrates a high-level schematic 100 of concealed FL, according to certain embodiments. Similar to FIGURE 3 described above, local models 102a and 102b are trained by local trainers 105a and 105b at respective local locations, which are represented as Location A and Location B. The local models 102a and 102b are sent to the master location, which is represented as Location C. There, in a master training environment 115, the master trainer 120 combines the local models 102a and 102b to generate a master model 125 based on the local models. Although the example scenario depicted in FIGURE 4 includes two local trainers at two local locations for generating two local models, there may be any number of local trainers and/or locations for generating any number of local models. The models mentioned herein may be AI and/or ML models, in particular embodiments.

According to certain embodiments, local trainers 105a and 105b include training & execution modules 130a and 130b, respectively. The training and execution modules 130a and 130b generate the local models 102a and 102b, respectively. The master trainer 120 controls the FL by providing policies for the training and execution modules 130a and 130b to use when training the local models 102a and 102b. This is indicated in FIGURE 4 by the dashed lines from the master trainer 120 to the training and execution modules 130a and 130b.

In particular embodiments, local trainers 105a and 105b also include data pre-processing modules 135a and 135b, respectively, which generate features that are then input into the respective training and execution modules 130a and 130b. The master trainer 120 may prescribe how the data pre-processing needs to be performed. This is indicated in FIGURE 4 by the dashed lines from the master trainer 120 to the data pre-processing modules 135a and 135b. According to certain embodiments, the data pre-processing and training policies are sent as encrypted messages from a server or other computing device associated with the master trainer 120 to the local trainers 105a and 105b.

In schematic 100, the data pre-processing modules 135a and 135b and the training and execution modules 130a and 130b are software functions and/or packages running on top of one or more execution platforms 140a and 140b at the local locations. At the local site, the platform operates to feed the data pre-processing and training policies into the software packages to configure the data pre-processing and training. Messages containing the software package and/or data and training policies are only decryptable and comprehensible in the given software module. Thus, neither the platform operated by the local trainers 105a and 105b nor the organization owning or running the platform is able to decode or otherwise determine the contents of the software packages and/or data and training policies provided from the master trainer 120.

These software packages may include standardized interfaces, which are described in further detail below. According to certain embodiments, however, the internals of the software packages and their functions are not visible to the local trainers 105a and 105b, the platforms operating at the local locations, or any party or node between the master trainer 120 and the local trainers 105a and 105b. Thus, modules 130a and 130b and 135a and 135b receive data and policies and perform the training of the local models 102a and 102b based on that input. However, the policies provided by the master trainer 120 and the details relating to how the training is performed by these modules are not revealed to anyone other than the master trainer 120. At each location associated with the local trainers 105a and 105b, the software package performs local training, which results in a local model 102a, 102b. As described above, the trained local models 102a and 102b are sent back to the master trainer 120, and the master trainer 120 generates a single master model 125 from the locally trained local models 102a and 102b. The master trainer 120 then sends the master model back to the local trainers 105a and 105b. In a particular embodiment, the master model 125 then becomes the new local model used by the local trainers 105a and 105b.

As described herein, the master trainer 120 sends software package(s), training and data pre-processing policies, and local trained models 102a and 102b to the local trainers 105a and 105b. Likewise, local trainers 105a and 105b send local trained models 102a and 102b to the master trainer 120. In a particular embodiment, the master trainer 120 may send training and data pre-processing policies along with or within the training and/or data pre-processing software package. In another particular embodiment, the master trainer 120 may send the policies for how to perform training and/or data pre-processing separately from the software package. According to various embodiments, each of the software packages, policies, and models is sent in a concealed way. Thus, in contrast to previous techniques and systems, the embodiments described herein enable FL to be performed using concealed software packages, policies, and AI/ML models so as to protect the IP of both the master trainer and the local trainers.

Though certain embodiments are described in the context of FL, the methods, techniques, and systems described herein also apply to non-FL. For example, the methods and techniques may be applied to transfer learning, which includes a model that is initially trained at a first location and then sent to a second location for further training. The training performed at the second location may also include data pre-processing. Similar to the embodiments described above with regard to FL, the policies for the data pre-processing used to perform the training would be sent from the first location to the second location and, thus, are sent in a concealed manner that prohibits the second location from decoding or otherwise detecting the contents of the policies.

Additionally, the embodiments described herein can be applied to several domains. For example, the embodiments may apply to Radio Access Networks (RANs), where gNBs are the local trainers. As another example, the embodiments may apply to Core Networks (CNs), where NWDAFs are the local trainers.

FIGURE 5 illustrates an example high-level schematic 200 of concealed FL as applied in a multi-vendor context, according to certain embodiments. Many of the components and features illustrated in FIGURE 5 are similar to those described above with regard to FIGURE 4 and have not been described in detail.

In contrast to FIGURE 4, however, FIGURE 5 illustrates a multi-vendor setting. For example, vendor X may be responsible for data pre-processing and model training at Location A, vendor Y may be responsible for data pre-processing and model training at Location B, and vendor Z may be responsible for master training at Location C.

More particularly, in the depicted scenario of FIGURE 5, all boxes shown as white boxes could be one company’s components, while the boxes shown with the pattern fill could be other vendors or the Communications Service Provider (CSP) itself. Stated differently, all of the white boxes may be the responsibility of a vendor or operator that is associated with the master trainer, and the patterned boxes may be the responsibility of one or more other vendors associated with the local trainers. As described in more detail below, the interfaces between the different boxes may be standardized, but the information carried over these interfaces is not revealed. For example, local models may still be transported to the master trainer via a standardized interface, but the internals of the local models are kept hidden.

Another option would be that the data pre-processing is not concealed (in that case, data pre-processing would be depicted as a patterned box in FIGURE 5), but only the training is concealed.

APIs Between Master Trainer and Local Trainers

The following APIs are defined on the interface(s) between master trainers and local trainers:

• Sending a training & execution or data pre-processing software package from master location to local location.

• Sending a model from master location to local training & execution (and sending a local trained model back to the master trainer).

• Sending training & execution or data pre-processing policies from master location to local location.

In one example described above, the master trainer provides a software package for training & execution to the local trainer, and another software package for data pre-processing. Alternatively, one package contains both training & execution and data pre-processing. The package may be, for example, an executable or an Open Container Initiative (OCI) package. According to certain embodiments, the internals of the package(s) are concealed by appropriate measures for IP protection. Only information on how to execute and interact with the software package(s) (e.g., how to use the APIs between local platform and local training & execution as well as the APIs between local platform and local data pre-processing) may be provided by the master trainer to the platform, if any.
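
As a hedged illustration of exposing only information on how to execute and interact with the software package(s), a package could ship a small manifest such as the sketch below; the registry path, endpoint names, and resource fields are assumptions, not defined by this disclosure.

```python
# Hypothetical manifest shipped alongside the concealed OCI/executable package.
# It exposes only how to run and talk to the package, never its internals.
package_manifest = {
    "image": "registry.example.invalid/master-trainer/training-execution:1.0",
    "entrypoint": ["/opt/te/run"],
    "api_endpoints": {
        "training_policies": "/v1/policies",
        "training_data": "/v1/data",
        "master_model": "/v1/master-model",
        "local_model": "/v1/local-model",
        "inference": "/v1/infer",
        "test": "/v1/test",
        "setup": "/v1/setup",
    },
    "resources": {"cpu": "2", "memory": "4Gi"},
}
```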

As described above, policies (e.g., training or data pre-processing policies) may be provided as part of the software package(s) or may be provided separately such as, for example, at a later time. These policies are encrypted from the platform’s point of view and can only be decrypted and decoded by the training & execution or data pre-processing function inside of the respective software package.

The master model may be provided inside the software package, combined with the policies, or provided separately. The latter two approaches would be advantageous if the FL consists of multiple rounds; in this case, a master model is received by a local trainer, made to be the new local model, further trained locally, and after that sent back to the master trainer. For this purpose, the local model can be received from the training & execution module, or, in other words, extracted from the software package in a concealed (i.e., encrypted) form, so that the internals of the local model are kept hidden. Only the master trainer can decrypt and comprehend the local models received from local trainers.
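
One hedged illustration of extracting a local model in a form that only the master trainer can decrypt is the hybrid-encryption sketch below, in which a per-model symmetric key is wrapped with the master trainer's RSA public key; the disclosure does not prescribe this particular scheme, and the model serialization is assumed.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Master trainer key pair; only the public key travels inside the package.
master_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
master_public = master_private.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def package_export_local_model(model_bytes: bytes):
    """Inside the concealed package: encrypt the trained local model so
    that neither the platform nor any third party can read it."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(model_bytes)
    wrapped_key = master_public.encrypt(data_key, OAEP)
    return wrapped_key, ciphertext

def master_import_local_model(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    """Master trainer side: the only party holding the private key."""
    data_key = master_private.decrypt(wrapped_key, OAEP)
    return Fernet(data_key).decrypt(ciphertext)

wrapped, blob = package_export_local_model(b"opaque serialized local model")
print(master_import_local_model(wrapped, blob))
```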

Either via the training & execution software package and policies and/or the data preprocessing software package and policies, the master trainer can ensure that certain (e.g., corrupted) data is not used to train a local model and, thus, does not adversely affect the performance of the local model. In one example, such data is identified by means of plausibility checks with domain-knowledge-based rules. In another example, such data is identified using anomaly detection techniques. The identified data can be discarded, or alternatively flagged, as part of the data pre-processing or prior to training. In case of FL, this further ensures that such data does not adversely affect the performance of the master model.
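
A minimal sketch of the kind of domain-knowledge-based plausibility check mentioned above is shown below; the feature names, the plausible value ranges, and the choice to discard rather than flag samples are illustrative assumptions.

```python
# Illustrative plausibility rules for radio measurements (assumed ranges).
PLAUSIBLE_RANGES = {
    "rsrp_dbm": (-156.0, -31.0),   # assumed plausible RSRP range
    "sinr_db": (-23.0, 40.0),      # assumed plausible SINR range
}

def filter_training_samples(samples):
    """Discard (or, alternatively, flag) samples that violate the rules,
    so corrupted data never reaches local training."""
    kept = []
    for sample in samples:
        ok = all(name in sample and lo <= sample[name] <= hi
                 for name, (lo, hi) in PLAUSIBLE_RANGES.items())
        if ok:
            kept.append(sample)
    return kept

raw = [{"rsrp_dbm": -95.0, "sinr_db": 12.0},
       {"rsrp_dbm": 999.0, "sinr_db": 12.0}]   # second sample is corrupted
print(filter_training_samples(raw))            # keeps only the first sample
```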

Further details of these APIs are described below.

APIs Between Local Platform and Local Training & Execution

FIGURE 6 illustrates example APIs 300 between a training & execution module 305 and the platform 310 on which the module is run, according to certain embodiments.

For example, APIs 300 may include a training policies API for sending policies used for training, for example, from the platform 310 to the training & execution module 305. In particular embodiments, the policies communicated via the API 300 may include one or more of:

• batch size for stochastic optimizers;

• learning rate (constant or adaptive in any way);

• cost/loss function;

• optimization algorithm for minimizing the cost/loss function, e.g., (stochastic) gradient descent, Adam, or LBFGS (Limited-memory Broyden-Fletcher-Goldfarb-Shanno);

• regularization type/method and parameters (e.g., amount/strength), e.g., dropout regularization with 50% dropout rate (or dropout probability); and

• maximum depth of a tree (in case of tree-based AI/ML models such as Random Forest or gradient-boosted decision trees as used for example by XGBoost).

Additionally or alternatively, in particular embodiments, the policies communicated via the API 300 may include one or more of:

• number of branches or leaf nodes in a decision tree;

• number of trees in a forest (in case of tree-based AI/ML models such as Random Forest);

• number of iterations or epochs;

• number of (hidden) layers in a neural network;

• number of (hidden) units for each (hidden) layer in a neural network;

• activation function in a neural network, e.g., Sigmoid, ReLU, or Tanh;

• early stopping criteria and related parameters such as fraction of training data to set aside as validation set for early stopping criteria evaluation;

• number of clusters in a clustering algorithm; and/or

• k in kNN or k-Nearest Neighbors algorithm.
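
For illustration, a training-policies payload touching on items from the two lists above might look like the following sketch; the field names and values are assumptions, as the disclosure does not fix a concrete schema.

# An illustrative training-policies payload (field names are assumptions).
training_policies = {
    "batch_size": 64,
    "learning_rate": {"schedule": "constant", "value": 1e-3},
    "loss": "cross_entropy",
    "optimizer": "adam",                      # could also be sgd or lbfgs
    "regularization": {"type": "dropout", "rate": 0.5},
    "epochs": 20,
    "early_stopping": {"patience": 3, "validation_fraction": 0.1},
    "model": {
        "type": "neural_network",
        "hidden_layers": [128, 64],
        "activation": "relu",
    },
}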

As another example, APIs 300 may include a training data API for inputting one or more new input-output vector pairs to the training & execution software package for further training of the (local) AI/ML model.

As still another example, APIs 300 may include a master model API for sending a new master model to the training & execution module so that the new master model can replace the current local trained model.

As still another example, APIs 300 may include a local model API for reading the current local trained model from the training & execution module so that the local trained model can be sent to the master trainer.

As another example, APIs 300 may include an inference API for querying the AI/ML model, e.g., to provide a new input in order to receive the corresponding output (e.g., prediction).

As still another example, APIs 300 may include a test API for testing the current local trained model, i.e., assessing the performance of the AI/ML model with a test dataset comprised in the training & execution software package.

As still another example, APIs 300 may include a setup API for enabling installation of a new training & execution software package, or for resetting the current local trained model to the latest received master model.
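
Taken together, the training & execution API surface described above could be sketched as follows; the class and method names are illustrative assumptions only.

# A minimal sketch of the local training & execution API surface (names assumed).
class TrainingExecutionModule:
    def apply_training_policies(self, policy_blob: bytes) -> None: ...     # training policies API
    def add_training_data(self, inputs, targets) -> None: ...              # training data API
    def set_master_model(self, concealed_master_model: bytes) -> None: ... # master model API
    def get_local_model(self) -> bytes: ...   # local model API; returned in concealed form only
    def infer(self, inputs): ...              # inference API
    def test(self) -> dict: ...               # test API; evaluates against the bundled test set
    def setup(self, reset_to_master: bool = False) -> None: ...            # setup API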

APIs Between Local platform and Local Data Pre-Processing

FIGURE 7 illustrates example APIs 400 between a data pre-processing module 405 and the platform 410 on which the module is run, according to certain embodiments.

For example, APIs 400 may include a pre-processing policies API for sending policies used for pre-processing from the platform 410 to the data pre-processing module 405. In particular embodiments, the policies communicated via the API 400 may include one or more of:

■ imputation of missing values,

■ handling of noise and outliers, e.g., binning and/or capping,

■ removal of unwanted data, e.g., duplicated, corrupted, inconsistent, contradictory, and/or irrelevant data,

■ feature scaling,

■ feature selection,

■ feature encoding, e.g., label/ordinal encoding and/or one-hot encoding,

■ feature creation such as generation of polynomial features,

■ feature transformation such as SVD-based transformation (e.g., PCA), and

■ discretization/quantization of continuous values into discrete features.
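
For illustration, a pre-processing-policies payload covering items from the list above might look like the following sketch; the field names and values are assumptions rather than a defined schema.

# An illustrative pre-processing-policies payload (field names are assumptions).
preprocessing_policies = {
    "imputation": {"strategy": "median"},
    "outliers": {"method": "capping", "quantiles": [0.01, 0.99]},
    "remove": ["duplicates", "corrupted", "inconsistent"],
    "feature_scaling": "standardize",
    "feature_selection": {"method": "variance_threshold", "threshold": 0.0},
    "feature_encoding": {"categorical": "one_hot"},
    "feature_creation": {"polynomial_degree": 2},
    "feature_transformation": {"method": "pca", "components": 16},
    "discretization": {"bins": 10},
}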

As another example, APIs 400 may include a raw data API for inputting data to the data pre-processing module 405.

As another example, APIs 400 may include a training data API for outputting data from the data pre-processing module 405 to the platform 410.

As still another example, APIs 400 may include a setup API to enable installation of a new data pre-processing software package in the data pre-processing module 405.
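
Putting these APIs together, the local platform could wire the two concealed modules as sketched below, with the training data API acting as the handover point between them; the object and method names are illustrative assumptions.

# A sketch of the platform-side flow: raw data in, concealed local model out.
def local_round(raw_records, pre_processing, training_execution):
    pre_processing.put_raw_data(raw_records)              # raw data API
    training_samples = pre_processing.get_training_data() # training data API (output)
    for inputs, targets in training_samples:
        training_execution.add_training_data(inputs, targets)  # training data API (input)
    # local model API: the result leaves the package only in concealed form
    return training_execution.get_local_model()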

FIGURE 8 shows an example of a communication system 500 in accordance with some embodiments. In the example, the communication system 500 includes a telecommunication network 502 that includes an access network 504, such as a radio access network (RAN), and a core network 506, which includes one or more core network nodes 508. The access network 504 includes one or more access network nodes, such as network nodes 510a and 510b (one or more of which may be generally referred to as network nodes 510), or any other similar 3rd Generation Partnership Project (3GPP) access node or non-3GPP access point. The network nodes 510 facilitate direct or indirect connection of user equipment (UE), such as by connecting UEs 512a, 512b, 512c, and 512d (one or more of which may be generally referred to as UEs 512) to the core network 506 over one or more wireless connections.

Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors. Moreover, in different embodiments, the communication system 500 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections. The communication system 500 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.

The UEs 512 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 510 and other communication devices. Similarly, the network nodes 510 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 512 and/or with other network nodes or equipment in the telecommunication network 502 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 502.

In the depicted example, the core network 506 connects the network nodes 510 to one or more hosts, such as host 516. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts. The core network 506 includes one or more core network nodes (e.g., core network node 508) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 508. Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF). The host 516 may be under the ownership or control of a service provider other than an operator or provider of the access network 504 and/or the telecommunication network 502, and may be operated by the service provider or on behalf of the service provider. The host 516 may host a variety of applications to provide one or more services. Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server.

As a whole, the communication system 500 of FIGURE 8 enables connectivity between the UEs, network nodes, and hosts. In that sense, the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox.

In some examples, the telecommunication network 502 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunication network 502 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 502. For example, the telecommunication network 502 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.

In some examples, the UEs 512 are configured to transmit and/or receive information without direct human interaction. For instance, a UE may be designed to transmit information to the access network 504 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 504. Additionally, a UE may be configured for operating in single- or multi-RAT or multi-standard mode. For example, a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e., being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).

In the example, the hub 514 communicates with the access network 504 to facilitate indirect communication between one or more UEs (e.g., UE 512c and/or 512d) and network nodes (e.g., network node 510b). In some examples, the hub 514 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs. For example, the hub 514 may be a broadband router enabling access to the core network 506 for the UEs. As another example, the hub 514 may be a controller that sends commands or instructions to one or more actuators in the UEs. Commands or instructions may be received from the UEs, network nodes 510, or by executable code, script, process, or other instructions in the hub 514. As another example, the hub 514 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data. As another example, the hub 514 may be a content source. For example, for a UE that is a VR headset, display, loudspeaker or other media delivery device, the hub 514 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 514 then provides to the UE either directly, after performing local processing, and/or after adding additional local content. In still another example, the hub 514 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low-energy IoT devices.

The hub 514 may have a constant/persistent or intermittent connection to the network node 510b. The hub 514 may also allow for a different communication scheme and/or schedule between the hub 514 and UEs (e.g., UE 512c and/or 512d), and between the hub 514 and the core network 506. In other examples, the hub 514 is connected to the core network 506 and/or one or more UEs via a wired connection. Moreover, the hub 514 may be configured to connect to an M2M service provider over the access network 504 and/or to another UE over a direct connection. In some scenarios, UEs may establish a wireless connection with the network nodes 510 while still connected via the hub 514 via a wired or wireless connection. In some embodiments, the hub 514 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 510b. In other embodiments, the hub 514 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and network node 510b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.

FIGURE 9 shows a UE 600 in accordance with some embodiments. As used herein, a UE refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other UEs. Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless cameras, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart device, wireless customer-premise equipment (CPE), vehicle-mounted or vehicle embedded/integrated wireless device, etc. Other examples include any UE identified by the 3rd Generation Partnership Project (3GPP), including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.

A UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X). In other examples, a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller). Alternatively, a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter).

The UE 600 includes processing circuitry 602 that is operatively coupled via a bus 604 to an input/output interface 606, a power source 608, a memory 610, a communication interface 612, and/or any other component, or any combination thereof. Certain UEs may utilize all or a subset of the components shown in FIGURE 9. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.

The processing circuitry 602 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 610. The processing circuitry 602 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above. For example, the processing circuitry 602 may include multiple central processing units (CPUs).

In the example, the input/output interface 606 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices. Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof. An input device may allow a user to capture information into the UE 600. Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like. The presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user. A sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof. An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.

In some embodiments, the power source 608 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used. The power source 608 may further include power circuitry for delivering power from the power source 608 itself, and/or an external power source, to the various parts of the UE 600 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of the power source 608. Power circuitry may perform any formatting, converting, or other modification to the power from the power source 608 to make the power suitable for the respective components of the UE 600 to which power is supplied.

The memory 610 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth. In one example, the memory 610 includes one or more application programs 614, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 616. The memory 610 may store, for use by the UE 600, any of a variety of various operating systems or combinations of operating systems.

The memory 610 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof. The UICC may for example be an embedded UICC (eUICC), integrated UICC (iUICC) or a removable UICC commonly known as ‘SIM card.’ The memory 610 may allow the UE 600 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data. An article of manufacture, such as one utilizing a communication system may be tangibly embodied as or in the memory 610, which may be or comprise a device-readable storage medium.

The processing circuitry 602 may be configured to communicate with an access network or other network using the communication interface 612. The communication interface 612 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 622. The communication interface 612 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network). Each transceiver may include a transmitter 618 and/or a receiver 620 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth). Moreover, the transmitter 618 and receiver 620 may be coupled to one or more antennas (e.g., antenna 622) and may share circuit components, software or firmware, or alternatively be implemented separately.

In the illustrated embodiment, communication functions of the communication interface 612 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof. Communications may be implemented according to one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiplexing Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.

Regardless of the type of sensor, a UE may provide an output of data captured by its sensors, through its communication interface 612, via a wireless connection to a network node. Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE. The output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., when moisture is detected an alert is sent), in response to a request (e.g., a user initiated request), or a continuous stream (e.g., a live video feed of a patient).

As another example, a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection. In response to the received wireless input the states of the actuator, the motor, or the switch may change. For example, the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input, or a robotic arm that performs a medical procedure according to the received input.

A UE, when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare. Non-limiting examples of such an IoT device are a device which is or which is embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a head-mounted display for Augmented Reality (AR) or Virtual Reality (VR), a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal- or item-tracking device, a sensor for monitoring a plant or animal, an industrial robot, an Unmanned Aerial Vehicle (UAV), and any kind of medical device, like a heart rate monitor or a remote controlled surgical robot. A UE in the form of an IoT device comprises circuitry and/or software in dependence of the intended application of the IoT device in addition to other components as described in relation to the UE 600 shown in FIGURE 9.

As yet another specific example, in an IoT scenario, a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node. The UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device. As one particular example, the UE may implement the 3GPP NB-IoT standard. In other scenarios, a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.

In practice, any number of UEs may be used together with respect to a single use case. For example, a first UE might be or be integrated in a drone and provide the drone’s speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone. When the user makes changes from the remote controller, the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone’s speed. The first and/or the second UE can also include more than one of the functionalities described above. For example, a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuators.

FIGURE 10 shows a network node 700 in accordance with some embodiments. As used herein, network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network. Examples of network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).

Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).

Other examples of network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).

The network node 700 includes a processing circuitry 702, a memory 704, a communication interface 706, and a power source 708. The network node 700 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which the network node 700 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair, may in some instances be considered a single separate network node. In some embodiments, the network node 700 may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g., separate memory 704 for different RATs) and some components may be reused (e.g., a same antenna 710 may be shared by different RATs). The network node 700 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 700, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 700.

The processing circuitry 702 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 700 components, such as the memory 704, network node 700 functionality.

In some embodiments, the processing circuitry 702 includes a system on a chip (SOC). In some embodiments, the processing circuitry 702 includes one or more of radio frequency (RF) transceiver circuitry 712 and baseband processing circuitry 714. In some embodiments, the radio frequency (RF) transceiver circuitry 712 and the baseband processing circuitry 714 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 712 and baseband processing circuitry 714 may be on the same chip or set of chips, boards, or units.

The memory 704 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 702. The memory 704 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 702 and utilized by the network node 700. The memory 704 may be used to store any calculations made by the processing circuitry 702 and/or any data received via the communication interface 706. In some embodiments, the processing circuitry 702 and memory 704 are integrated.

The communication interface 706 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, the communication interface 706 comprises port(s)/terminal(s) 716 to send and receive data, for example to and from a network over a wired connection. The communication interface 706 also includes radio front-end circuitry 718 that may be coupled to, or in certain embodiments a part of, the antenna 710. Radio front-end circuitry 718 comprises filters 720 and amplifiers 722. The radio front-end circuitry 718 may be connected to an antenna 710 and processing circuitry 702. The radio front-end circuitry may be configured to condition signals communicated between antenna 710 and processing circuitry 702. The radio front-end circuitry 718 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection. The radio front-end circuitry 718 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 720 and/or amplifiers 722. The radio signal may then be transmitted via the antenna 710. Similarly, when receiving data, the antenna 710 may collect radio signals which are then converted into digital data by the radio front-end circuitry 718. The digital data may be passed to the processing circuitry 702. In other embodiments, the communication interface may comprise different components and/or different combinations of components.

In certain alternative embodiments, the network node 700 does not include separate radio front-end circuitry 718; instead, the processing circuitry 702 includes radio front-end circuitry and is connected to the antenna 710. Similarly, in some embodiments, all or some of the RF transceiver circuitry 712 is part of the communication interface 706. In still other embodiments, the communication interface 706 includes one or more ports or terminals 716, the radio front-end circuitry 718, and the RF transceiver circuitry 712, as part of a radio unit (not shown), and the communication interface 706 communicates with the baseband processing circuitry 714, which is part of a digital unit (not shown).

The antenna 710 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. The antenna 710 may be coupled to the radio front-end circuitry 718 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In certain embodiments, the antenna 710 is separate from the network node 700 and connectable to the network node 700 through an interface or port.

The antenna 710, communication interface 706, and/or the processing circuitry 702 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, the antenna 710, the communication interface 706, and/or the processing circuitry 702 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.

The power source 708 provides power to the various components of network node 700 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). The power source 708 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 700 with power for performing the functionality described herein. For example, the network node 700 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 708. As a further example, the power source 708 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.

Embodiments of the network node 700 may include additional components beyond those shown in FIGURE 10 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, the network node 700 may include user interface equipment to allow input of information into the network node 700 and to allow output of information from the network node 700. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 700.

FIGURE 11 is a block diagram of a host 800, which may be an embodiment of the host 516 of FIGURE 8, in accordance with various aspects described herein. As used herein, the host 800 may be or comprise various combinations of hardware and/or software, including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, container, or processing resources in a server farm. The host 800 may provide one or more services to one or more UEs.

The host 800 includes processing circuitry 802 that is operatively coupled via a bus 804 to an input/output interface 806, a network interface 808, a power source 810, and a memory 812. Other components may be included in other embodiments. Features of these components may be substantially similar to those described with respect to the devices of previous figures, such as FIGURES 6 and 7, such that the descriptions thereof are generally applicable to the corresponding components of host 800.

The memory 812 may include one or more computer programs including one or more host application programs 814 and data 816, which may include user data, e.g., data generated by a UE for the host 800 or data generated by the host 800 for a UE. Embodiments of the host 800 may utilize only a subset or all of the components shown. The host application programs 814 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), MPEG, VP9) and audio codecs (e.g., FLAC, Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for multiple different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, heads-up display systems). The host application programs 814 may also provide for user authentication and licensing checks and may periodically report health, routes, and content availability to a central node, such as a device in or on the edge of a core network. Accordingly, the host 800 may select and/or indicate a different host for over-the-top services for a UE. The host application programs 814 may support various protocols, such as the HTTP Live Streaming (HLS) protocol, Real-Time Messaging Protocol (RTMP), Real-Time Streaming Protocol (RTSP), Dynamic Adaptive Streaming over HTTP (MPEG-DASH), etc.

FIGURE 12 is a block diagram illustrating a virtualization environment 900 in which functions implemented by some embodiments may be virtualized.

In the present context, virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources. As used herein, virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components. Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 900 hosted by one or more hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host. Further, in embodiments in which the virtual node does not require radio connectivity (e.g., a core network node or host), the node may be entirely virtualized.

Applications 902 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 900 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.

Hardware 904 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth. Software may be executed by the processing circuitry to instantiate one or more virtualization layers 906 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 908a and 908b (one or more of which may be generally referred to as VMs 908), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein. The virtualization layer 906 may present a virtual operating platform that appears like networking hardware to the VMs 908. The VMs 908 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 906. Different embodiments of the instance of a virtual appliance 902 may be implemented on one or more of VMs 908, and the implementations may be made in different ways. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.

In the context of NFV, a VM 908 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of the VMs 908, and that part of hardware 904 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms separate virtual network elements. Still in the context of NFV, a virtual network function is responsible for handling specific network functions that run in one or more VMs 908 on top of the hardware 904 and corresponds to the application 902.

Hardware 904 may be implemented in a standalone network node with generic or specific components. Hardware 904 may implement some functions via virtualization. Alternatively, hardware 904 may be part of a larger cluster of hardware (e.g. such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 910, which, among others, oversees lifecycle management of applications 902. In some embodiments, hardware 904 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signaling can be provided with the use of a control system 912 which may alternatively be used for communication between hardware nodes and radio units.

FIGURE 13 shows a communication diagram of a host 1002 communicating via a network node 1004 with a UE 1006 over a partially wireless connection in accordance with some embodiments. Example implementations, in accordance with various embodiments, of the UE (such as a UE 512a of FIGURE 8 and/or UE 600 of FIGURE 9), network node (such as network node 510a of FIGURE 8 and/or network node 700 of FIGURE 10), and host (such as host 516 of FIGURE 8 and/or host 800 of FIGURE 11) discussed in the preceding paragraphs will now be described with reference to FIGURE 13.

Like host 800, embodiments of host 1002 include hardware, such as a communication interface, processing circuitry, and memory. The host 1002 also includes software, which is stored in or accessible by the host 1002 and executable by the processing circuitry. The software includes a host application that may be operable to provide a service to a remote user, such as the UE 1006 connecting via an over-the-top (OTT) connection 1050 extending between the UE 1006 and host 1002. In providing the service to the remote user, a host application may provide user data which is transmitted using the OTT connection 1050.

The network node 1004 includes hardware enabling it to communicate with the host 1002 and UE 1006. The connection 1060 may be direct or pass through a core network (like core network 506 of FIGURE 8) and/or one or more other intermediate networks, such as one or more public, private, or hosted networks. For example, an intermediate network may be a backbone network or the Internet.

The UE 1006 includes hardware and software, which is stored in or accessible by UE 1006 and executable by the UE’s processing circuitry. The software includes a client application, such as a web browser or operator-specific “app” that may be operable to provide a service to a human or non-human user via UE 1006 with the support of the host 1002. In the host 1002, an executing host application may communicate with the executing client application via the OTT connection 1050 terminating at the UE 1006 and host 1002. In providing the service to the user, the UE's client application may receive request data from the host's host application and provide user data in response to the request data. The OTT connection 1050 may transfer both the request data and the user data. The UE's client application may interact with the user to generate the user data that it provides to the host application through the OTT connection 1050.

The OTT connection 1050 may extend via a connection 1060 between the host 1002 and the network node 1004 and via a wireless connection 1070 between the network node 1004 and the UE 1006 to provide the connection between the host 1002 and the UE 1006. The connection 1060 and wireless connection 1070, over which the OTT connection 1050 may be provided, have been drawn abstractly to illustrate the communication between the host 1002 and the UE 1006 via the network node 1004, without explicit reference to any intermediary devices and the precise routing of messages via these devices. As an example of transmitting data via the OTT connection 1050, in step 1008, the host 1002 provides user data, which may be performed by executing a host application. In some embodiments, the user data is associated with a particular human user interacting with the UE 1006. In other embodiments, the user data is associated with a UE 1006 that shares data with the host 1002 without explicit human interaction. In step 1010, the host 1002 initiates a transmission carrying the user data towards the UE 1006. The host 1002 may initiate the transmission responsive to a request transmitted by the UE 1006. The request may be caused by human interaction with the UE 1006 or by operation of the client application executing on the UE 1006. The transmission may pass via the network node 1004, in accordance with the teachings of the embodiments described throughout this disclosure. Accordingly, in step 1012, the network node 1004 transmits to the UE 1006 the user data that was carried in the transmission that the host 1002 initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step 1014, the UE 1006 receives the user data carried in the transmission, which may be performed by a client application executed on the UE 1006 associated with the host application executed by the host 1002.

In some examples, the UE 1006 executes a client application which provides user data to the host 1002. The user data may be provided in reaction or response to the data received from the host 1002. Accordingly, in step 1016, the UE 1006 may provide user data, which may be performed by executing the client application. In providing the user data, the client application may further consider user input received from the user via an input/output interface of the UE 1006. Regardless of the specific manner in which the user data was provided, the UE 1006 initiates, in step 1018, transmission of the user data towards the host 1002 via the network node 1004. In step 1020, in accordance with the teachings of the embodiments described throughout this disclosure, the network node 1004 receives user data from the UE 1006 and initiates transmission of the received user data towards the host 1002. In step 1022, the host 1002 receives the user data carried in the transmission initiated by the UE 1006.

One or more of the various embodiments improve the performance of OTT services provided to the UE 1006 using the OTT connection 1050, in which the wireless connection 1070 forms the last segment. More precisely, the teachings of these embodiments may improve one or more of, for example, data rate, latency, and/or power consumption and, thereby, provide benefits such as, for example, reduced user waiting time, relaxed restriction on file size, improved content resolution, better responsiveness, and/or extended battery lifetime. In an example scenario, factory status information may be collected and analyzed by the host 1002. As another example, the host 1002 may process audio and video data which may have been retrieved from a UE for use in creating maps. As another example, the host 1002 may collect and analyze real-time data to assist in controlling vehicle congestion (e.g., controlling traffic lights). As another example, the host 1002 may store surveillance video uploaded by a UE. As another example, the host 1002 may store or control access to media content such as video, audio, VR or AR which it can broadcast, multicast or unicast to UEs. As other examples, the host 1002 may be used for energy pricing, remote control of non-time critical electrical load to balance power generation needs, location services, presentation services (such as compiling diagrams etc. from data collected from remote devices), or any other function of collecting, retrieving, storing, analyzing and/or transmitting data.

In some examples, a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve. There may further be an optional network functionality for reconfiguring the OTT connection 1050 between the host 1002 and UE 1006, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring the OTT connection may be implemented in software and hardware of the host 1002 and/or UE 1006.

In some embodiments, sensors (not shown) may be deployed in or in association with other devices through which the OTT connection 1050 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software may compute or estimate the monitored quantities. The reconfiguring of the OTT connection 1050 may include message format, retransmission settings, preferred routing etc.; the reconfiguring need not directly alter the operation of the network node 1004. Such procedures and functionalities may be known and practiced in the art. In certain embodiments, measurements may involve proprietary UE signaling that facilitates measurements of throughput, propagation times, latency and the like, by the host 1002. The measurements may be implemented in that software causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection 1050 while monitoring propagation times, errors, etc.

FIGURE 14 illustrates a method 1100 by a first radio node for concealed FL, according to certain embodiments. As illustrated, the method begins at step 1102, when the first radio node transmits, to one or more other radio nodes that are located remotely from the first radio node, a local training package for performing local training and/or generating a local model. At step 1104, the first radio node receives one or more local models from the one or more other radio nodes. Based on the one or more local models received from the one or more other radio nodes, the first radio node generates a master model, at step 1106. At step 1108, the first radio node transmits the master model to the one or more other radio nodes.
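
A minimal sketch of these steps from the master trainer's side is given below, assuming the decrypted local models can be treated as weight vectors and combined by sample-weighted averaging (FedAvg-style); the node interface, the decrypt/encrypt helpers, and the num_samples attribute are illustrative assumptions rather than the claimed interfaces.

import numpy as np

def run_master_round(local_trainers, training_package, decrypt, encrypt):
    # Step 1102: transmit the concealed local training package to each remote node.
    for node in local_trainers:
        node.send_package(training_package)
    # Step 1104: receive the concealed local models; only the master can decrypt them.
    local_models, weights = [], []
    for node in local_trainers:
        local_models.append(np.asarray(decrypt(node.receive_local_model())))
        weights.append(node.num_samples)
    # Step 1106: generate the master model, here by sample-weighted averaging.
    master_model = np.average(np.stack(local_models), axis=0,
                              weights=np.asarray(weights, dtype=float))
    # Step 1108: transmit the (re-concealed) master model back to every node.
    for node in local_trainers:
        node.send_master_model(encrypt(master_model))
    return master_model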

In a further particular embodiment, the first radio node is operating as a training node and the one or more other radio nodes are operating as learning nodes. In a particular embodiment, the local training package comprises a software package that can be executed on a platform at/by the other radio nodes.

In a particular embodiment, the first radio node transmits, to the one or more other radio nodes, at least one policy for performing the local training.

In a further particular embodiment, the at least one policy for performing the local training is transmitted to the one or more radio nodes with the local training package.

In a further particular embodiment, the at least one policy for performing the local training is transmitted to the one or more radio nodes separately from the local training package.

In a further particular embodiment, the at least one policy for performing the local training is concealed from the one or more radio nodes.

In a further particular embodiment, the operations associated with the local training package for performing the local training and/or generating the local model are concealed from the one or more radio nodes.

In a particular embodiment, the one or more local models are received in concealed form.

In a particular embodiment, the master model is transmitted in concealed form.

In a particular embodiment, the first radio node transmits, to the one or more other radio nodes, a pre-processing package for performing data pre-processing at the respective radio node. The data pre-processing is performed prior to generating the local model, and the output of performing the data pre-processing is the input for the local training package.

In a particular embodiment, the pre-processing package comprises a software package that can be executed on a platform at/by the other radio nodes.

In a further particular embodiment, the first radio node transmits, to the one or more other radio nodes, at least one policy for performing the data pre-processing.

In a further particular embodiment, the at least one policy for performing the data pre-processing is transmitted to the one or more radio nodes with the pre-processing package.

In a further particular embodiment, the at least one policy for performing the data pre-processing is transmitted to the one or more radio nodes separately from the pre-processing package.

In a further particular embodiment, the at least one policy for performing the data pre-processing is concealed from the one or more radio nodes.

In a further particular embodiment, the operations associated with the pre-processing package for performing the data pre-processing are concealed from the one or more radio nodes.

In a particular embodiment, the one or more radio nodes include a plurality of radio nodes. In this scenario, receiving the one or more local models includes receiving a plurality of local models, wherein each local model is associated with a respective one of the plurality of radio nodes. The master model is generated based on the plurality of local models.

In a particular embodiment, the first radio node is a user equipment (UE).

In a further particular embodiment, the first radio node provides user data and forwards the user data to a host via the transmission to a network node.

In a particular embodiment, the first radio node is a network node.

In a further particular embodiment, the first radio node obtains user data and forwards the user data to a host or a user equipment.

FIGURE 15 illustrates a method 1200 by a second radio node for concealed FL, according to certain embodiments. As illustrated, the method begins at step 1202 when the second radio node receives, from a first radio node, a local training package for performing local training to generate a local model. Based on the local training package, the second radio node performs the local training to generate the local model, at step 1204. At step 1206, the second radio node transmits, to the first radio node, the local model. At step 1208, the second radio node receives, from the first radio node, a master model that is generated based on the local model and at least one other local model associated with at least one other radio node. At step 1210, the second radio node stores the master model.
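
For illustration only, the following sketch mirrors these steps on the second radio node. The helpers receive_from_master, send_to_master and store, as well as treating the delivered package as a callable, are assumptions for the example.

```python
# Minimal sketch of the local-trainer side of method 1200 (FIGURE 15). The
# transport and storage helpers are hypothetical placeholders; the delivered
# training package is modelled as an opaque callable.
def local_trainer_round(receive_from_master, send_to_master, store, local_data):
    training_package = receive_from_master()      # step 1202: receive the package
    local_model = training_package(local_data)    # step 1204: perform local training
    send_to_master(local_model)                   # step 1206: transmit the local model
    master_model = receive_from_master()          # step 1208: receive the master model
    store(master_model)                           # step 1210: store the master model
    return master_model
```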

In a particular embodiment, the second radio node is operating as a learning node and the first radio node is operating as a training node.

In a particular embodiment, the local training package comprises a software package that can be executed on a platform at/by the second radio node.

In a particular embodiment, the second radio node receives, from the first radio node, at least one policy for performing the local training. In a further particular embodiment, the at least one policy for performing the local training is received with the local training package.

In a further particular embodiment, the at least one policy for performing the local training is received separately from the local training package.

In a further particular embodiment, the at least one policy for performing the local training is concealed from the second radio node.

In a particular embodiment, operations associated with the local training package for performing the local training and/or generating the local model are concealed.

In a particular embodiment, the one or more local models are transmitted in concealed form.

In a particular embodiment, the master model is received in concealed form.

In a particular embodiment, the second radio node receives, from the first radio node, a pre-processing package for performing data pre-processing at the second radio node. The data pre-processing is performed prior to generating the local model, and the output of performing the data pre-processing is the input for the local training package.

In a particular embodiment, the pre-processing package comprises a software package that can be executed on a platform at/by the second radio node.

In a particular embodiment, the second radio node receives, from the first radio node, at least one policy for performing the data pre-processing.

In a further particular embodiment, the at least one policy for performing the data pre-processing is received with the pre-processing package.

In a further particular embodiment, the at least one policy for performing the data pre-processing is received separately from the pre-processing package.

In a further particular embodiment, the at least one policy for performing the data pre-processing is concealed from the second radio node.

In a further particular embodiment, operations associated with the pre-processing package for performing the data pre-processing are concealed from the second radio node.

In a particular embodiment, the second radio node is a UE.

In a further particular embodiment, the UE provides user data and forwards the user data to a host via the transmission to a network node.

In a particular embodiment, the second radio node is a network node.

In a particular embodiment, the network node obtains user data and forwards the user data to a host or a UE.

FIGURE 16 illustrates an example method 1300 by a first radio node operating as a master trainer for concealed learning, according to certain embodiments. The method begins at step 1302 when the first radio node transmits, to a second radio node operating as a local trainer, one or more software packages for performing local training of and/or generating a local model, the one or more software packages being transmitted in a concealed format that prevents the second radio node from decrypting the one or more software packages. The first radio node receives the local model from the second radio node in a concealed format that only the first radio node operating as the master trainer is able to decrypt. Based on at least the local model received from the second radio node, the first radio node generates a master model and transmits the master model to the second radio node.

In a particular embodiment, the concealed format is an encrypted format.
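
For illustration, one possible realization of such an encrypted, concealed format is sketched below with symmetric encryption from the Python cryptography package. The example assumes that the key is shared only between the master trainer and a trusted runtime on the local node, so neither the second radio node's operator nor any third party can decrypt the package or the returned model; how that key is provisioned is outside the scope of the sketch.

```python
# Sketch of the concealed (encrypted) format of method 1300 using Fernet
# symmetric encryption. The key is assumed to be held by the master trainer
# (and a trusted runtime on the local node), never by the local node itself.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()
cipher = Fernet(master_key)

# Step 1302: the software package is transmitted in concealed form.
software_package = b"<OCI artifact / container image bytes>"
concealed_package = cipher.encrypt(software_package)

# The local model comes back concealed; only the master trainer (holding the
# key) can recover it.
concealed_local_model = cipher.encrypt(b"<serialized local model>")
local_model = cipher.decrypt(concealed_local_model)
```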

In a particular embodiment, the concealed format prevents any third party from decrypting the one or more software packages.

In a particular embodiment, the one or more software packages comprise one or more Open Container Initiative (OCI) packages.

In a particular embodiment, transmitting the master model to the second radio node includes transmitting a version of the software package that is updated based on the master model or transmitting a portion of the software package that is updated based on the master model.
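
Purely as an illustration of the two delivery options above, the sketch below builds either a full updated package that embeds the master model or only the updated portion (the serialized model alone). The archive layout and file names are assumptions made for the example.

```python
# Sketch: ship either a full updated package version containing the master
# model, or only the updated portion. Layout and file names are illustrative.
import io
import json
import tarfile

master_model = {"w": [0.1, -0.3], "b": 0.05}
model_bytes = json.dumps(master_model).encode()

# Option 1: a version of the software package updated with the master model.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    info = tarfile.TarInfo(name="model/master_model.json")
    info.size = len(model_bytes)
    tar.addfile(info, io.BytesIO(model_bytes))
full_updated_package = buf.getvalue()

# Option 2: only the portion of the package that changed (the model file).
updated_portion = model_bytes
```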

In a particular embodiment, the one or more software packages include at least one policy for performing the local training of and/or generating of the local model and/or at least one policy for performing data pre-processing for generating at least one feature for the local training of and/or generating of the local model.

In a particular embodiment, the first radio node transmits information to the second radio node operating as the local trainer. The information is transmitted in a concealed format that prevents the second radio node from decrypting the information. The information includes at least one training policy for performing the local training of and/or generating of the local model and/or at least one data pre-processing policy for generating at least one feature for the local training of and/or generating of the local model.

In a particular embodiment, the information is communicated between the one or more software packages and a platform operating on the one or more other radio nodes via at least one API. In a particular embodiment, the platform is unable to decrypt the one or more software packages and/or the information.
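
The following sketch illustrates one possible shape of such an API between the concealed software package and the host platform, with the platform only ever handling opaque (already encrypted) blobs that it cannot decrypt. The interface and method names are assumptions for the example.

```python
# Illustrative API surface between the concealed package and the platform on
# the local radio node; the platform relays opaque blobs it cannot decrypt.
from abc import ABC, abstractmethod


class PlatformApi(ABC):
    """Hypothetical interface the platform exposes to the concealed package."""

    @abstractmethod
    def read_local_data(self) -> bytes:
        """Provide raw local data to the (concealed) package."""

    @abstractmethod
    def send_to_master(self, concealed_blob: bytes) -> None:
        """Forward an already-encrypted local model or report to the master trainer."""

    @abstractmethod
    def receive_from_master(self) -> bytes:
        """Return the latest concealed information (e.g. master model or policies)."""
```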

In a particular embodiment, the information comprises the master model.

In a particular embodiment, the at least one training policy indicates at least one of: a batch size; a learning rate; an optimization algorithm for minimizing a cost function; a regularization type or method; at least one parameter associated with a regularization type or method; and a maximum depth of a decision tree used by the local model.
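
For illustration only, such a training policy could be encoded as a small structured object, e.g. as sketched below; the concrete field names and defaults are assumptions for the example.

```python
# Illustrative encoding of a training policy covering the indications above.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TrainingPolicy:
    batch_size: int = 32
    learning_rate: float = 1e-3
    optimizer: str = "adam"                # optimization algorithm for the cost function
    regularization: str = "l2"             # regularization type or method
    regularization_strength: float = 1e-4  # parameter associated with the regularization
    max_tree_depth: Optional[int] = None   # only relevant for decision-tree local models


policy = TrainingPolicy(batch_size=64, learning_rate=5e-4, max_tree_depth=8)
```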

In a particular embodiment, the at least one data pre-processing policy indicates at least one of: imputation of missing values, feature scaling and/or encoding, generation of polynomial features, SVD-based transformation, and discretization and/or quantization of continuous values into discrete features.
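
As an illustration only, the pre-processing options listed above map naturally onto a pipeline such as the scikit-learn sketch below; which steps are enabled, and with which parameters, would be dictated by the (concealed) pre-processing policy.

```python
# Illustrative pre-processing pipeline covering the policy options above.
from sklearn.decomposition import TruncatedSVD
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer, PolynomialFeatures, StandardScaler

preprocessing = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),       # imputation of missing values
    ("scale", StandardScaler()),                      # feature scaling
    ("poly", PolynomialFeatures(degree=2)),           # polynomial feature generation
    ("svd", TruncatedSVD(n_components=5)),            # SVD-based transformation
    ("discretize", KBinsDiscretizer(n_bins=4, encode="ordinal")),  # discretization/quantization
])
```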

In a particular embodiment, the information is based on the master model.

In a particular embodiment, when receiving the local model, the first radio node receives an updated version of the one or more software packages that includes the local model.

In a particular embodiment, the first radio node transmits, to a third radio node operating as an additional local trainer, the one or more software packages for performing local training of and/or generating a local model, the one or more software packages transmitted in the concealed format. The first radio node receives, from the third radio node operating as the additional local trainer, a second local model in the concealed format that only the first radio node operating as the master trainer is able to decrypt. The first radio node transmits the master model to the third radio node operating as the additional local trainer, and the master model is generated based on the first local model and the second local model.

In a particular embodiment, the second radio node and the third radio node are located at different locations.

In a particular embodiment, at least one of the software package, the first local model, and the master model comprises an AI model.

In a particular embodiment, the first radio node operating as the master trainer is associated with at least one of: an Operation & Maintenance node or system, a SMO node or system, a Non-RT RIC, a Near-RT RIC, a Core Network node, a gNB, and a gNB-CU.

FIGURE 17 illustrates an example method 1400 by a second radio node operating as a local trainer, according to certain embodiments. The method begins at step 1402 when the second radio node receives, from a first radio node operating as a master trainer 120, one or more software packages for performing local training of and/or generating of a local model. The one or more software packages are received in a concealed format that prevents the second radio node from decrypting the one or more software packages. At step 1404, the second radio node uses the one or more software packages to perform the local training of and/or the generating of the local model. At step 1406, the second radio node transmits the local model to the first radio node in a concealed format that the master trainer is able to decrypt. At step 1408, the second radio node receives, from the first radio node, a master model that is generated based on the local model.
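
For illustration only, the sketch below mirrors these steps on the second radio node, where the package and the models are never visible in the clear. run_concealed_package stands in for a trusted runtime able to execute the opaque package; all helper names are assumptions for the example.

```python
# Minimal sketch of the local-trainer side of method 1400 (FIGURE 17). The
# node only handles concealed artifacts; a trusted runtime executes the
# package and returns an already-encrypted local model.
def concealed_local_trainer_round(receive_from_master, send_to_master,
                                  run_concealed_package, local_data):
    concealed_package = receive_from_master()                       # step 1402
    concealed_local_model = run_concealed_package(concealed_package,
                                                  local_data)       # step 1404
    send_to_master(concealed_local_model)                           # step 1406
    concealed_master_model = receive_from_master()                  # step 1408
    return concealed_master_model
```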

In a particular embodiment, the concealed format is an encrypted format.

In a particular embodiment, the concealed format prevents any third party from decrypting the one or more software packages.

In a particular embodiment, the one or more software packages comprise one or more OCI packages.

In a particular embodiment, when receiving the master model from the first radio node, the second radio node receives a version of the software package that is updated based on the master model or a portion of the software package that is updated based on the master model.

In a particular embodiment, the one or more software packages include at least one policy for performing the local training of and/or generating of the local model and/or at least one policy for performing data pre-processing for generating at least one feature for the local training of and/or generating of the local model.

In a particular embodiment, the second radio node receives information from the first radio node operating as the master trainer. The information is received in a concealed format that prevents the second radio node from decrypting the information, and the information includes at least one training policy for performing the local training of and/or generating of the local model and/or at least one data pre-processing policy for generating at least one feature for the local training of and/or generating of the local model.

In a particular embodiment, the information is communicated between the one or more software packages and a platform operating on the second radio node via at least one API.

In a particular embodiment, the platform is unable to decrypt the one or more software packages and/or the information.

In a particular embodiment, the information comprises the master model.

In a particular embodiment, the at least one training policy indicates at least one of: a batch size; a learning rate; an optimization algorithm for minimizing a cost function; a regularization type or method; at least one parameter associated with a regularization type or method; and a maximum depth of a decision tree used by the local model.

In a particular embodiment, the at least one data pre-processing policy indicates at least one of: imputation of missing values, feature scaling and/or encoding, generation of polynomial features, SVD-based transformation, and discretization and/or quantization of continuous values into discrete features.

In a particular embodiment, the information is based on the master model.

In a particular embodiment, when transmitting the local model, the second radio node transmits an updated version of the one or more software packages that includes the local model.

In a particular embodiment, at least one of the one or more software packages, the local model, and the master model comprises an AI model.

In a particular embodiment, the second radio node operating as the local trainer is associated with at least one of a gNB, a UE, and a Near-RT RIC.

Although the computing devices described herein (e.g., UEs, network nodes, hosts) may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Moreover, while components are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components. For example, a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface. In another example, non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.

In certain embodiments, some or all of the functionality described herein may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium. In alternative embodiments, some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a non-transitory computer-readable storage medium or not, the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.

EXAMPLE EMBODIMENTS

Group A Example Embodiments

Example Embodiment A1. A method by a UE for concealed FL, the method comprising: any of the UE steps, features, or functions described above, either alone or in combination with other steps, features, or functions described above.

Example Embodiment A2. The method of the previous embodiment, further comprising one or more additional UE steps, features or functions described above.

Example Embodiment A3. The method of any of the previous embodiments, further comprising: providing user data; and forwarding the user data to a host computer via the transmission to the network node.

Group B Example Embodiments

Example Embodiment B1. A method performed by a network node for concealed FL, the method comprising: any of the network node steps, features, or functions described above, either alone or in combination with other steps, features, or functions described above.

Example Embodiment B2. The method of the previous embodiment, further comprising one or more additional network node steps, features or functions described above.

Example Embodiment B3. The method of any of the previous embodiments, further comprising: obtaining user data; and forwarding the user data to a host or a UE.

Group C Example Embodiments

Example Embodiment C1. A method by a first radio node for concealed FL, the method comprising: transmitting, to one or more other radio nodes (e.g., at least a second radio node) that are located remotely from the first radio node, a local training package for performing local training and/or generating a local model; receiving one or more local models from the one or more other radio nodes; and based on the one or more local models received from the one or more other radio nodes, generating a master model; and transmitting the master model to the one or more other radio nodes.

Example Embodiment C2. The method of Example Embodiment C1, further comprising transmitting, to the one or more other radio nodes, at least one policy for performing the local training.

Example Embodiment C3. The method of Example Embodiment C2, wherein the at least one policy for performing the local training is transmitted to the one or more radio nodes with the local training package.

Example Embodiment C4. The method of Example Embodiment C2, wherein the at least one policy for performing the local training is transmitted to the one or more radio nodes separately from the local training package.

Example Embodiment C5. The method of any one of Example Embodiments C2 to C4, wherein the at least one policy for performing the local training is concealed from the one or more radio nodes.

Example Embodiment C6. The method of any one of Example Embodiments C1 to C5, further comprising, transmitting, to the one or more other radio nodes, a pre-processing package for performing data pre-processing at the respective radio node, the data pre-processing performed prior to generating the local model, wherein the output of performing the data pre-processing is the input for the local training package.

Example Embodiment C7. The method of Example Embodiment C6, further comprising transmitting, to the one or more other radio nodes, at least one policy for performing the data pre-processing.

Example Embodiment C8. The method of Example Embodiment C7, wherein the at least one policy for performing the data pre-processing is transmitted to the one or more radio nodes with the pre-processing package.

Example Embodiment C9. The method of Example Embodiment C7, wherein the at least one policy for performing the data pre-processing is transmitted to the one or more radio nodes separately from the pre-processing package.

Example Embodiment C10. The method of any one of Example Embodiments C7 to C9, wherein the at least one policy for performing the data pre-processing is concealed from the one or more radio nodes.

Example Embodiment C11. The method of any one of Example Embodiments C6 to C10, wherein operations associated with the pre-processing package for performing the data pre-processing are concealed from the one or more radio nodes.

Example Embodiment C12. The method of any one of Example Embodiments C1 to C11, wherein: the one or more radio nodes comprise a plurality of radio nodes, and receiving the one or more local models comprises receiving a plurality of local models, wherein a local model is associated with a respective one of the plurality of radio nodes; and the master model is generated based on the plurality of local models.

Example Embodiment C13. The method of any one of Example Embodiments C1 to C12, wherein operations associated with the local training package for performing the local training and/or generating the local model are concealed from the one or more radio nodes.

Example Embodiment C14. The method of any one of Example Embodiments C1 to C13, wherein the radio node is a user equipment (UE).

Example Embodiment C15. The method of Example Embodiment C14, further comprising: providing user data; and forwarding the user data to a host via the transmission to the network node.

Example Embodiment C16. The method of any one of Example Embodiments C1 to C13, wherein the radio node is a network node.

Example Embodiment C17. The method of Example Embodiment C16, further comprising: obtaining user data; and forwarding the user data to a host or a user equipment.

Example Embodiment C18. A radio node comprising processing circuitry configured to perform any of the methods of Example Embodiments C1 to C17.

Example Embodiment C19. A radio node comprising processing circuitry configured to perform any of the methods of Example Embodiments C1 to C17.

Example Embodiment C20. A computer program comprising instructions which when executed on a computer perform any of the methods of Example Embodiments C1 to C17.

Example Embodiment C21. A computer program product comprising a computer program, the computer program comprising instructions which when executed on a computer perform any of the methods of Example Embodiments C1 to C17.

Example Embodiment C22. A non-transitory computer readable medium storing instructions which when executed by a computer perform any of the methods of Example Embodiments C1 to C17.

Group D Example Embodiments

Example Embodiment D1. A method by a second radio node for concealed FL, the method comprising: receiving, from a first radio node, a local training package for performing local training to generate a local model; based on the local training package, performing the local training to generate the local model; and transmitting, to the first radio node, the local model; and receiving, from the first radio node, a master model that is generated based on the local model and at least one other local model associated with at least one other radio node; and storing the master model.

Example Embodiment D2. The method of Example Embodiment D1, further comprising receiving, from the first radio node, at least one policy for performing the local training.

Example Embodiment D3. The method of Example Embodiment D2, wherein the at least one policy for performing the local training is received with the local training package.

Example Embodiment D4. The method of Example Embodiment D2, wherein the at least one policy for performing the local training is received separately from the local training package.

Example Embodiment D5. The method of any one of Example Embodiments D2 to D4, wherein the at least one policy for performing the local training is concealed from the second radio node.

Example Embodiment D6. The method of any one of Example Embodiments D1 to D5, further comprising receiving, from the first radio node, a pre-processing package for performing data pre-processing at the second radio node, the data pre-processing performed prior to generating the local model, wherein the output of performing the data pre-processing is the input for the local training package.

Example Embodiment D7. The method of Example Embodiment D6, further comprising receiving, from the first radio node, at least one policy for performing the data pre-processing.

Example Embodiment D8. The method of Example Embodiment D7, wherein the at least one policy for performing the data pre-processing is received with the pre-processing package.

Example Embodiment D9. The method of Example Embodiment D7, wherein the at least one policy for performing the data pre-processing is received separately from the pre-processing package.

Example Embodiment D10. The method of any one of Example Embodiments D7 to D9, wherein the at least one policy for performing the data pre-processing is concealed from the second radio node.

Example Embodiment D11. The method of any one of Example Embodiments D6 to D10, wherein operations associated with the pre-processing package for performing the data pre-processing are concealed from the second radio node.

Example Embodiment D12. The method of any one of Example Embodiments D1 to D11, wherein operations associated with the local training package for performing the local training and/or generating the local model are concealed from the second radio node.

Example Embodiment D13. The method of any one of Example Embodiments D1 to D12, wherein the second radio node is a UE.

Example Embodiment D14. The method of Example Embodiment D13, further comprising: providing user data; and forwarding the user data to a host via the transmission to the network node.

Example Embodiment D15. The method of any one of Example Embodiments D1 to D12, wherein the second radio node is a network node.

Example Embodiment D16. The method of Example Embodiment D15, further comprising: obtaining user data; and forwarding the user data to a host or a user equipment.

Example Embodiment D17. A second radio node comprising processing circuitry configured to perform any of the methods of Example Embodiments D1 to D16.

Example Embodiment D18. A second radio node comprising processing circuitry configured to perform any of the methods of Example Embodiments D1 to D16.

Example Embodiment D19. A computer program comprising instructions which when executed on a computer perform any of the methods of Example Embodiments D1 to D16.

Example Embodiment D20. A computer program product comprising a computer program, the computer program comprising instructions which when executed on a computer perform any of the methods of Example Embodiments D1 to D16.

Example Embodiment D21. A non-transitory computer readable medium storing instructions which when executed by a computer perform any of the methods of Example Embodiments D1 to D16.

Group E Example Embodiments

Example Embodiment E1. A UE for concealed FL, comprising: processing circuitry configured to perform any of the steps of any of the Group A, C, and D Example Embodiments; and power supply circuitry configured to supply power to the processing circuitry.

Example Embodiment E2. A network node for concealed FL, the network node comprising: processing circuitry configured to perform any of the steps of any of the Group B, C, and D Example Embodiments; power supply circuitry configured to supply power to the processing circuitry.

Example Embodiment E3. A UE for concealed FL, the UE comprising: an antenna configured to send and receive wireless signals; radio front-end circuitry connected to the antenna and to processing circuitry, and configured to condition signals communicated between the antenna and the processing circuitry; the processing circuitry being configured to perform any of the steps of any of the Group A, C, and D Example Embodiments; an input interface connected to the processing circuitry and configured to allow input of information into the UE to be processed by the processing circuitry; an output interface connected to the processing circuitry and configured to output information from the UE that has been processed by the processing circuitry; and a battery connected to the processing circuitry and configured to supply power to the UE.

Example Embodiment E4. A host configured to operate in a communication system to provide an OTT service, the host comprising: processing circuitry configured to provide user data; and a network interface configured to initiate transmission of the user data to a cellular network for transmission to a UE, wherein the UE comprises a communication interface and processing circuitry, the communication interface and processing circuitry of the UE being configured to perform any of the steps of any of the Group A, C, and D Example Embodiments to receive the user data from the host.

Example Embodiment E5. The host of the previous Example Embodiment, wherein the cellular network further includes a network node configured to communicate with the UE to transmit the user data to the UE from the host.

Example Embodiment E6. The host of the previous 2 Example Embodiments, wherein: the processing circuitry of the host is configured to execute a host application, thereby providing the user data; and the host application is configured to interact with a client application executing on the UE, the client application being associated with the host application.

Example Embodiment E7. A method implemented by a host operating in a communication system that further includes a network node and a UE, the method comprising: providing user data for the UE; and initiating a transmission carrying the user data to the UE via a cellular network comprising the network node, wherein the UE performs any of the operations of any of the Group A embodiments to receive the user data from the host.

Example Embodiment E8. The method of the previous Example Embodiment, further comprising: at the host, executing a host application associated with a client application executing on the UE to receive the user data from the UE.

Example Embodiment E9. The method of the previous Example Embodiment, further comprising: at the host, transmitting input data to the client application executing on the UE, the input data being provided by executing the host application, wherein the user data is provided by the client application in response to the input data from the host application.

Example Embodiment E10. A host configured to operate in a communication system to provide an OTT service, the host comprising: processing circuitry configured to provide user data; and a network interface configured to initiate transmission of the user data to a cellular network for transmission to a UE, wherein the UE comprises a communication interface and processing circuitry, the communication interface and processing circuitry of the UE being configured to perform any of the steps of any of the Group A, C, and D Example Embodiments to transmit the user data to the host.

Example Embodiment E11. The host of the previous Example Embodiment, wherein the cellular network further includes a network node configured to communicate with the UE to transmit the user data from the UE to the host.

Example Embodiment E12. The host of the previous 2 Example Embodiments, wherein: the processing circuitry of the host is configured to execute a host application, thereby providing the user data; and the host application is configured to interact with a client application executing on the UE, the client application being associated with the host application.

Example Embodiment E13. A method implemented by a host configured to operate in a communication system that further includes a network node and a UE, the method comprising: at the host, receiving user data transmitted to the host via the network node by the UE, wherein the UE performs any of the steps of any of the Group A, C, and D Example Embodiments to transmit the user data to the host.

Example Embodiment E14. The method of the previous Example Embodiment, further comprising: at the host, executing a host application associated with a client application executing on the UE to receive the user data from the UE.

Example Embodiment E15. The method of the previous Example Embodiment, further comprising: at the host, transmitting input data to the client application executing on the UE, the input data being provided by executing the host application, wherein the user data is provided by the client application in response to the input data from the host application.

Example Embodiment E16. A host configured to operate in a communication system to provide an OTT service, the host comprising: processing circuitry configured to provide user data; and a network interface configured to initiate transmission of the user data to a network node in a cellular network for transmission to a UE, the network node having a communication interface and processing circuitry, the processing circuitry of the network node configured to perform any of the operations of any of the Group B, C, and D Example Embodiments to transmit the user data from the host to the UE.

Example Embodiment E17. The host of the previous Example Embodiment, wherein: the processing circuitry of the host is configured to execute a host application that provides the user data; and the UE comprises processing circuitry configured to execute a client application associated with the host application to receive the transmission of user data from the host.

Example Embodiment E18. A method implemented in a host configured to operate in a communication system that further includes a network node and a UE, the method comprising: providing user data for the UE; and initiating a transmission carrying the user data to the UE via a cellular network comprising the network node, wherein the network node performs any of the operations of any of the Group B, C, and D Example Embodiments to transmit the user data from the host to the UE.

Example Embodiment E19. The method of the previous Example Embodiment, further comprising, at the network node, transmitting the user data provided by the host for the UE.

Example Embodiment E20. The method of any of the previous 2 Example Embodiments, wherein the user data is provided at the host by executing a host application that interacts with a client application executing on the UE, the client application being associated with the host application.

Example Embodiment E21. A communication system configured to provide an over-the-top service, the communication system comprising: a host comprising: processing circuitry configured to provide user data for a user equipment (UE), the user data being associated with the over-the-top service; and a network interface configured to initiate transmission of the user data toward a cellular network node for transmission to the UE, the network node having a communication interface and processing circuitry, the processing circuitry of the network node configured to perform any of the operations of any of the Group B, C, and D Example Embodiments to transmit the user data from the host to the UE.

Example Embodiment E22. The communication system of the previous Example Embodiment, further comprising: the network node; and/or the user equipment.

Example Embodiment E23. A host configured to operate in a communication system to provide an OTT service, the host comprising: processing circuitry configured to initiate receipt of user data; and a network interface configured to receive the user data from a network node in a cellular network, the network node having a communication interface and processing circuitry, the processing circuitry of the network node configured to perform any of the operations of any of the Group B, C, and D Example Embodiments to receive the user data from a UE for the host.

Example Embodiment E24. The host of the previous 2 Example Embodiments, wherein: the processing circuitry of the host is configured to execute a host application, thereby providing the user data; and the host application is configured to interact with a client application executing on the UE, the client application being associated with the host application.

Example Embodiment E25. The host of any of the previous 2 Example Embodiments, wherein the initiating receipt of the user data comprises requesting the user data.

Example Embodiment E26. A method implemented by a host configured to operate in a communication system that further includes a network node and a UE, the method comprising: at the host, initiating receipt of user data from the UE, the user data originating from a transmission which the network node has received from the UE, wherein the network node performs any of the steps of any of the Group B, C, and D Example Embodiments to receive the user data from the UE for the host.

Example Embodiment E27. The method of the previous Example Embodiment, further comprising at the network node, transmitting the received user data to the host.