Title:
METHOD AND APPARATUS FOR CONTROLLING THE TRAINING OF MACHINE LEARNING MODELS IN PIPELINED ANALYTICS SERVICE IMPLEMENTATIONS
Document Type and Number:
WIPO Patent Application WO/2023/227937
Kind Code:
A1
Abstract:
An Analytics Service Training Pipeline (ASTP) is defined, along with an ML Model and Analytics Registry (MMAR) function. Analytics Services and Subservices are identified by Analytics ID(s). The ASTP describes the data stages within an Analytics (Sub)Service implementation, and how they apply to ML model (re-)training operations. The NWDAF containing MMAR receives ASTP information from a Network Manager or NWDAF containing AnLF, stores it, and supplies it to the NWDAF containing MTLF, for use in ML model (re-)training. When an Analytics (Sub)Service implementation is deployed in the NWDAF containing AnLF, its ASTP is also made available in the NWDAF containing MMAR. When the NWDAF containing MTLF receives a model provision or info request, it retrieves the relevant ASTP, and proceeds accordingly. In this manner, once the ASTP definition is available at the NWDAF containing MTLF, it knows the information and order required to re-train a given ML model.

Inventors:
MONJAS LLORENTE MIGUEL ANGEL (ES)
GARCIA MARTIN MIGUEL ANGEL (ES)
Application Number:
PCT/IB2022/062899
Publication Date:
November 30, 2023
Filing Date:
December 29, 2022
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
G06N5/02; G06N20/20; H04L41/16
Foreign References:
US20220108214A12022-04-07
US20210326736A12021-10-21
US20220078140A12022-03-10
EP22382496A2022-05-25
EP21383092A2021-12-03
Other References:
LUO ZHAOJING ET AL: "MLCask: Efficient Management of Component Evolution in Collaborative Data Analytics Pipelines", 2021 IEEE 37TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE), IEEE, 19 April 2021 (2021-04-19), pages 1655 - 1666, XP033930342, DOI: 10.1109/ICDE51399.2021.00146
Attorney, Agent or Firm:
GREEN, Edward H. (US)
Claims:
CLAIMS

What is claimed is:

1. A method (100) of training or re-training one or more Machine Learning, ML, models associated with an Analytics Service, characterized by: receiving (102), from a consumer, one of a request for information about ML models, and a request for provision of one or more ML models, each request including at least one of an Analytics ID, an Analytics ID-Analytics Filter pair, and an ML model identifier; accessing (104) an ML Model and Analytics Registry, MMAR, storing mappings between Analytics Service Training Pipelines and at least one of Analytics IDs, Analytics ID-Analytics Filter pairs, and ML model identifiers, to retrieve an Analytics Service Training Pipeline associated with the identified Analytics ID, Analytics ID-Analytics Filter pair, or ML model identifier; and training or re-training (106) one or more ML models according to the retrieved Analytics Service Training Pipeline; wherein each Analytics Service Training Pipeline identifies data stages in an implementation of an associated Analytics Service or Analytics Subservice, and how the data stages apply to training operations of one or more ML models in the Analytics Service or Analytics Subservice implementation.

2. The method (100) of claim 1, further characterized by providing the consumer information about the ML models, or information for retrieving an ML model, associated with the Analytics Service or Analytics Subservice.

3. The method (100) of any preceding claim wherein accessing an MMAR to retrieve an Analytics Service Training Pipeline associated with the identified Analytics Service or Analytics Subservice comprises using an extension of the Nnwdaf services to retrieve the Analytics Service Training Pipeline.

4. The method (100) of claim 3 wherein an extension of the Nnwdaf services includes signaling for storing and retrieving Analytics Service Training Pipelines to and from the MMAR.

5. The method (100) of any preceding claim wherein an Analytics Service Training Pipeline is represented as a Directed Acyclic Graph, DAG, comprising an identifier, nodes, and edges, wherein each edge is directed from one node to another node, and wherein following the edge directions never forms a closed loop.

6. The method (100) of claim 5 wherein each node in a DAG of an Analytics Service Training Pipeline belongs to one of four classes, comprising data input, data output, data processor, and ML model, and wherein each edge in the DAG connects a first node to a second node in one direction.

7. The method (100) of claim 6 wherein one of, all the nodes in the DAG, and all edges in the DAG, include an attribute identifying an output data schema of the node or edge, respectively.

8. The method (100) of claim 6 wherein a data input node represents a data ingestion operation, and includes at least an attribute indicating a network function or other data source.

9. The method (100) of claim 6 wherein a data output node represents data delivered to the consumer of the Analytics Service, and includes at least an attribute identifying its output data schema.

10. The method (100) of claim 6 wherein a data processor node represents a transformation of data input to produce data output, and includes at least an identifier that enables the MMAR to retrieve a data processor implementation from a software component repository.

11. The method (100) of claim 6 wherein an ML model node represents an ML model trainer that receives data from an upstream data processor node and trains or re-trains an ML model, and includes an identifier that enables the MMAR to retrieve an ML model trainer implementation from a software component repository.

12. The method (100) of claim 11 wherein the identifier of an ML model node is the identifier of the ML model it trains.

13. The method according to any of the preceding claims where the method is performed by a Model Training Logical Function, MTLF, of a telecommunication network.

14. A method (200), performed by a Machine Learning, ML, Model and Analytics Registry, MMAR, of providing information for training or re-training ML models, wherein the MMAR maps Analytics Service Training Pipelines to one of Analytics IDs, Analytics ID-Analytics Filter pairs, and ML model identifiers, the method (200) characterized by: receiving (202), from a Model Training Logical Function, MTLF, a request for an Analytics Service Training Pipeline, the request including at least one of an Analytics ID, an Analytics ID-Analytics Filter pair, and an ML model identifier; and returning (204) information about an Analytics Service Training Pipeline associated with the Analytics ID, Analytics ID-Analytics Filter pair, or ML model identifier; whereby the MTLF determines how to train or re-train one or more ML models associated with the received Analytics Service Training Pipeline.

15. The method (200) of claim 14 wherein information about an Analytics Service Training Pipeline associated with the Analytics ID, Analytics ID-Analytics Filter pair, or ML model identifier comprises the Analytics Service Training Pipeline.

16. The method (200) of claim 14 wherein information about an Analytics Service Training Pipeline associated with the Analytics ID, Analytics ID-Analytics Filter pair, or ML model identifier comprises a Uniform Resource Locator (URL) of a Pipeline Registry containing the Analytics Service Training Pipeline.

17. The method (200) of claim 14, further characterized by, prior to receiving the request from the MTLF: receiving, from a Network Manager, a request to store an Analytics Service Training Pipeline, the request including at least one of an Analytics ID, an Analytics ID-Analytics Filter pair, and an ML model identifier; storing a mapping between the Analytics ID, Analytics ID-Analytics Filter pair, or ML model identifier and the Analytics Service Training Pipeline; and storing the Analytics Service Training Pipeline, or a Uniform Resource Locator (URL) of a Pipeline Registry containing the Analytics Service Training Pipeline.

18. The method (200) of claim 14, further characterized by, prior to receiving the request from the MTLF: receiving, from an Analytics Logical Function, AnLF, a request to store an Analytics Service Training Pipeline, the request including at least one of an Analytics ID, an Analytics ID-Analytics Filter pair, and an ML model identifier; storing a mapping between at least one of an Analytics ID, an Analytics ID-Analytics Filter pair, and an ML model identifier and the Analytics Service Training Pipeline; and storing the Analytics Service Training Pipeline, or a Uniform Resource Locator (URL) of a Pipeline Registry containing the Analytics Service Training Pipeline.

19. A network node (20, 30) of a telecommunication network implementing a data analytics network function, characterized by: communication circuitry (26) configured to communicate with other nodes of the telecommunication network; and processing circuitry (22) operatively connected to the communication circuitry and configured to: implement a Machine Learning, ML, Model Training Logical Function, MTLF, configured to train and re-train ML models according to Analytics Service Training Pipelines associated with Analytics IDs, Analytics ID-Analytics Filter pairs, or ML model identifiers specified by consumers, and provide the trained and re-trained ML models to the consumers; and implement an ML Model and Analytics Registry, MMAR, configured to map one of Analytics IDs, Analytics ID-Analytics Filter pairs, and ML model identifiers to Analytics Service Training Pipelines, and to provide the MTLF information about an Analytics Service Training Pipeline in response to a request including an Analytics ID, an Analytics ID-Analytics Filter pair, or ML model identifier.

20. The network node (20, 30) of claim 19 wherein information about an Analytics Service Training Pipeline comprises the Analytics Service Training Pipeline.

21. The network node (20, 30) of claim 19 wherein information about an Analytics Service Training Pipeline comprises a Uniform Resource Locator, URL, of a Pipeline Registry containing the Analytics Service Training Pipeline.

22. The network node (20, 30) of any of claims 19-21, wherein the processing circuitry is further configured to implement an Analytics Logical Function (AnLF) configured to provide one or more Analytics IDs to the MTLF, and to receive from the MTLF one or more ML models associated with the Analytics Services identified by the corresponding Analytics ID(s), the ML models trained or re-trained according to Analytics Service Training Pipelines associated with the Analytics Services.

23. The network node (20, 30) of any of claims 19-21, wherein the processing circuitry is further configured to implement an Analytics Logical Function (AnLF) configured to provide one or more Analytics ID-Analytics Filter pairs to the MTLF, and to receive from the MTLF one or more ML models associated with the Analytics Subservices identified by the corresponding Analytics ID-Analytics Filter pairs, the ML models trained or re-trained according to Analytics Service Training Pipelines associated with the Analytics Subservices.

24. The network node (20, 30) of any of claims 19-23, wherein the MMAR is implemented in a data analytics network function of a telecommunications network.

25. A method (300) of generating and storing an Analytics Service Training Pipeline, characterized by: generating an Analytics Service Training Pipeline identifying data stages in an implementation of an associated Analytics Service or Analytics Subservice, and how the data stages apply to training operations of one or more ML models in the Analytics Service or Analytics Subservice implementation; and sending, to an ML Model and Analytics Registry, MMAR, the Analytics Service Training Pipeline and at least one of an Analytics ID, Analytics ID-Analytics Filter pair, and an ML model identifier.

26. The method (300) of claim 25, further characterized by: sending, to a Model Training Logical Function, MTLF, of a telecommunication network one of a request for information about ML models, and a request for provision of one or more ML models, each request including at least one of the Analytics ID, Analytics ID-Analytics Filter pair, and ML model identifier; and receiving, from the MTLF, information about a trained or re-trained ML model, or information for retrieving the trained or re-trained ML model, associated with the Analytics Service or Analytics Subservice; wherein the ML model is trained or re-trained according to the associated Analytics Service Training Pipeline.

27. The method (300) of claim 25 or claim 26, wherein the method is performed by an Analytics Logical Function, AnLF, of a telecommunications network.

28. A network node (20, 40) of a telecommunication network implementing a data analytics network function, characterized by: communication circuitry (26) configured to communicate with other nodes of the telecommunication network; and processing circuitry (22) operatively connected to the communication circuitry and configured to: implement an Analytics Logical Function, AnLF, configured to generate an Analytics Service Training Pipeline identifying data stages in an implementation of an associated Analytics Service or Analytics Subservice, and how the data stages apply to training operations of one or more ML models in the Analytics Service or Analytics Subservice implementation, and send the Analytics Service Training Pipeline and at least one of an Analytics ID, Analytics ID-Analytics Filter pair, and an ML model identifier to an ML Model and Analytics Registry, MMAR.

Description:
METHOD AND APPARATUS FOR CONTROLLING THE TRAINING OF MACHINE LEARNING MODELS IN PIPELINED ANALYTICS SERVICE IMPLEMENTATIONS

RELATED APPLICATIONS

This application claims priority to EP Application No. 22382496.2, filed 25 May 2022, the disclosure of which is incorporated in its entirety by reference herein.

FIELD OF DISCLOSURE

The present disclosure relates generally to telecommunication networks, and in particular to an Analytics Service Training Pipeline, defining the pipeline structure of an Analytics Service (use case) implementation, and providing information about the training and re-training of Machine Learning (ML) models associated with the Analytics Service.

BACKGROUND

Telecommunication networks, including network nodes and network devices, including radio network devices such as cellphones and smartphones, are ubiquitous in many parts of the world. These networks continue to grow in capacity and sophistication. To accommodate both more users and a wider range of types of devices that may benefit from telecommunications, the technical standards governing the operation of telecommunication networks continue to evolve. The fourth generation of network standards (4G, also known as Long Term Evolution, or LTE) has been deployed, the fifth generation (5G, also known as New Radio, or NR) is in development or the early stages of deployment, and the sixth generation (6G) is being planned.

Release 15 (Rel-15) of the Third Generation Partnership Project (3GPP) standard for 5G networks introduced a new Network Function (NF) called the Network Data Analytics Function (NWDAF), the basic functionality of which is specified in Release 16 (Rel-16). See 3GPP Technical Standard (TS) 23.288 v17.3.0, “Architecture enhancements for 5G System (5GS) to support network data analytics services.” Analytics Services are identified by an Analytics ID. It is additionally possible to use a finer-grained referencing system by using Analytics Filters, which can identify different scopes or outputs within a given Analytics Service identified by an Analytics ID. Herein, these are identified as Analytics Subservices, identified by an Analytics ID and Analytics Filter (also referred to herein as an Analytics ID-Analytics Filter pair). Generally, the NWDAF performs two types of analytics processing: statistical analytics and predictive analytics. Statistical analytics provide information about what is currently happening in the network (or has happened in the past). Predictive analytics provides information about what is likely to happen in the future, based on current and historical data. In both cases, advanced NWDAFs rely on Machine Learning (ML) models. As known in the art, ML is an application of Artificial Intelligence (AI) that refers to computer systems having the ability to automatically learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data - usually very voluminous datasets. The above-referenced 3GPP TS 23.288, at sec. 4.2.0, defines two network functions which a NWDAF may contain.

An Analytics Logical Function (AnLF) is a component of a NWDAF that performs data analytics and exposes the analytics service(s) to other NFs. A Model Training Logical Function (MTLF) is a component that trains (and re-trains) ML models for use in providing the analytics service(s) and exposes the training service to other NFs. A NWDAF can include an AnLF (referred to as “NWDAF containing AnLF”), an MTLF (“NWDAF containing MTLF”), or both. A NWDAF containing MTLF can train the machine learning model(s) used in providing an analytics service by another NWDAF containing AnLF.

Figure 1, which is based on Figure 6.2A.1-1 of 3GPP TS 23.288, depicts the specified procedure by which a consumer, such as an NWDAF containing AnLF, retrieves ML model(s) associated with an Analytics Service (or with an Analytics Subservice within an Analytics Service, or with a set of Analytics Services), identified by its Analytics ID (or Analytics ID and Analytics Filter, or set of Analytics IDs, respectively), whenever said ML model(s) have been trained by the NWDAF containing MTLF and become available (i.e., via Nnwdaf_MLModelProvision services). The procedure is straightforward: (1) the NWDAF containing AnLF subscribes for receiving notifications related to a (set of) Analytics Service(s), or a set of Analytics Subservices within one or several Analytics Services, by accessing an NWDAF containing MTLF. This subscription request supports the same Analytics Filters that a regular request for Analytics would support, so that Analytics Subservices can be identified. (2) the NWDAF containing MTLF responds to the subscription request.

Upon the creation or re-training of an ML model, (3) the NWDAF containing MTLF delivers a notification to the subscribed NWDAF containing AnLF that includes a set of two-item tuples: the Analytics ID (perhaps including an Analytics Filter as well) related to the ML model; and a Uniform Resource Locator (URL) for the ML model associated with the Analytics Service(s), or with the Analytics Subservices that fulfill the Analytics Filter condition. (4) the NWDAF containing AnLF responds to the notification.
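
For illustration only, and not by way of limitation, the notification payload described above (a set of two-item tuples pairing an Analytics ID, optionally with an Analytics Filter, with the URL of the trained ML model) might be handled as in the following Python sketch. The field names (analyticsId, analyticsFilter, mlModelUrl), the example values, and the Python representation are assumptions made for this sketch; they are not defined by 3GPP TS 23.288.

# Minimal sketch (assumption): an in-memory rendering of an
# Nnwdaf_MLModelProvision notification as a list of two-item tuples, each pairing
# an Analytics ID (optionally qualified by an Analytics Filter) with the URL of
# the (re-)trained ML model artifact. Field names are illustrative only.
notification = [
    {
        "analyticsId": "UE_MOBILITY",                      # illustrative Analytics ID
        "analyticsFilter": {"areaOfInterest": "TA-1"},     # optional Analytics Filter
        "mlModelUrl": "https://model-repo.example/models/mobility/1.2.5",
    },
]

def handle_notification(items):
    """AnLF-side handling: record where each (re-)trained ML model can be fetched."""
    for item in items:
        analytics_filter = item.get("analyticsFilter") or {}
        key = (item["analyticsId"], tuple(sorted(analytics_filter.items())))
        print(f"ML model for {key} is available at {item['mlModelUrl']}")

handle_notification(notification)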

This process implies that the NWDAF containing AnLF receives the address of an ML model artifact in a Model Repository (which may or may not be hosted by the NWDAF containing MTLF). Implicitly, the NWDAF containing AnLF is expected to be able to use this artifact to properly serve the ML model and perform its inference task.

Figure 2, which is based on Figure 6.2A.3-1 of 3GPP TS 23.288, depicts a single request/response procedure by which a consumer, such as an NWDAF containing AnLF, retrieves information about ML models associated with a (set of) Analytics ID(s), or with a set of Analytics Filters (within one or several Analytics Services), from an NWDAF containing MTLF, as described in sec. 6.2A.3 of 3GPP TS 23.288 (i.e., via Nnwdaf_MLModelInfo services). (1) The consumer submits an Nnwdaf_MLModelInfo_Request request, and (2) receives an Nnwdaf_MLModelInfo_Request response. As with the Nnwdaf_MLModelProvision services, the request in the Nnwdaf_MLModelInfo services refers to a (set of) Analytics Service(s), or a set of Analytics Subservices within one or several Analytics Services. The service operation response is the same set of two-item tuples described above.

These service operations assume that an ML model is associated with a given Analytics Service (or with an Analytics Subservice within an Analytics Service), and vice versa, that each Analytics Service (or Analytics Subservice within an Analytics Service) uses a single ML model. Note that all references herein to an ML model refer broadly to any kind of AI/ML model, regardless of whether traditional ML techniques or more advanced AI approaches are used in its creation and (re-)training. This one-model-per-service assumption is a drawback that is mitigated by an approach described in co-pending patent application EP21383092.0, assigned to the assignee of the present disclosure, by including an explicit ML model identifier in the Nnwdaf_MLModelInfo and Nnwdaf_MLModelProvision service operations. This enables a finer-grained selection of the items subject to re-training at the NWDAF containing MTLF and subsequent notification.

However, even this improvement is not sufficient in several scenarios related to chaining ML models to implement an Analytics Service (or Analytics Subservice within an Analytics Service), where the prediction generated by one ML model is used as one of the inputs for another ML model in a sequence. Consider, for example, scenarios such as those depicted in Figure 3 and Figure 4, where an Analytics Service (or Analytics Subservice within an Analytics Service) is served by several chained ML models - where the output of one ML model is the input (possibly with some intermediate data processing) of another, downstream, ML model.

Figure 3 depicts three different ML models receiving the same inputs, each generating an output; a final function either aggregates the ML model outcomes or subjects them to a voting procedure. The result of the aggregation/voting function is the system output.

Figure 4 depicts a scenario where ML model B receives the system input and, e.g., classifies the input. Input data are additionally input to downstream ML models A and C. The system output is the output of ML model A or ML model C, selected in response to the output of ML model B.
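
For illustration only, the two chaining patterns of Figure 3 and Figure 4 can be sketched as follows. The model objects, their inputs and outputs, and the predict-style callables are assumptions made for this sketch; they merely illustrate why the output of one ML model can become the input (or the selector) of another.

from statistics import mode

# Minimal sketch (assumption): three placeholder "models" represented as plain
# Python callables. Figure 3 combines their outputs by voting (or aggregation);
# Figure 4 uses model B as a classifier whose output selects model A or model C.
def model_a(features):
    return "output_A"

def model_b(features):
    # classifier: its output determines which downstream model is used
    return "class_1" if features.get("load", 0) > 10 else "class_2"

def model_c(features):
    return "output_C"

def figure3_voting(features, models):
    """Figure 3 pattern: run all models on the same input and vote on the outcomes."""
    return mode(m(features) for m in models)

def figure4_routing(features):
    """Figure 4 pattern: model B classifies the input; its output selects model A or C."""
    selected = model_a if model_b(features) == "class_1" else model_c
    return selected(features)

print(figure3_voting({"load": 5}, [model_a, model_a, model_c]))   # "output_A" wins the vote
print(figure4_routing({"load": 42}))                              # model B routes to model A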

In either case, when the NWDAF containing MTLF receives a request from a consumer, e.g., from the NWDAF containing AnLF, to subscribe to notifications when a new ML model, or a new version of an existing ML model, is created (e.g., after ML model re-training), a two-step procedure is required. First, the NWDAF containing MTLF determines if re-training is required (i.e., on a predefined basis, or by continuously monitoring the ML model metrics in order to compare said metrics with some predefined thresholds). Second, when the re-training decision has been made, the NWDAF containing MTLF re-trains the ML model.

The ML model (re-)training procedure has a dependency: training data are required. As the input/output of each Analytics Service (or Analytics Subservice within an Analytics Service) is specified (at least the recommended or possible data sources, and the set of data the Analytics Service implementation may access or subscribe to), the NWDAF containing MTLF is expected to be able to request the storage of (or access to) such “standard” data when it re-trains an ML model.

However, in a chained scenario as the one in Figure 4, this expectation is not met: the NWDAF containing MTLF cannot re-train, e.g., ML model A unless the ML model B has been re-trained first, as the input to ML model A is at least partially the output of the upstream ML model B (possibly with some intermediate data processing).

Additionally, although the input to a given Analytics Service is specified, such a set of data is only a subset of all the data that could be used for computing a prediction. That is, during the ML model design phase, data scientists may determine that better results can be achieved by using additional, perhaps non-standardized, data sources and therefore, the NWDAF containing MTLF may need to access such data to carry out the re-training operation. This issue is more acute in multi-vendor environments, such as those being introduced in 3GPP Technical Specification Group Service and System Aspects Working Group 2 (3GPP TSG SA2). For example, an NWDAF containing MTLF may handle one or more ML models that support one or more Analytics Services (or Analytics Subservices) deployed in an NWDAF containing AnLF provided by a different vendor. If the NWDAF containing MTLF determines that an ML model must be re-trained, there is no standard procedure to re-train the dependent ML models.

The Background section of this disclosure is provided to place aspects of the present disclosure in technological and operational context, to assist those of skill in the art in understanding their scope and utility. Approaches described in the Background section could be pursued, but are not necessarily approaches that have been previously conceived or pursued. Unless explicitly identified as such, no statement herein is admitted to be prior art merely by its inclusion in the Background section.

SUMMARY

The following presents a simplified summary of the disclosure in order to provide a basic understanding to those of skill in the art. This summary is not an extensive overview of the disclosure and is not intended to identify key/critical elements of aspects of the disclosure or to delineate the scope of the disclosure. The sole purpose of this summary is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.

According to one or more aspects described and claimed herein, the concept of an Analytics Service Training Pipeline is introduced, along with a new logical function within the NWDAF, the ML Model and Analytics Registry (MMAR). The Analytics Service Training Pipeline describes the data stages within an Analytics Service implementation, and how they apply to ML model training operations. An Analytics Service may implement several pipelines (for example, one per Analytics Subservice within the Analytics Service). The NWDAF containing MMAR stores and supplies the Analytics Service Training Pipeline(s) information to the NWDAF containing MTLF, for use in carrying out ML model (re-)training operations. When an Analytics Service implementation is deployed in the NWDAF containing AnLF, its training pipeline definition(s) are also made available in the NWDAF containing MMAR. Then, when the NWDAF containing MTLF receives an Nnwdaf_MLModelProvision or Nnwdaf_MLModelInfo request, it retrieves the relevant training pipeline definition(s) from the NWDAF containing MMAR, and proceeds accordingly. In this manner, once the Analytics Service Training Pipeline definition is available at the NWDAF containing MTLF, it knows which information to retrieve to re-train a given ML model, and in which order (if applicable) the re-training must occur.

One aspect relates to a method of training or re-training one or more ML models associated with an Analytics Service (or Analytics Subservice within an Analytics Service). One of a request for information about ML models, and a request for provision of one or more ML models, is received from a consumer. Each request includes an Analytics ID (or an Analytics ID-Analytics Filter pair). An ML Model and Analytics Registry (MMAR) storing mappings between Analytics Services (or Analytics Subservices) and Analytics Service Training Pipelines is accessed to retrieve an Analytics Service Training Pipeline associated with the identified Analytics Service (or Analytics Subservice). One or more ML models are trained or re-trained according to the retrieved Analytics Service Training Pipeline. Each Analytics Service Training Pipeline identifies data stages in an implementation of an associated Analytics Service (or Analytics Subservice), and how the data stages apply to training operations of one or more ML models in the Analytics Service (or Analytics Subservice(s)) implementation.

Another aspect relates to a method, performed by a Machine Learning (ML) Model and Analytics Registry (MMAR), of providing information for training or re-training ML models, wherein the MMAR maps Analytics Service Training Pipelines to one of Analytics IDs, Analytics ID-Analytics Filter pairs, and ML model identifiers. An ML model training pipeline exposure request including an Analytics ID is received from a Model Training Logical Function (MTLF). Information about an Analytics Service Training Pipeline associated with the Analytics Service (or Analytics Subservice) identified by its corresponding Analytics ID (or Analytics ID-Analytics Filter pair) is returned. The MTLF determines how to train or re-train one or more ML models associated with the Analytics Service (or Analytics Subservice) in response to the received Analytics Service Training Pipeline.

Yet another aspect relates to a method of generating and storing an Analytics Service Training Pipeline. An Analytics Service Training Pipeline identifying data stages in an implementation of an associated Analytics Service or Analytics Subservice, and how the data stages apply to training operations of one or more ML models in the Analytics Service or Analytics Subservice implementation is generated. The Analytics Service Training Pipeline and at least one of an Analytics ID, Analytics ID-Analytics Filter pair, and an ML model identifier is sent to an ML Model and Analytics Registry (MMAR).

Still another aspect relates to a network node of a telecommunication network implementing a data analytics network function. The network node includes communication circuitry configured to communicate with other nodes of the telecommunication network, and processing circuitry operatively connected to the communication circuitry. The processing circuitry is configured to: implement a Model Training Logical Function (MTLF) configured to train and re-train ML models according to Analytics Service Training Pipelines associated with Analytics Services or Analytics Subservices specified by consumers, and provide the trained and re-trained ML models to the consumers; and implement an ML Model and Analytics Registry (MMAR) configured to map Analytics IDs (or pairs of Analytics IDs and Analytics Filters) to Analytics Service Training Pipelines, and to provide the MTLF information about Analytics Service Training Pipelines in response to exposure requests including Analytics IDs (or pairs of Analytics IDs and Analytics Filters).

Still another aspect relates to a network node of a telecommunication network implementing a data analytics network function. The network node includes communication circuitry configured to communicate with other nodes of the telecommunication network, and processing circuitry operatively connected to the communication circuitry. The processing circuitry is configured to implement an Analytics Logical Function (AnLF) configured to generate an Analytics Service Training Pipeline identifying data stages in an implementation of an associated Analytics Service or Analytics Subservice, and how the data stages apply to training operations of one or more ML models in the Analytics Service or Analytics Subservice implementation, and send the Analytics Service Training Pipeline and at least one of an Analytics ID, Analytics ID-Analytics Filter pair, and an ML model identifier to an ML Model and Analytics Registry (MMAR).

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which aspects of the disclosure are shown. However, this disclosure should not be construed as limited to the aspects set forth herein. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Like numbers refer to like elements throughout.

Figure 1 is a signaling diagram of prior art Machine Learning (ML) model provision, copied from 3GPP TS 23.288, Figure 6.2A.1-1.

Figure 2 is a signaling diagram of prior art ML model discovery, copied from 3GPP TS 23.288, Figure 6.2A.3-1.

Figure 3 is a diagram of multiple ML models operating cooperatively in a pipeline configuration.

Figure 4 is a diagram of multiple ML models operating in a pipeline configuration.

Figure 5 is a block diagram of a Network Data Analytics Function (NWDAF) containing an Analytics Logical Function (AnLF), an NWDAF containing a Model Training Logical Function (MTLF), and an NWDAF containing an inventive ML Model and Analytics Registry (MMAR).

Figure 6 depicts Store and Read operations between an NWDAF containing AnLF, NWDAF containing MTLF, Network Manager, and an inventive NWDAF containing MMAR.

Figure 7 is an example of a Directed Acyclic Graph (DAG).

Figure 8 is a diagram of an Analytics Service Training Pipeline.

Figure 9 is a signaling diagram of a Network Manager storing an Analytics Service Training Pipeline to an NWDAF containing MMAR.

Figure 10 is a signaling diagram of an NWDAF containing AnLF storing an Analytics Service Training Pipeline to an NWDAF containing MMAR.

Figure 11 is a signaling diagram of an NWDAF containing MTLF retrieving an Analytics Service Training Pipeline from an NWDAF containing MMAR.

Figure 12 is a flow diagram of a method 100 of (re-)training an ML model by use of an Analytics Service Training Pipeline.

Figure 13 is a flow diagram of a method 200 of providing information for (re-)training ML models.

Figure 14 is a flow diagram of a method 300 of generating and storing an Analytics Service Training Pipeline.

Figure 15 is a hardware block diagram of a network node implementing an NWDAF.

Figure 16 is a functional block diagram of a network node implementing an NWDAF having an MTLF and MMAR.

Figure 17 is a functional block diagram of a network node implementing an NWDAF having an AnLF.

DETAILED DESCRIPTION

For simplicity and illustrative purposes, the present disclosure is described by referring mainly to an exemplary aspect thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be readily apparent to one of ordinary skill in the art that the present disclosure may be practiced without limitation to these specific details. In this description, well known methods and structures have not been described in detail so as not to unnecessarily obscure the present disclosure.

Network entities and logical functions are described herein with reference to a 3GPP 5G telecommunication network, using nomenclature and acronyms from 3GPP TSs. However, aspects of the present disclosure are not limited to 5G, and may be advantageously applied to other network architectures. Inventive features, such as network entities and logical functions, are indicated in the drawings by dashed lines, and inventive services and signaling messages are indicated in this disclosure by italic typeface.

ML Model and Analytics Registry

Figure 5 depicts a block diagram of an NWDAF containing AnLF and NWDAF containing MTLF communicating with an inventive entity: an ML Model and Analytics Registry (MMAR). The NWDAF containing MMAR stores mappings between Analytics IDs (or pairs of Analytics IDs and Analytics Filters) and associated Analytics Service Training Pipelines. As indicated in Figure 5, the NWDAF containing AnLF and NWDAF containing MTLF communicate with the NWDAF containing MMAR via inventive extensions to the Nnwdaf services.

Figure 6 depicts the operations performed by Network Functions on the NWDAF containing MMAR. In particular, the NWDAF containing AnLF and a Network Manager may store Analytics Service Training Pipelines to the NWDAF containing MMAR, and the NWDAF containing MTLF may retrieve them.

The Analytics Service Training Pipelines stored by and retrieved from the NWDAF containing MMAR are themselves an inventive concept, and are more fully described herein. An Analytics Service Training Pipeline facilitates the proper (re-)training of ML models associated with some Analytics Services, such as those employing multiple ML models in a pipelined configuration. The MMAR may be structured and operated in a variety of ways; this is expected to be handled by an Operations, Administration and Maintenance (OAM) logical function.

Nnwdaf Service Extensions

As mentioned above, the NWDAF containing MTLF communicates with the NWDAF containing MMAR via inventive extensions to the Nnwdaf services. In particular, an inventive Nnwdaf_MLModelTrainingPipeline service is defined, which is added to the collection of Nnwdaf services. The Nnwdaf_MLModelTrainingPipeline service adds at least two service operations: Nnwdaf_MLModelTrainingPipeline_Register and Nnwdaf_MLModelTrainingPipeline_Get. These operations allow for the storage and retrieval, respectively, of Analytics Service Training Pipelines in the NWDAF containing MMAR.
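
For illustration only, the two service operations can be sketched as a minimal in-memory registry interface. The Python class, the method signatures, and the keying scheme below are assumptions made for this sketch; they are not a normative definition of the Nnwdaf_MLModelTrainingPipeline service.

# Minimal sketch (assumption): an in-memory stand-in for the NWDAF containing MMAR,
# exposing the two service operations named above. A key may be an Analytics ID,
# an Analytics ID-Analytics Filter pair, or an ML model identifier.
class MmarSketch:
    def __init__(self):
        # key -> Analytics Service Training Pipeline (or a URL referencing it)
        self._pipelines = {}

    def ml_model_training_pipeline_register(self, key, pipeline_or_url, anlf_id=None):
        """Nnwdaf_MLModelTrainingPipeline_Register: store the mapping from the key
        (optionally qualified by the registering AnLF identity) to the pipeline."""
        self._pipelines[(key, anlf_id)] = pipeline_or_url

    def ml_model_training_pipeline_get(self, key, anlf_id=None):
        """Nnwdaf_MLModelTrainingPipeline_Get: return the pipeline (or its URL)
        associated with the key, if any."""
        return self._pipelines.get((key, anlf_id))

mmar = MmarSketch()
mmar.ml_model_training_pipeline_register(
    "UE_MOBILITY", "https://pipeline-registry.example/mobility/pipeline_1")
print(mmar.ml_model_training_pipeline_get("UE_MOBILITY"))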

Analytics Service Training Pipelines

A key concept of aspects of the present disclosure is the existence and management of the Analytics Service Training Pipeline, which defines the pipeline structure of an Analytics Service (and associated Analytics Subservices), and hence provides vital information about the proper (re-)training of ML models associated with the Analytics Service (and associated Analytics Subservices). A Directed Acyclic Graph (DAG) is a directed graph (meaning the direction of progress through it is explicitly defined) having no directed cycles. A DAG consists of nodes and edges. Each edge is directed from one node to another, such that following those directions cannot ever form a closed loop (i.e., lead back to a starting node). Figure 7 is one example of a DAG, with the nodes depicted by circles (labeled with capital letters), and the edges represented by vectors. No matter which node one starts at, traversing the DAG from one node to the next along the directed edges can never return to the starting node.
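
For illustration only, the defining property of a DAG (that following the directed edges can never lead back to a starting node) can be checked with a standard topological-sort test, as in the following sketch. The node labels and the edge list are illustrative; the check itself is the well-known Kahn's algorithm.

from collections import defaultdict, deque

# Minimal sketch: a directed graph given as a list of (from, to) edges, and a
# check that it contains no directed cycle (i.e., that it is a DAG).
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")]

def is_dag(edge_list):
    """Kahn's algorithm: the graph is a DAG iff every node can be removed in
    topological order (no node is left with an unresolved predecessor)."""
    successors = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for source, target in edge_list:
        successors[source].append(target)
        indegree[target] += 1
        nodes.update((source, target))
    queue = deque(node for node in nodes if indegree[node] == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for nxt in successors[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return visited == len(nodes)

print(is_dag(edges))                    # True: no directed cycle
print(is_dag(edges + [("E", "A")]))     # False: the extra edge E->A closes a loop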

Analytics Service Training Pipelines describe, for a given Analytics Service or Analytics Subservice, the stages of data transformation needed for carrying out the (re-)training of the associated ML models. An Analytics Service, or Analytics Subservice, may implement several pipelines. At the same time, similar data pipelines exist in the Analytics Service implementation within the NWDAF containing AnLF, since the same data transformation operations are used when using the ML model(s) for inference. However, these pipelines are internal to the Analytics Service implementation, and need not be exposed. Within an Analytics Service Training Pipeline, nodes can belong to four different classes: data input; data output; data processor; and ML model. In one aspect, the edges of the Analytics Service Training Pipeline are simply directed connections between nodes - that is, from one node to another node (and only in that direction).

For interoperability, at least one of the input and output data schemas is explicitly specified. This can be done in a number of ways. For example, in one aspect, each node adds an attribute specifying its output data schema. Input data schemas thus need not be specified, as the input data schema of a given node is the output data schema of the immediate upstream node. Alternatively, of course, input data schemas may be explicitly specified, and nodes output data in conformance with the input data schema of the downstream node. As another alternative, the nodes may specify neither input nor output data schema, with data schemas being associated with edges, in which case each edge adds an attribute with the schema of the data that passes through it. In the aspect described below, each node explicitly specifies its output data schema; however, the present disclosure is not limited to this convention.

The four different classes of nodes in an Analytics Service Training Pipeline - data input; data output; data processor; and ML model - are each described below:

Data Input: A Data Input represents a data ingestion operation. In addition to its class, a Data Input description contains several attributes: At least, (a) the Network Function or data source from which the data originates; and (b) its output data schema.

Data Output: A Data Output represents the data delivered to the consumer of the Analytics Service. In addition to its class, a Data Output description contains at least an additional attribute: its output data schema.

Data Processor: A Data Processor represents a transformation of data input to produce data output - usually to generate the ML model data input or to transform the output data of an ML model so that they can be consumed by the next downstream node in the DAG. Other alternatives are also valid: chaining several Data Processors to create the data consumed by the next downstream node in the DAG. In addition to its class, a Data Processor node must specify several additional attributes: (a) its output data schema, and (b) an identifier that enables the NWDAF containing MMAR to retrieve the Data Processor implementation from a software component repository (for example, a Docker image name or a Helm Chart identifier), or to select the right software component from its internal implementation.

ML Model: An ML Model node represents an ML model trainer - a software component that can ingest data output by an upstream Data Processor and (re-)train an ML model. In addition to its class, an ML model must specify (a) its output data schema; and (b) an identifier that enables the NWDAF containing MMAR to retrieve the ML model trainer implementation from a software component repository, or to select the right software component from its internal implementation.
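
For illustration only, the four node classes can be rendered as simple data structures, as in the following sketch. The attribute names mirror the descriptions above; the Python dataclass form itself is an assumption made for this sketch, and the on-the-wire encoding is left open (see the JSON example below).

from dataclasses import dataclass

# Minimal sketch (assumption): dataclass renderings of the four node classes of an
# Analytics Service Training Pipeline, with the attributes listed above.
@dataclass
class DataInput:
    id: str
    represents: str          # Network Function or other data source the data originates from
    output_schema: dict      # output data schema

@dataclass
class DataOutput:
    id: str
    output_schema: dict      # schema of the data delivered to the consumer of the Analytics Service

@dataclass
class DataProcessor:
    id: str
    source: str              # identifier of the implementation in a software component repository
    output_schema: dict

@dataclass
class MLModel:
    id: str                  # also serves as the ML model identifier
    source: str              # identifier of the ML model trainer implementation
    output_schema: dict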

Figure 8 depicts an exemplary Analytics Service Training Pipeline. The Analytics Service Training Pipeline is in the form of a DAG. It consists of two Data Input nodes; three Data Processor nodes (two pre-processing data immediately before respective ML model nodes, and one post-processing data output by the last ML model node); two ML model nodes; and one Data Output node. Edges 1-7 define data flow sequentially from the two input nodes through the pipeline. Edge 8 indicates that some data from the input_1 node flow directly to the preprocessing node upstream of the second ML model node. From the information in this Analytics Service Training Pipeline, a NWDAF containing MTLF ascertains that to update the associated Analytics Service, it must (re-)train the first ML model prior to (re-)training the second ML model, as the latter operation depends on the output of the first ML model.

An Analytics Service Training Pipeline can be represented by means of any data exchange or data serialization format. The following is one exemplary representation of the Analytics Service Training Pipeline depicted in Figure 8, using JavaScript Object Notation (JSON):

{
  "id": "nwdaf_pipelines/mobility/pipeline_1",
  "nodes": [
    {
      "class": "data input",
      "id": "input_1",
      "represents": "AMF",
      "datastore": "database:77777",
      "database": "mobility",
      "collection": "live",
      "schema": {
        "$id": "/schemas/nwdaf/mobility_events",
        "name": "AmfStats",
        "properties": {
          "amfState": {"type": "string"},
          "currentTime": {"type": "integer"},
          "eventcounter": {"type": "string"},
          "startTime": {"type": "integer"}
        },
        "required": ["amfState", "startTime", "currentTime"],
        "type": "object"
      }
    },
    {
      "class": "data input",
      "id": "input_2",
      "represents": "OAM",
      "datastore": "topology:8001",
      "api_endpoint": "/nnrf-nfm/v1/nf-instances",
      "schema": {
        "$id": "/schemas/topology",
        "name": "Topology",
        "properties": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "cell_in": "string",
              "cell_out": "string"
            }
          }
        }
      }
    },
    {
      "class": "data processor",
      "id": "preprocessing_1",
      "source": "registry.ericsson.se:4567/nwdaf/mobility_preprocessor:0.1.1",
      "schema": {
        "type": "object",
        "properties": {
          "user_id": "string",
          "enodeB1": "string",
          "timestamp1": "number",
          "enodeB2": "string",
          "timestamp2": "number"
        }
      }
    },
    {
      "class": "model",
      "id": "nwdaf_models/mobility/model_1",
      "source": "registry.ericsson.se:4567/nwdaf/mobility_lstm_trainer:1.2.5",
      "schema": {
        "type": "object",
        "properties": {
          "x1": "string",
          "x2": "number"
        }
      }
    },
    {
      "class": "data processor",
      "id": "preprocessing_2",
      "source": "registry.ericsson.se:4567/nwdaf/mobility_forecast_residual_generator:0.0.6",
      "schema": {
        "type": "object",
        "properties": {
          "timestamp": "number",
          "residual": "number"
        }
      }
    },
    {
      "class": "model",
      "id": "nwdaf_models/mobility/model_2",
      "source": "registry.ericsson.se:4567/nwdaf/mobility_anomaly_detection_trainer:0.2.6",
      "schema": {
        "type": "object",
        "properties": {
          "x1": "number",
          "y1": "integer"
        }
      }
    },
    {
      "class": "data processor",
      "id": "postprocessing",
      "source": "registry.ericsson.se:4567/nwdaf/mobility_postprocessor:0.2.5",
      "schema": {
        "type": "object",
        "properties": {
          "timestamp": "number",
          "anomaly": "boolean"
        }
      }
    },
    {
      "class": "data output",
      "id": "do",
      "represents": "message broker",
      "brokers": ["broker:9999"],
      "topic": "analytics"
    }
  ],
  "edges": [
    {"from": "input_1", "to": "preprocessing_1", "id": "edge_1"},
    {"from": "input_2", "to": "preprocessing_1", "id": "edge_2"},
    {"from": "preprocessing_1", "to": "nwdaf_models/mobility/model_1", "id": "edge_3"},
    {"from": "nwdaf_models/mobility/model_1", "to": "preprocessing_2", "id": "edge_4"},
    {"from": "input_1", "to": "preprocessing_2", "id": "edge_8"},
    {"from": "preprocessing_2", "to": "nwdaf_models/mobility/model_2", "id": "edge_5"},
    {"from": "nwdaf_models/mobility/model_2", "to": "postprocessing", "id": "edge_6"},
    {"from": "postprocessing", "to": "do", "id": "edge_7"}
  ]
}

Where:

• id is the Analytics Service Training Pipeline identifier. It is assumed to be unique.

• nodes represent the nodes in the graph. For each node, the following keys can be found:
  o id: Node identifier. It is assumed to be unique. If the node belongs to the ML Model class, the node identifier is the ML model identifier used in the service operations described above.
  o represents: Describes the data source this node represents.
  o source: An identifier at a software component repository.
  o schema: Output data schema. In the example, JSON Schema is used.
  o datastore, database, collection, brokers, and topic: Specific keys that are relevant for data input or data output nodes and describe how to access them.

• edges represent the edges in the graph. For each edge, the following keys can be found:
  o id: Edge identifier. It is assumed to be unique.
  o from: The edge source.
  o to: The edge target.
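
For illustration only, the keys described above are sufficient to derive the order in which the ML models of a pipeline must be (re-)trained, as in the following sketch. The helper assumes the pipeline has been parsed (e.g., with json.loads) into a dictionary with the nodes and edges keys described above; the small inline example is illustrative. Applied to the exemplary mobility pipeline above, the same helper would list the first ML model node before the second, reflecting the training order discussed with Figure 8.

from collections import defaultdict, deque

def ml_model_training_order(pipeline):
    """Sketch: topologically order the pipeline nodes using the 'edges' (from/to)
    keys, then keep only the nodes of class 'model'. The result is the order in
    which the associated ML models must be (re-)trained."""
    successors = defaultdict(list)
    indegree = {node["id"]: 0 for node in pipeline["nodes"]}
    node_class = {node["id"]: node["class"] for node in pipeline["nodes"]}
    for edge in pipeline["edges"]:
        successors[edge["from"]].append(edge["to"])
        indegree[edge["to"]] += 1
    queue = deque(node_id for node_id, degree in indegree.items() if degree == 0)
    order = []
    while queue:
        node_id = queue.popleft()
        order.append(node_id)
        for nxt in successors[node_id]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return [node_id for node_id in order if node_class.get(node_id) == "model"]

# Illustrative two-model pipeline in the same format:
example = {
    "nodes": [
        {"id": "in", "class": "data input"},
        {"id": "m1", "class": "model"},
        {"id": "p", "class": "data processor"},
        {"id": "m2", "class": "model"},
        {"id": "out", "class": "data output"},
    ],
    "edges": [
        {"from": "in", "to": "m1"},
        {"from": "m1", "to": "p"},
        {"from": "p", "to": "m2"},
        {"from": "m2", "to": "out"},
    ],
}
print(ml_model_training_order(example))   # ['m1', 'm2']: m1 must be (re-)trained before m2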

Network Manager Storing Analytics Service Training Pipeline

A Network Manager, or any other Operations, Administration and Maintenance (OAM) function, may provision the NWDAF containing MMAR with Analytics Service Training Pipeline(s) for a given Analytics Service or Analytics Subservice. This allows a predominantly static configuration to support deployments that do not require dynamic changes. The signaling is depicted in Figure 9.

Furthermore, in some aspects, the Network Manager optionally adds additional granularity, by storing a different combination of Analytics Service Training Pipelines and Analytics Service, or Analytics Subservice, for each class or identifier of NWDAF containing AnLF. This feature allows for configuring the network such that different classes of NWDAF containing AnLF (or different instances of NWDAF containing AnLF) are configured to use different definitions of the Analytics Service Training Pipelines for a given Analytics Service. The NWDAF containing MTLF will read this information from the NWDAF containing MMAR, so that the NWDAF containing MTLF is aware, at any time, of the Analytics Service Training Pipelines that a given class or instance of NWDAF containing AnLF is configured to use.
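
For illustration only, the information carried in such a provisioning request might resemble the following sketch. The field names, the optional per-class qualifier, and the values are assumptions made for this sketch, not normative message definitions.

# Minimal sketch (assumption): the content a Network Manager could supply in an
# Nnwdaf_MLModelTrainingPipeline_Register request, optionally qualified by the
# class (or instance) of NWDAF containing AnLF to which the pipeline definition
# applies. All field names and values are illustrative only.
register_request = {
    "analyticsId": "UE_MOBILITY",
    "analyticsFilter": None,        # optional: identifies an Analytics Subservice
    "anlfClass": "edge-anlf",       # optional: class or instance of NWDAF containing AnLF
    "trainingPipeline": {
        "id": "nwdaf_pipelines/mobility/pipeline_1",
        "nodes": ["..."],           # as in the JSON example above
        "edges": ["..."],
    },
    # Alternatively, a reference instead of the pipeline itself:
    # "trainingPipelineUrl": "https://pipeline-registry.example/mobility/pipeline_1",
}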

NWDAF containing AnLF Registering Analytics Service Training Pipeline

Figure 10 depicts a more dynamic method for an NF, such as an NWDAF containing AnLF, to register an Analytics Service Training Pipeline associated with an Analytics Service or Analytics Subservice it is currently using. This dynamicity offers advantages, since it allows the NWDAF containing AnLF to modify the Analytics Service Training Pipeline due to, e.g., time of day, day of the week, network conditions, or any other trigger. Additionally, it allows the NWDAF containing MTLF to query the NWDAF containing MMAR to determine which Analytics Service Training Pipelines are being currently used by a given NWDAF containing AnLF and for a given Analytics Service or Analytics Subservice. Inventive entities and NFs are depicted in dashed line, and inventive signaling is indicated by italic typeface.

In this case, when the NWDAF containing AnLF starts using a given Analytics Service Training Pipeline for a given Analytics Service, or Analytics Subservice, it contacts the NWDAF containing MMAR and registers the combination of the Analytics Service Training Pipeline, the Analytics ID (or Analytics ID-Analytics Filter pair), and its own ID (that is, the ID of the NWDAF containing AnLF). The NWDAF ID can be a URL, Fully Qualified Domain Name (FQDN), IP address, etc.

Nnwdaf_MLModelTrainingPipeline_Get Service Operation

Figure 11 depicts a case where an NWDAF containing MTLF retrieves an Analytics Service Training Pipeline associated with an Analytics Service or Analytics Subservice being used by an NWDAF containing AnLF, from an NWDAF containing MMAR.

At step 1, the NWDAF containing AnLF subscribes to notifications on the availability of new ML models that belong to a given Analytics Service, or Analytics Subservice, implementation. Alternatively, as described above, the NWDAF containing AnLF may subscribe to notifications about an explicit ML model. This service operation (Nnwdaf_MLModelProvision_Subscribe) is defined and operates conventionally.

Upon receipt of the subscription request, the NWDAF containing MTLF stores the subscription data and initiates, according to its internal procedures, a continuous monitoring process to determine if, at a given time, any of the ML models the subscription request refers to must be (re-)trained. Note that, in this example, the reception of the Nnwdaf_MLModelProvision_Subscribe request (step 1) at an NWDAF containing MTLF constitutes the trigger for the query (step 2) to the NWDAF containing MMAR. This is a typical case, but other triggers may exist. For example, the NWDAF containing MTLF may determine that a given ML model has drifted and needs to be re-trained. Accordingly, the NWDAF containing MTLF issues the query (step 2) to determine how this ML model is used within an Analytics Service implementation, to be able to properly re-train the ML model.

At step 2, upon determining that it must (re-)train one or more ML models in the specified Analytics Service or Analytics Subservice, the NWDAF containing MTLF uses the service operation Nnwdaf_MLModelTrainingPipeline_Get to request the definition of the Analytics Service Training Pipeline associated with the Analytics Service, the Analytics Subservice, or specific ML model(s), from the NWDAF containing MMAR. At least the Analytics ID, Analytics ID-Analytics Filter pair, or ML model identifier(s) must be included in the request.

At step 3, the NWDAF containing MMAR returns information about an Analytics Service Training Pipeline associated with the Analytics Service, the Analytics Subservice, or ML model(s). In one aspect, the information comprises the Analytics Service Training Pipeline itself. In another aspect, the information comprises a URL to a Pipeline Registry containing the Analytics Service Training Pipeline. In the latter aspect, the NWDAF containing MTLF then accesses the Pipeline Registry to retrieve the Analytics Service Training Pipeline (not shown in Figure 11).
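
For illustration only, the two response alternatives of step 3 might be rendered as in the following sketch; the field names are assumptions made for this sketch, not normative message definitions.

# Minimal sketch (assumption): the two alternative contents of the
# Nnwdaf_MLModelTrainingPipeline_Get response described in step 3.
response_with_pipeline = {
    "analyticsId": "UE_MOBILITY",
    "trainingPipeline": {
        "id": "nwdaf_pipelines/mobility/pipeline_1",
        "nodes": ["..."],           # as in the JSON example above
        "edges": ["..."],
    },
}

response_with_reference = {
    "analyticsId": "UE_MOBILITY",
    "trainingPipelineUrl": "https://pipeline-registry.example/mobility/pipeline_1",
}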

The NWDAF containing MTLF then uses the Analytics Service Training Pipeline to determine how to (re-)train the ML models. Once the ML models are (re-)trained, at step 4, the NWDAF containing MTLF notifies the NWDAF containing AnLF of the availability of the updated ML model, using conventional signaling.

Methods and Apparatuses

Figure 12 depicts a method 100, in accordance with particular aspects, of training or re-training one or more ML models associated with an Analytics Service, by an NWDAF containing MTLF. One of a request for information about ML models, and a request for provision of one or more ML models, is received from a consumer, such as an NWDAF containing AnLF. Each request includes one of an Analytics ID, an Analytics ID-Analytics Filter pair, and ML model identifier(s) (block 102). An NWDAF containing MMAR is accessed to retrieve an Analytics Service Training Pipeline associated with the identified Analytics ID, Analytics ID-Analytics Filter pair, or ML model identifier(s). The NWDAF containing MMAR stores mappings between Analytics IDs, Analytics ID-Analytics Filter pairs, or ML model identifier(s), and Analytics Service Training Pipelines (block 104). One or more ML models are trained or re-trained according to the retrieved Analytics Service Training Pipeline (block 106).

Figure 13 depicts a method 200, in accordance with other particular aspects, of providing information for training or re-training ML models, by an NWDAF containing MMAR. The NWDAF containing MMAR maps Analytics IDs, Analytics ID-Analytics Filter pairs, or ML model identifier(s) to Analytics Service Training Pipelines. An ML model training pipeline exposure request, including one of an Analytics ID, an Analytics ID-Analytics Filter pair, and ML model identifier(s), is received from an NWDAF containing MTLF (block 202). Information about an Analytics Service Training Pipeline associated with the Analytics ID, Analytics ID-Analytics Filter pair, or ML model identifier(s) is returned (block 204). The information may comprise the Analytics Service Training Pipeline itself, or a URL to a Pipeline Registry storing the Analytics Service Training Pipeline. The NWDAF containing MTLF then determines how to train or re-train one or more ML models in response to the Analytics Service Training Pipeline.

Figure 14 depicts a method 300, in accordance with other particular aspects, of generating and storing an Analytics Service Training Pipeline. An Analytics Service Training Pipeline, identifying data stages in an implementation of an associated Analytics Service or Analytics Subservice, and how the data stages apply to training operations of one or more ML models in the Analytics Service or Analytics Subservice implementation, is generated (block 302). The Analytics Service Training Pipeline and at least one of an Analytics ID, Analytics ID-Analytics Filter pair, and an ML model identifier is sent to an ML Model and Analytics Registry (MMAR) (block 304).

Note that apparatuses described herein may perform the methods 100, 200, 300 and any other processing by implementing any functional means, modules, units, or circuitry. In one aspect, for example, the apparatuses comprise respective circuits or circuitry configured to perform the steps shown in the method figures. The circuits or circuitry in this regard may comprise circuits dedicated to performing certain functional processing and/or one or more microprocessors in conjunction with memory. For instance, the circuitry may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory may include program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several aspects. In aspects that employ memory, the memory stores program code that, when executed by the one or more processors, carries out the techniques described herein.

Figure 15, for example, illustrates a hardware block diagram of network node 20 as implemented in accordance with one or more aspects of the present disclosure. In one aspect, the network node 20 implements a Network Data Analytics Function (NWDAF) having, in one embodiment, at least one Model Training Logical Function (MTLF) and an ML Model and Analytics Registry (MMAR). In another aspect, the network node 20 implements an NWDAF having an Analytics Logical Function (AnLF).

As shown, the network node 20 includes processing circuitry 22 and communication circuitry 26. The communication circuitry 26 is configured to transmit and/or receive information to and/or from one or more network nodes or entities outside the network, e.g., via any communication technology. The processing circuitry 22 is configured to perform processing described above, such as by executing instructions stored in memory 24, which may be internal, as shown, or may be external to the processing circuitry 22. The processing circuitry 22 in this regard may implement certain functional means, units, or modules.

Figure 16 illustrates a functional block diagram of network node 30 implementing an NWDAF, according to still other aspects. As shown, the network node 30 implements various functional means, units, or modules, e.g., via the processing circuitry 22 in Figure 15 and/or via software code. These functional means, units, or modules, e.g., for implementing the methods 100 and 200 herein, include, for instance, MTLF implementing unit 32 and MMAR implementing unit 34.

MTLF implementing unit 32 is configured to implement an ML Model Training Logical Function (MTLF) configured to train and re-train ML models according to the Analytics Service Training Pipelines associated with the Analytics IDs, Analytics ID-Analytics Filter pairs, or ML model identifiers specified by consumers, and to provide the trained and re-trained ML models to the consumers. MMAR implementing unit 34 is configured to implement an ML Model and Analytics Registry (MMAR) configured to store Analytics Service Training Pipelines, or references to those pipelines; to store the mapping of Analytics IDs, Analytics ID-Analytics Filter pairs, and ML model identifiers to Analytics Service Training Pipelines; and to provide the MTLF with information about Analytics Service Training Pipelines in response to exposure requests including Analytics IDs, Analytics ID-Analytics Filter pairs, or ML model identifiers.

Figure 17 illustrates a functional block diagram of network node 40 implementing an NWDAF, according to still other aspects. As shown, the network node 40 implements various functional means, units, or modules, e.g., via the processing circuitry 22 in Figure 15 and/or via software code. These functional means, units, or modules, e.g., for implementing the method 300 herein, include, for instance, AnLF implementing unit 42.

AnLF implementing unit 42 is configured to implement an Analytics Logical Function (AnLF) configured to generate an Analytics Service Training Pipeline that identifies the data stages in an implementation of an associated Analytics Service or Analytics Subservice, and how the data stages apply to training operations of one or more ML models in the Analytics Service or Analytics Subservice implementation, and to send the Analytics Service Training Pipeline and at least one of an Analytics ID, an Analytics ID-Analytics Filter pair, and an ML model identifier to an ML Model and Analytics Registry (MMAR).

Those skilled in the art will also appreciate that aspects herein further include corresponding computer programs.

A computer program comprises instructions which, when executed on at least one processor of an apparatus, cause the apparatus to carry out any of the respective processing described above. A computer program in this regard may comprise one or more code modules corresponding to the means or units described above.

Aspects further include a carrier containing such a computer program. This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.

In this regard, aspects herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform as described above.

Aspects further include a computer program product comprising program code portions for performing the steps of any of the aspects herein when the computer program product is executed by a computing device. This computer program product may be stored on a computer readable recording medium.

Aspects of the present disclosure present numerous advantages over the prior art. They enable the NWDAF containing MTLF to access data sources other than 3GPP-standardized sources when (re-)training the ML models that support a given Analytics Service. This enables a configurable deployment of Analytics Service implementations. Aspects also enable the NWDAF containing MTLF to identify the input datasets needed for (re-)training an ML model, and to generate them in the appropriate order. Aspects further allow the NWDAF containing AnLF to express the conditions upon which the NWDAF containing MTLF should aim to re-train the ML models that support a given Analytics Service or Analytics Subservice.

Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the aspects disclosed herein may be applied to any other aspect, wherever appropriate. Likewise, any advantage of any of the aspects may apply to any other aspects, and vice versa. Other objectives, features, and advantages of the enclosed aspects will be apparent from the description.

The term unit may have conventional meaning in the field of electronics, electrical devices, and/or electronic devices, and may include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid-state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those described herein. As used herein, the term “configured to” means set up, organized, adapted, or arranged to operate in a particular way; the term is synonymous with “designed to.”

Some of the aspects contemplated herein are described more fully with reference to the accompanying drawings. Other aspects, however, are contained within the scope of the subject matter disclosed herein. The disclosed subject matter should not be construed as limited to only the aspects set forth herein; rather, these aspects are provided by way of example to convey the scope of the subject matter to those skilled in the art.