


Title:
USER-CENTRIC LIFE CYCLE MANAGEMENT OF AI/ML MODELS DEPLOYED IN A USER EQUIPMENT (UE)
Document Type and Number:
WIPO Patent Application WO/2023/148009
Kind Code:
A1
Abstract:
Embodiments include methods performed by a first network node for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs). Such methods include transmitting to a UE a first message that includes: at least one AI/ML model to be available and/or deployed in or at the UE, an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE, or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE. Such methods include receiving from the UE a second message that includes: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE. Other embodiments include complementary methods for a UE and a second network node.

Inventors:
SOLDATI PABLO (SE)
LUNARDI LUCA (IT)
Application Number:
PCT/EP2023/051265
Publication Date:
August 10, 2023
Filing Date:
January 19, 2023
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
H04L41/16; H04W24/02
Domestic Patent References:
WO2022013095A1 (2022-01-20)
WO2022015221A1 (2022-01-20)
WO2022013090A1 (2022-01-20)
Other References:
3GPP TS 38.463
3GPP TR 38.804
3GPP TS 36.300
3GPP TR 37.817
Attorney, Agent or Firm:
ERICSSON AB (SE)
Claims:
CLAIMS

1. A method performed by a first network node for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models in user equipments, UEs, operating in a radio access network, RAN, the method comprising: transmitting (1130) to a UE a first message that includes one of the following: at least one AI/ML model to be available and/or deployed in or at the UE; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and receiving (1140) from the UE a second message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

2. The method of claim 1, wherein: the second message is received in response to transmitting (1130) the first message; the second message includes the LCM report; and the first message includes the LCM request or the LCM information.

3. The method of claim 2, wherein the LCM report in the second message includes one or more of the following about the LCM performed by the UE: identifier of at least one AI/ML model; indication of whether the UE has re-trained the identified at least one AI/ML model; indication of whether the UE will re-train the identified at least one AI/ML model; indication of whether the UE has modified the identified at least one AI/ML model; indication of whether the UE will modify the identified at least one AI/ML model; reason why the identified at least one AI/ML model will be modified; request to test, verify, validate, or evaluate the identified at least one AI/ML model; information related to modifications performed by the UE on the identified at least one AI/ML model; information related to techniques the UE used for re-training the identified at least one AI/ML model; information related to testing, validation, or evaluation of the identified at least one AI/ML model that the UE has re-trained or modified; and one or more conditions or events that triggered the UE to re-train, modify, stop using, or disable the identified at least one AI/ML model.

4. The method of any of claims 1-3, further comprising receiving (1120) from a second network node a fourth message including at least one of the following: at least one AI/ML model to be available and/or deployed in or at the UE; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and wherein the first message is transmitted in response to receiving the fourth message.

5. The method of claim 4, wherein: the first message is transmitted in response to receiving (1120) the fourth message; and the first message includes at least part of the information received in the fourth message.

6. The method of any of claims 4-5, further comprising transmitting (1150) to the second network node a third message that includes at least one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

7. The method of claim 6, wherein: the third message is transmitted in response to receiving (1140) the second message; and the third message includes at least part of the information received in the second message.

8. The method of claim 6, wherein: the fourth message is received in response to transmitting (1150) the third message; the third message includes the LCM request; and the fourth message includes the LCM information.

9. The method of claim 8, further comprising transmitting (1160) to the second network node a further third message that includes an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE, wherein the further third message is transmitted in response to receiving (1140) the second message from the UE.

10. The method of claim 1, wherein: the first message is transmitted in response to receiving (1140) the second message; the second message includes the LCM request; and the first message includes the LCM information or the at least one AI/ML model.

11. The method of claim 10, further comprising transmitting (1110) to the UE a further first message including the at least one AI/ML model, wherein: the second message is received in response to transmitting (1110) the further first message, and the first message includes the LCM information associated with the at least one AI/ML model.

12. The method of any of claims 1-11, wherein one of the following applies: the first and second network nodes are different network nodes in a RAN; the first and second network nodes are different units or functions of one network node in a RAN; the first and second network nodes are in different RANs; or one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a core network, CN, node or function; a service management and orchestration, SMO, function; or part of an operations/administration/maintenance, OAM, system.

13. A method performed by a user equipment, UE, operating in a radio access network, RAN, for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models in the UE, the method comprising: receiving (1220) from a first network node a first message that includes one of the following: at least one AI/ML model to be available and/or deployed in or at the UE; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and transmitting (1240) to the first network node a second message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

14. The method of claim 13, wherein: the second message is transmitted in response to receiving (1220) the first message; the second message includes the LCM report; and the first message includes the LCM request or the LCM information.

15. The method of claim 14, further comprising performing (1230) LCM of the at least one AI/ML model in accordance with the LCM request or the LCM information included in the first message, wherein the LCM report in the second message includes one or more of the following about the LCM performed by the UE: identifier of at least one AI/ML model; indication of whether the UE has re-trained the identified at least one AI/ML model; indication of whether the UE will re-train the identified at least one AI/ML model; indication of whether the UE has modified the identified at least one AI/ML model; indication of whether the UE will modify the identified at least one AI/ML model; reason why the identified at least one AI/ML model will be modified; request to test, verify, validate, or evaluate the identified at least one AI/ML model; information related to modifications performed by the UE on the identified at least one AI/ML model; information related to techniques the UE used for re-training the identified at least one AI/ML model; information related to testing, validation, or evaluation of the identified at least one AI/ML model that the UE has re-trained or modified; and one or more conditions or events that triggered the UE to re-train, modify, stop using, or disable the identified at least one AI/ML model.

16. The method of claim 13, wherein: the first message is received in response to transmitting (1240) the second message; the second message includes the LCM request; and the first message includes the LCM information or the at least one AI/ML model.

17. The method of claim 16, further comprising receiving (1210) from the first network node a further first message including the at least one AI/ML model, wherein: the second message is transmitted in response to receiving (1210) the further first message, and the first message includes the LCM information associated with the at least one AI/ML model.

18. The method of any of claims 13-17, further comprising applying (1250) the at least one AI/ML model for one or more of the following UE operations in the RAN: estimating and/or compressing channel state information, CSI; beam management; positioning; link adaptation; estimating UE and/or network energy saving for a UE configuration; estimating signal quality; and estimating UE traffic.

19. A method performed by a second network node for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models in user equipments, UEs, operating in a radio access network, RAN, the method comprising: transmitting (1310) to a first network node a fourth message that includes one of the following: at least one AI/ML model to be available and/or deployed in or at a UE connected to the first network node; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and receiving (1320) from the first network node a third message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

20. The method of claim 19, wherein: the third message is received in response to transmitting (1310) the fourth message; the third message includes the LCM report; and the fourth message includes the LCM information or the LCM request.

21. The method of claim 20, wherein the LCM report in the third message includes one or more of the following about the LCM performed by the UE: identifier of at least one AI/ML model; indication of whether the UE has re-trained the identified at least one AI/ML model; indication of whether the UE will re-train the identified at least one AI/ML model; indication of whether the UE has modified the identified at least one AI/ML model; indication of whether the UE will modify the identified at least one AI/ML model; reason why the identified at least one AI/ML model will be modified; request to test, verify, validate, or evaluate the identified at least one AI/ML model; information related to modifications performed by the UE on the identified at least one AI/ML model; information related to techniques the UE used for re-training the identified at least one AI/ML model; information related to testing, validation, or evaluation of the identified at least one AI/ML model that the UE has re-trained or modified; and one or more conditions or events that triggered the UE to re-train, modify, stop using, or disable the identified at least one AI/ML model.

22. The method of claim 19, wherein: the fourth message is transmitted in response to receiving (1320) the third message; the third message includes the LCM request; and the fourth message includes the LCM information.

23. The method of claim 22, further comprising receiving (1330) from the first network node a further third message that includes an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE, wherein the further third message is received in response to transmitting (1310) the fourth message.

24. The method of any of claims 19-23, wherein one of the following applies: the first and second network nodes are different network nodes in a RAN; the first and second network nodes are different units or functions of one network node in a RAN; the first and second network nodes are in different RANs; or one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a core network, CN, node or function; a service management and orchestration, SMO, function; or part of an operations/administration/maintenance, OAM, system.

25. A first network node (510, 810, 1408, 1410, 1418, 1420, 1600, 1802) configured for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models in user equipments, UEs (520, 820, 1412, 1500, 1906) operating in a radio access network, RAN (199, 299, 1404), the first network node comprising: communication interface circuitry (1606, 1804) configured to communicate with UEs and with at least a second network node (830, 1408, 1410, 1418, 1420, 1600, 1802); and processing circuitry (1602, 1804) operatively coupled to the communication interface circuitry, whereby the processing circuitry and the communication interface circuitry are configured to: transmit to a UE a first message that includes one of the following: at least one AI/ML model to be available and/or deployed in or at the UE; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and receive from the UE a second message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

26. The first network node of claim 25, wherein the processing circuitry and the communication interface circuitry are further configured to perform operations corresponding to any of the methods of claims 2-12.

27. A first network node (510, 810, 1408, 1410, 1418, 1420, 1600, 1802) configured for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models in user equipment, UEs (520, 820, 1412, 1500, 1906) operating in a radio access network, RAN (199, 299, 1404), the first network node being further configured to: transmit to a UE a first message that includes one of the following: at least one AI/ML model to be available and/or deployed in or at the UE; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and receive from the UE a second message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

28. The first network node of claim 27, being further configured to perform operations corresponding to any of the methods of claims 2-12.

29. A non-transitory, computer-readable medium (1604, 1804) storing computer-executable instructions that, when executed by processing circuitry (1602, 1804) associated with a first network node (510, 810, 1408, 1410, 1418, 1420, 1600, 1802) configured for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models in user equipment, UEs (520, 820, 1412, 1500, 1906) operating in a radio access network, RAN (199, 299, 1404), configure the first network node to perform operations corresponding to any of the methods of claims 1-12.

30. A computer program product (1604a, 1804a) comprising computer-executable instructions that, when executed by processing circuitry (1602, 1804) associated with a first network node (510, 810, 1408, 1410, 1418, 1420, 1600, 1802) configured for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models in user equipment, UEs (520, 820, 1412, 1500, 1906) operating in a radio access network, RAN (199, 299, 1404), configure the first network node to perform operations corresponding to any of the methods of claims 1-12.

31. A user equipment, UE (520, 820, 1412, 1500, 1906) configured to operate in a radio access network, RAN (199, 299, 1404) and for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models deployed at the UE, the UE comprising: communication interface circuitry (1510) configured to communicate with at least a first network node (510, 810, 1408, 1410, 1418, 1420, 1600, 1802); and processing circuitry (1502) operatively coupled to the communication interface circuitry, whereby the processing circuitry and the communication interface circuitry are configured to: receive from the first network node a first message that includes one of the following: at least one AI/ML model to be available and/or deployed in or at the UE; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and transmit to the first network node a second message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

32. The UE of claim 31, wherein the processing circuitry and the communication interface circuitry are further configured to perform operations corresponding to any of the methods of claims 14-18.

33. A user equipment, UE (520, 820, 1412, 1500, 1906) configured to operate in a radio access network, RAN (199, 299, 1404) and for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models deployed at the UE, the UE being further configured to: receive from a first network node (510, 810, 1408, 1410, 1418, 1420, 1600, 1802) a first message that includes one of the following: at least one AI/ML model to be available and/or deployed in or at the UE; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and transmit to the first network node a second message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

34. The UE of claim 33, being further configured to perform operations corresponding to any of the methods of claims 14-18.

35. A non-transitory, computer-readable medium (1510) storing computer-executable instructions that, when executed by processing circuitry (1502) associated with a user equipment, UE (520, 820, 1412, 1500, 1906) configured to operate in a radio access network, RAN (199, 299, 1404) and for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models, configure the UE to perform operations corresponding to any of the methods of claims 13-18.

36. A computer program product (1514) comprising computer-executable instructions that, when executed by processing circuitry (1502) associated with a user equipment, UE (520, 820, 1412, 1500, 1906) configured to operate in a radio access network, RAN (199, 299, 1404) and for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models, configure the UE to perform operations corresponding to any of the methods of claims 13-18.

37. A second network node (830, 1408, 1410, 1418, 1420, 1600, 1802) configured for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models in user equipments, UEs (520, 820, 1412, 1500, 1906) operating in a radio access network, RAN (199, 299, 1404), the second network node comprising: communication interface circuitry (1606, 1804) configured to communicate with UEs and with at least a first network node (510, 810, 1408, 1410, 1418, 1420, 1600, 1802); and processing circuitry (1602, 1804) operatively coupled to the communication interface circuitry, whereby the processing circuitry and the communication interface circuitry are configured to: transmit to the first network node a fourth message that includes one of the following: at least one AI/ML model to be available and/or deployed in or at a UE connected to the first network node; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and receive from the first network node a third message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

38. The second network node of claim 37, wherein the processing circuitry and the communication interface circuitry are further configured to perform operations corresponding to any of the methods of claims 20-24.

39. A second network node (830, 1408, 1410, 1418, 1420, 1600, 1802) configured for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models in user equipments, UEs (520, 820, 1412, 1500, 1906) operating in a radio access network, RAN (199, 299, 1404), the second network node being further configured to: transmit to a first network node (510, 810, 1408, 1410, 1418, 1420, 1600, 1802) a fourth message that includes one of the following: at least one AI/ML model to be available and/or deployed in or at a UE connected to the first network node; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and receive from the first network node a third message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

40. The second network node of claim 39, being further configured to perform operations corresponding to any of the methods of claims 20-24.

41. A non-transitory, computer-readable medium (1604, 1804) storing computer-executable instructions that, when executed by processing circuitry (1602, 1804) associated with a second network node (830, 1408, 1410, 1418, 1420, 1600, 1802) configured for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models in user equipment, UEs (520, 820, 1412, 1500, 1906) operating in a radio access network, RAN (199, 299, 1404), configure the second network node to perform operations corresponding to any of the methods of claims 19-24.

42. A computer program product (1604a, 1804a) comprising computer-executable instructions that, when executed by processing circuitry (1602, 1804) associated with a second network node (830, 1408, 1410, 1418, 1420, 1600, 1802) configured for life cycle management, LCM, of artificial intelligence/machine learning, AI/ML, models in user equipment, UEs (520, 820, 1412, 1500, 1906) operating in a radio access network, RAN (199, 299, 1404), configure the second network node to perform operations corresponding to any of the methods of claims 19-24.

Description:
USER-CENTRIC LIFE CYCLE MANAGEMENT OF AI/ML MODELS DEPLOYED IN A USER EQUIPMENT (UE)

TECHNICAL FIELD

The present disclosure relates generally to wireless communication networks, and more specifically to techniques for managing artificial intelligence and/or machine learning (AI/ML) models used by user equipment (UE) when operating in such networks.

BACKGROUND

Currently the fifth generation (“5G”) of cellular systems, also referred to as New Radio (NR), is being standardized within the Third-Generation Partnership Project (3GPP). NR is developed for maximum flexibility to support multiple and substantially different use cases. These include enhanced mobile broadband (eMBB), machine type communications (MTC), ultra-reliable low latency communications (URLLC), side-link device-to-device (D2D), and several other use cases.

Figure 1 illustrates an exemplary high-level view of the 5G network architecture, consisting of a Next Generation RAN (NG-RAN) 199 and a 5G Core (5GC) 198. NG-RAN 199 can include a set of gNodeBs (gNBs) connected to the 5GC via one or more NG interfaces, such as gNBs 100, 150 connected via interfaces 102, 152, respectively. In addition, the gNBs can be connected to each other via one or more Xn interfaces, such as Xn interface 140 between gNBs 100 and 150. With respect to the NR interface to UEs, each of the gNBs can support frequency division duplexing (FDD), time division duplexing (TDD), or a combination thereof.

NG-RAN 199 is layered into a Radio Network Layer (RNL) and a Transport Network Layer (TNL). The NG-RAN architecture, i.e., the NG-RAN logical nodes and interfaces between them, is defined as part of the RNL. For each NG-RAN interface (NG, Xn, F1) the related TNL protocol and the functionality are specified. The TNL provides services for user plane transport and signaling transport.

The NG-RAN logical nodes shown in Figure 1 include a central (or centralized) unit (CU or gNB-CU) and one or more distributed (or decentralized) units (DU or gNB-DU). For example, gNB 100 includes gNB-CU 110 and gNB-DUs 120 and 130. CUs (e.g., gNB-CU 110) are logical nodes that host higher-layer protocols and perform various gNB functions such as controlling the operation of DUs. Each DU is a logical node that hosts lower-layer protocols and can include, depending on the functional split, various subsets of the gNB functions. As such, each of the CUs and DUs can include various circuitry needed to perform their respective functions, including processing circuitry, transceiver circuitry (e.g., for communication), and power supply circuitry.

A gNB-CU connects to gNB-DUs over respective F1 logical interfaces, such as interfaces 122 and 132 shown in Figure 1. The gNB-CU and connected gNB-DUs are only visible to other gNBs and the 5GC as a gNB. In other words, the F1 interface is not visible beyond gNB-CU.

Centralized control plane protocols (e.g., PDCP-C and RRC) can be hosted in a different CU than centralized user plane protocols (e.g., PDCP-U). For example, a gNB-CU can be divided logically into a CU-CP function (including RRC and PDCP for signaling radio bearers) and CU-UP function (including PDCP for UP). A single CU-CP can be associated with multiple CU-UPs in a gNB. The CU-CP and CU-UP communicate with each other using the E1-AP protocol over the E1 interface, as specified in 3GPP TS 38.463 (v15.4.0). Furthermore, the F1 interface between CU and DU (see Figure 1) is functionally split into F1-C between DU and CU-CP and F1-U between DU and CU-UP. Three deployment scenarios for the split gNB architecture shown in Figure 1 are CU-CP and CU-UP centralized, CU-CP distributed/CU-UP centralized, and CU-CP centralized/CU-UP distributed.
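As an informal illustration of the split-gNB interface relationships described above, the unit pairs and their logical interfaces can be captured in a small lookup table. This is a sketch only; the dictionary and function names below are assumptions of this example, not 3GPP-defined identifiers.

```python
# Sketch of the logical interfaces between split-gNB units described above.
# Names are illustrative assumptions, not 3GPP-defined identifiers.

SPLIT_GNB_INTERFACES = {
    ("gNB-CU-CP", "gNB-CU-UP"): "E1",   # E1-AP protocol (3GPP TS 38.463)
    ("gNB-DU", "gNB-CU-CP"): "F1-C",    # control-plane part of F1
    ("gNB-DU", "gNB-CU-UP"): "F1-U",    # user-plane part of F1
}

def interface_between(unit_a, unit_b):
    """Return the logical interface connecting two units, or None."""
    return (SPLIT_GNB_INTERFACES.get((unit_a, unit_b))
            or SPLIT_GNB_INTERFACES.get((unit_b, unit_a)))
```

For example, `interface_between("gNB-DU", "gNB-CU-CP")` yields `"F1-C"`, reflecting the functional split of the F1 interface described above.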

Figure 2 shows another high-level view of an exemplary 5G network architecture, including a NG-RAN 299 and 5GC 298. As shown in the figure, NG-RAN 299 can include gNBs (e.g., 210a, b) and ng-eNBs (e.g., 220a, b) that are connected with each other via respective Xn interfaces. The gNBs and ng-eNBs are also connected via the NG interfaces to 5GC 298, more specifically to access and mobility management functions (AMFs, e.g., 230a, b) via respective NG-C interfaces and to user plane functions (UPFs, e.g., 240a, b) via respective NG-U interfaces. Moreover, the AMFs can communicate with one or more policy control functions (PCFs, e.g., 250a, b) and network exposure functions (NEFs, e.g., 260a, b).

Each of the gNBs can support the NR radio interface including frequency division duplexing (FDD), time division duplexing (TDD), or a combination thereof. Each of ng-eNBs can support the fourth generation (4G) Long-Term Evolution (LTE) radio interface. Unlike conventional LTE eNBs, however, ng-eNBs connect to the 5GC via the NG interface. Each of the gNBs and ng-eNBs can serve a geographic coverage area including one or more cells, such as cells 211a-b and 221a-b shown in Figure 2. Depending on the cell in which it is located, a UE 205 can communicate with the gNB or ng-eNB serving that cell via the NR or LTE radio interface, respectively. Although Figure 2 shows gNBs and ng-eNBs separately, it is also possible that a single NG-RAN node provides both types of functionality.

LTE Rel-12 introduced dual connectivity (DC) whereby a UE in RRC_CONNECTED state can be connected to two network nodes simultaneously, thereby improving connection robustness and/or capacity. In LTE DC, these two network nodes are referred to as "Master eNB" (MeNB) and "Secondary eNB" (SeNB), or more generally as master node (MN) and secondary node (SN). More specifically, a UE is configured with a Master Cell Group (MCG) associated with the MN and a Secondary Cell Group (SCG) associated with the SN.

3GPP TR 38.804 (v14.0.0) describes various exemplary DC scenarios or configurations in which the MN and SN can apply NR, LTE, or both. For example, EN-DC refers to the scenario where the MN (eNB) employs LTE and the SN (gNB) employs NR, and both are connected to an LTE Evolved Packet Core (EPC). Other multi-RAT (MR) DC scenarios are possible in the network architecture shown in Figure 2.

Machine learning (ML) is a type of artificial intelligence (AI) that focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving accuracy. ML algorithms build models based on sample (or "training") data, with the models being used subsequently to make predictions or decisions. ML algorithms can be used in a wide variety of applications (e.g., medicine, email filtering, speech recognition, etc.) in which it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.
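The train-then-predict workflow described above can be illustrated with a deliberately simple example, in which a one-dimensional least-squares fit stands in for an arbitrary learned model. The function names are this sketch's own, not taken from the disclosure.

```python
# Minimal illustration of the ML workflow: build ("train") a model from
# sample data, then use it to make predictions on new inputs.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error on the samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def predict(model, x):
    """Apply the trained model to a new input."""
    slope, intercept = model
    return slope * x + intercept

# The training samples follow y = 2x + 1, so the fitted model recovers
# that relationship and generalizes to unseen inputs.
model = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

The same fit/predict separation applies regardless of model complexity, which is why re-training or modifying a deployed model can change its predictions for the same inputs.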

AI/ML can be used to enhance the performance of a RAN, such as NG-RAN. 3GPP document RP-201620 defines a study item on "Enhancement for Data Collection for NR and EN-DC", which aims to study the functional framework for RAN intelligence enabled by further data collection through use cases, examples etc., and to identify the potential standardization impacts on current NG-RAN nodes and interfaces. For example, some objectives include studying high-level principles for RAN intelligence enabled by AI, the AI functionality, input/output of the component for AI-enabled optimization, and identifying benefits of AI-enabled NG-RAN such as energy saving, load balancing, mobility management, coverage optimization, etc.

SUMMARY

In some cases, UEs may be enabled and/or configured to use AI/ML functionality when operating in a RAN (e.g., NG-RAN). When a UE re-trains, modifies, or updates an AI/ML model that it uses for operation in the RAN, the model behavior (or policy) can change drastically such that the UE's new model produces different results for a given set of inputs than before being re-trained, modified, or updated. Such change in behavior can affect the RAN's performance and/or ability to serve the UE. This problem is exacerbated when the RAN is trying to serve many UEs that use different models and/or model versions for the same operations. Currently, however, the RAN does not have adequate information about or control over the life cycle management of AI/ML models used by UEs.

Embodiments of the present disclosure provide specific improvements to life cycle management (LCM) of AI/ML models used by UEs operating in a RAN, such as by providing, enabling, and/or facilitating solutions to exemplary problems summarized above and described in more detail below.

Embodiments include methods (e.g., procedures) performed by a first network node. These exemplary methods can include transmitting to a UE a first message that includes one of the following:

• at least one AI/ML model to be available and/or deployed in or at the UE;

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• LCM information associated with at least one AI/ML model available and/or deployed in or at the UE.

These exemplary methods can also include receiving from the UE a second message that includes one of the following:

• an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or

• an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

In some embodiments, the second message is received in response to transmitting the first message, the second message includes the LCM report, and the first message includes the LCM request or the LCM information. In some embodiments, the LCM report in the second message includes one or more of the following about the LCM performed by the UE:

• identifier of at least one AI/ML model;

• indication of whether the UE has re-trained the identified at least one AI/ML model;

• indication of whether the UE will re-train the identified at least one AI/ML model;

• indication of whether the UE has modified the identified at least one AI/ML model;

• indication of whether the UE will modify the identified at least one AI/ML model;

• reason why the identified at least one AI/ML model will be modified;

• request to test, verify, validate or evaluate the identified at least one AI/ML model;

• information related to modifications performed by the UE on the identified at least one AI/ML model;

• information related to techniques the UE used for re-training the identified at least one AI/ML model;

• information related to testing, validation, or evaluation of the identified at least one AI/ML model that the UE has re-trained or modified; and

• one or more conditions or events that triggered the UE to re-train, modify, stop using, or disable the identified at least one AI/ML model.
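
The LCM report fields enumerated above can be collected into a single structured record. The following sketch illustrates one possible representation; all field and class names are hypothetical and chosen for illustration only, not drawn from any 3GPP specification.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative container for the LCM report fields listed above.
# Field names are assumptions for the sketch, not standardized names.
@dataclass
class LcmReport:
    model_id: str                              # identifier of the AI/ML model
    has_retrained: bool = False                # UE has re-trained the model
    will_retrain: bool = False                 # UE will re-train the model
    has_modified: bool = False                 # UE has modified the model
    will_modify: bool = False                  # UE will modify the model
    modification_reason: Optional[str] = None  # why the model will be modified
    validation_requested: bool = False         # request to test/verify/validate/evaluate
    modifications: list = field(default_factory=list)         # modifications performed
    retraining_techniques: list = field(default_factory=list) # techniques used
    validation_results: Optional[dict] = None  # testing/validation/evaluation info
    trigger_events: list = field(default_factory=list)        # triggering conditions/events

# Example: a UE reporting that it re-trained a model after detecting degraded accuracy.
report = LcmReport(model_id="csi-compress-v2", has_retrained=True,
                   trigger_events=["accuracy below threshold"])
```

A report like this would be carried in the second message and, in the relayed embodiments, forwarded at least in part to the second network node.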

In some embodiments, these exemplary methods can also include receiving from a second network node a fourth message including one of the following:

• at least one AI/ML model to be available and/or deployed in or at the UE;

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• LCM information associated with at least one AI/ML model available and/or deployed in or at the UE.

In such embodiments, the first message is transmitted in response to receiving the fourth message.

In some of these embodiments, the first message is transmitted in response to receiving the fourth message and the first message includes at least part of the information received in the fourth message. In some of these embodiments, these exemplary methods can also include transmitting to the second network node a third message that includes one of the following:

• an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or

• an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

In some of these embodiments, the third message is transmitted in response to receiving the second message and includes at least part of the information received in the second message.

In other of these embodiments, the fourth message is received in response to transmitting the third message, the third message includes the LCM request, and the fourth message includes the LCM information. In such embodiments, these exemplary methods can also include transmitting to the second network node a further third message that includes an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE. The further third message is transmitted in response to receiving the second message from the UE.
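
The fourth/first/second/third message ordering described above amounts to the first network node relaying LCM information between the second network node and the UE. The sketch below illustrates that relay under stated assumptions: the function names and dictionary-based message contents are hypothetical, and real implementations would use the applicable network interfaces and RRC signaling.

```python
# Illustrative relay of LCM messaging: second network node (NN2) -> first
# network node (NN1) -> UE, and back. Message formats are assumptions.

def second_node_send_fourth(lcm_request):
    # Fourth message: NN2 asks NN1 to manage LCM for a model in or at a UE.
    return {"msg": "fourth", "payload": lcm_request}

def first_node_relay(fourth_msg):
    # First message: NN1 forwards at least part of the fourth message to the UE.
    return {"msg": "first", "payload": fourth_msg["payload"]}

def ue_respond(first_msg):
    # Second message: the UE answers with an LCM report for the indicated model.
    return {"msg": "second", "payload": {"report_for": first_msg["payload"]}}

def first_node_forward(second_msg):
    # Third message: NN1 forwards at least part of the UE's report to NN2.
    return {"msg": "third", "payload": second_msg["payload"]}

fourth = second_node_send_fourth({"action": "report-lcm", "model_id": "m1"})
first = first_node_relay(fourth)
second = ue_respond(first)
third = first_node_forward(second)
```

The sketch shows only the "include at least part of the received information" relay behavior; which parts are forwarded is left to the embodiments described above.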

In other embodiments, the first message is transmitted in response to receiving the second message, the second message includes the LCM request, and the first message includes the LCM information or the at least one AI/ML model. In some of these embodiments, these exemplary methods can also include transmitting to the UE a further first message including the at least one AI/ML model. In such case, the second message is received in response to transmitting the further first message and the first message includes the LCM information associated with the at least one AI/ML model.

In various embodiments, any one of the following can apply:

• the first and second network nodes are different network nodes in a RAN;

• the first and second network nodes are different units or functions of one network node in a RAN;

• the first and second network nodes are in different RANs; or

• one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a core network (CN) node or function, a service management and orchestration (SMO) function, or part of an operations/administration/maintenance (OAM) system.

Other embodiments include methods (e.g., procedures) performed by a UE. These exemplary methods can include receiving from the first network node a first message that includes one of the following:

• at least one AI/ML model to be available and/or deployed in or at the UE;

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• LCM information associated with at least one AI/ML model available and/or deployed in or at the UE.

These exemplary methods can also include transmitting to the first network node a second message that includes one of the following:

• an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or

• an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

The order and content of the first and second messages can correspond to any of those summarized above in relation to first network node embodiments.

In some embodiments, these exemplary methods can also include performing LCM of the at least one AI/ML model in accordance with the LCM request or the LCM information included in the first message. In such case, the LCM report in the second message can include any of the specific information items mentioned above in relation to first network node embodiments.

In some of these embodiments, these exemplary methods can also include receiving from the first network node a further first message including the at least one AI/ML model. The second message is transmitted in response to receiving the further first message and the first message includes the LCM information associated with the at least one AI/ML model.

In some embodiments, these exemplary methods can also include applying the at least one AI/ML model for one or more of the following UE operations in the RAN:

• estimating and/or compressing channel state information (CSI);

• beam management;

• positioning;

• link adaptation;

• estimating UE and/or network energy saving for a UE configuration;

• estimating signal quality; and

• estimating UE traffic.
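
A UE can associate each of the operations listed above with the AI/ML model deployed for it. The following sketch shows one hypothetical way to organize such a mapping; the operation keys and model identifiers are illustrative only, and actual model inference is omitted.

```python
# Hypothetical registry mapping the UE operations listed above to deployed
# AI/ML model identifiers. All names are illustrative assumptions.
ue_model_registry = {
    "csi_compression": "csi-autoencoder-v1",
    "beam_management": "beam-predictor-v3",
    "positioning": "pos-fingerprint-v2",
    "link_adaptation": "la-mcs-selector-v1",
    "energy_saving": "energy-estimator-v1",
    "signal_quality": "rsrp-predictor-v1",
    "traffic_estimation": "traffic-forecaster-v1",
}

def apply_model(operation: str, features):
    """Look up which deployed model handles a UE operation (inference omitted)."""
    model_id = ue_model_registry.get(operation)
    if model_id is None:
        raise KeyError(f"no AI/ML model deployed for operation {operation!r}")
    return {"operation": operation, "model_id": model_id, "inputs": features}
```

A mapping like this makes explicit which model an LCM request or report refers to when a UE runs different models and/or model versions for different operations.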

Other embodiments include methods (e.g., procedures) performed by a second network node. These exemplary methods can include transmitting to a first network node a fourth message that includes one of the following:

• at least one AI/ML model to be available and/or deployed in or at a UE connected to the first network node;

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• LCM information associated with at least one AI/ML model available and/or deployed in or at the UE.

These exemplary methods can also include receiving from the first network node a third message that includes one of the following:

• an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or

• an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

The order and content of the third and fourth messages can correspond to any of those summarized above in relation to first network node embodiments.

In some embodiments, the exemplary method can also include receiving from the first network node a further third message that includes an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE. The further third message is received in response to transmitting the fourth message.

In various embodiments, any one of the following can apply:

• the first and second network nodes are different network nodes in a RAN;

• the first and second network nodes are different units or functions of one network node in a RAN;

• the first and second network nodes are in different RANs; or

• one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a CN node or function, an SMO function, or part of an OAM system.

Other embodiments include network nodes (e.g., base station, eNB, gNB, ng-eNB, etc. or unit/function thereof, CN node, OAM, SMO, etc.) and UEs (e.g., wireless devices) configured to perform operations corresponding to any of the exemplary methods described herein. Other embodiments include non-transitory, computer-readable media storing program instructions that, when executed by processing circuitry, configure such network nodes or UEs to perform operations corresponding to any of the exemplary methods described herein.

These and other embodiments described herein provide flexible and efficient techniques for a first network node to control the LCM of an AI/ML model executed by a UE, e.g., a UE that is connected to the first network node. In this manner, embodiments can mitigate and/or avoid improper UE behavior, excessive use of network resources, and reduction in capacity to serve other UEs that can otherwise result from spurious, incorrect, and/or undesirable UE modifications and training of AI/ML models.

These and other objects, features, and advantages of embodiments of the present disclosure will become apparent upon reading the following Detailed Description in view of the Drawings briefly described below.

BRIEF DESCRIPTION OF THE DRAWINGS

Figures 1-2 illustrate two high-level views of an exemplary 5G/NR network architecture.

Figure 3 is a block diagram of an exemplary framework for RAN intelligence based on AI/ML models.

Figure 4 is a block diagram of another exemplary framework for RAN intelligence based on AI/ML models, including model management functionality.

Figures 5-7 are signal flow diagrams of exemplary AI/ML model life cycle management (LCM) procedures between a UE and a first network node, according to various embodiments of the present disclosure.

Figures 8-10 are signal flow diagrams of exemplary AI/ML model LCM procedures between a UE, a first network node, and a second network node, according to various embodiments of the present disclosure.

Figure 11 shows a flow diagram of an exemplary method (e.g., procedure) for a first network node (e.g., base station, eNB, gNB, ng-eNB, etc.), according to various embodiments of the present disclosure.

Figure 12 shows a flow diagram of an exemplary method (e.g., procedure) for a UE (e.g., wireless device), according to various embodiments of the present disclosure.

Figure 13 shows a flow diagram of an exemplary method (e.g., procedure) for a second network node (e.g., base station, eNB, gNB, ng-eNB, etc.), according to various embodiments of the present disclosure.

Figure 14 shows a communication system according to various embodiments of the present disclosure.

Figure 15 shows a UE according to various embodiments of the present disclosure.

Figure 16 shows a network node according to various embodiments of the present disclosure.

Figure 17 shows a host computing system according to various embodiments of the present disclosure.

Figure 18 is a block diagram of a virtualization environment in which functions implemented by some embodiments of the present disclosure may be virtualized.

Figure 19 illustrates communication between a host computing system, a network node, and a UE via multiple connections, at least one of which is wireless, according to various embodiments of the present disclosure.

DETAILED DESCRIPTION

Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein. Rather, these embodiments are provided as examples to convey the scope of the subject matter to those skilled in the art.

Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods and/or procedures disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein can be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments can apply to any other embodiments, and vice versa. Other objects, features, and advantages of the enclosed embodiments will be apparent from the following description. Furthermore, the following terms are used throughout the description given below:

• Radio Access Node: As used herein, a "radio access node" (or equivalently "radio network node," "radio access network node," or "RAN node") can be any node in a radio access network (RAN) that operates to wirelessly transmit and/or receive signals. Some examples of a radio access node include, but are not limited to, a base station (e.g., a gNB in a 3GPP 5G/NR network or an enhanced or evolved Node B (eNB) in a 3GPP LTE network), base station distributed components (e.g., CU and DU), a high-power or macro base station, a low-power base station (e.g., micro, pico, femto, or home base station, or the like), an integrated access backhaul (IAB) node, a transmission point (TP), a transmission reception point (TRP), a remote radio unit (RRU or RRH), and a relay node.

• Core Network Node: As used herein, a "core network node” is any type of node in a core network. Some examples of a core network node include, e.g., a Mobility Management Entity (MME), a serving gateway (SGW), a PDN Gateway (P-GW), a Policy and Charging Rules Function (PCRF), an access and mobility management function (AMF), a session management function (SMF), a user plane function (UPF), a Charging Function (CHF), a Policy Control Function (PCF), an Authentication Server Function (AUSF), a location management function (LMF), or the like.

• Wireless Device: As used herein, a "wireless device” (or "WD” for short) is any type of device that is capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other wireless devices. Communicating wirelessly can involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air. Unless otherwise noted, the term "wireless device” is used interchangeably herein with the term "user equipment” (or "UE” for short), with both of these terms having a different meaning than the term "network node”.

• Radio Node: As used herein, a "radio node” can be either a "radio access node” (or equivalent term) or a "wireless device.”

• Network Node: As used herein, a "network node" is any node that is either part of the radio access network (e.g., a radio access node or equivalent term) or of the core network (e.g., a core network node discussed above) of a cellular communications network. Functionally, a network node is equipment capable, configured, arranged, and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the cellular communications network, to enable and/or provide wireless access to the wireless device, and/or to perform other functions (e.g., administration) in the cellular communications network.

• Base station: As used herein, a "base station" may comprise a physical or a logical node transmitting or controlling the transmission of radio signals, e.g., eNB, gNB, ng-eNB, en-gNB, centralized unit (CU)/distributed unit (DU), transmitting radio network node, transmission point (TP), transmission reception point (TRP), remote radio head (RRH), remote radio unit (RRU), Distributed Antenna System (DAS), relay, etc.

• Node: As used herein, the term "node" (without prefix) can be any type of node that can operate in or with a wireless network (including RAN and/or core network), including a radio access node (or equivalent term), core network node, or wireless device. However, the term "node" may be limited to a particular type (e.g., radio access node) based on its specific characteristics in any given context of use.

The above definitions are not meant to be exclusive. In other words, various ones of the above terms may be explained and/or described elsewhere in the present disclosure using the same or similar terminology. Nevertheless, to the extent that such other explanations and/or descriptions conflict with the above definitions, the above definitions should control.

Note that the description given herein focuses on a 3GPP cellular communications system and, as such, 3GPP terminology or terminology similar to 3GPP terminology is oftentimes used. However, the concepts disclosed herein are not limited to a 3GPP system. Furthermore, although the term "cell" is used herein, it should be understood that (particularly with respect to 5G NR) beams may be used instead of cells and, as such, concepts described herein apply equally to both cells and beams. 5G/NR technology shares many similarities with LTE. For example, NR uses CP-OFDM (Cyclic Prefix Orthogonal Frequency Division Multiplexing) in the DL and both CP-OFDM and DFT-spread OFDM (DFT-S-OFDM) in the UL. As another example, in the time domain, NR DL and UL physical resources are organized into equal-sized 1-ms subframes. A subframe is further divided into multiple slots of equal duration, with each slot including multiple OFDM-based symbols. However, time-frequency resources can be configured much more flexibly for an NR cell than for an LTE cell. For example, rather than a fixed 15-kHz OFDM sub-carrier spacing (SCS) as in LTE, NR SCS can range from 15 to 240 kHz, with even greater SCS considered for future NR releases.

In addition to providing coverage via cells as in LTE, NR networks also provide coverage via "beams." In general, a downlink (DL, i.e., network to UE) "beam" is a coverage area of a network-transmitted reference signal (RS) that may be measured or monitored by a UE. In NR, for example, RS can include any of the following: synchronization signal/PBCH block (SSB), channel state information RS (CSI-RS), tertiary reference signals (or any other sync signal), positioning RS (PRS), demodulation RS (DMRS), phase-tracking reference signals (PTRS), etc. In general, SSB is available to all UEs regardless of the state of their connection with the network, while other RS (e.g., CSI-RS, DM-RS, PTRS) are associated with specific UEs that have a network connection.

The radio resource control (RRC) protocol controls communications between the UE and gNB at the radio interface, as well as mobility of a UE between cells in the NG-RAN. RRC also broadcasts system information (SI) and performs establishment, configuration, maintenance, and release of data radio bearers (DRBs) and signaling radio bearers (SRBs) used by UEs. Additionally, RRC controls addition, modification, and release of carrier aggregation (CA) and dual-connectivity (DC) configurations for UEs. RRC also performs various security functions such as key management.

After a UE is powered ON it will be in the RRC_IDLE state until an RRC connection is established with the network, at which time the UE will transition to RRC_CONNECTED state (e.g., where data transfer can occur). The UE returns to RRC_IDLE after the connection with the network is released. In RRC_IDLE state, the UE's radio is active on a discontinuous reception (DRX) schedule configured by upper layers. During DRX active periods (also referred to as "DRX On durations"), an RRC_IDLE UE receives SI broadcast in the cell where the UE is camping, performs measurements of neighbor cells to support cell reselection, and monitors a paging channel on PDCCH for pages from 5GC via gNB. An NR UE in RRC_IDLE state is not known to the gNB serving the cell where the UE is camping. However, NR RRC includes an RRC_INACTIVE state in which a UE is known (e.g., via UE context) by the serving gNB. RRC_INACTIVE has some properties similar to a "suspended" condition used in LTE.
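
The NR RRC state behavior described above can be summarized as a small state machine. The following sketch is a simplified illustration of those transitions only; the event names are assumptions, and it omits many details of the actual RRC procedures.

```python
# Simplified NR RRC state transitions (RRC_IDLE / RRC_CONNECTED / RRC_INACTIVE)
# as described in the surrounding text. Event names are illustrative.
RRC_TRANSITIONS = {
    ("RRC_IDLE", "connection_established"): "RRC_CONNECTED",
    ("RRC_CONNECTED", "connection_released"): "RRC_IDLE",
    ("RRC_CONNECTED", "connection_suspended"): "RRC_INACTIVE",
    ("RRC_INACTIVE", "connection_resumed"): "RRC_CONNECTED",
    ("RRC_INACTIVE", "connection_released"): "RRC_IDLE",
}

def next_state(state: str, event: str) -> str:
    # Events with no defined transition leave the state unchanged.
    return RRC_TRANSITIONS.get((state, event), state)

# A UE powers on in RRC_IDLE, connects, and is later suspended to RRC_INACTIVE.
state = "RRC_IDLE"
state = next_state(state, "connection_established")
state = next_state(state, "connection_suspended")
```

In RRC_INACTIVE the UE context is retained by the serving gNB, which is what distinguishes it from RRC_IDLE in the text above.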

As briefly mentioned above, LTE Rel-12 introduced dual connectivity (DC) whereby a UE in RRC_CONNECTED state can be connected to two network nodes simultaneously, thereby improving connection robustness and/or capacity. In LTE DC, these two network nodes are referred to as "Master eNB” (MeNB) and "Secondary eNB” (SeNB). More generally, these two network nodes are referred to as master node (MN) and secondary node (SN). A UE is configured with a Master Cell Group (MCG) associated with the MN and a Secondary Cell Group (SCG) associated with the SN.

Each of these groups of serving cells includes one MAC entity, a set of logical channels with associated RLC entities, a primary cell (PCell or PSCell), and optionally one or more secondary cells (SCells). The term "Special Cell" (or "SpCell" for short) refers to the PCell of the MCG or the PSCell of the SCG, depending on whether the UE's MAC entity is associated with the MCG or the SCG, respectively. In non-DC operation (e.g., CA), SpCell refers to the PCell. An SpCell is always activated and supports physical uplink control channel (PUCCH) transmission and contention-based random access by UEs.

The MN provides SI and terminates the control plane (CP) connection towards the UE and, as such, is the UE's controlling node, including for handovers to and from SNs. For example, the MN terminates the connection between the RAN (e.g., eNB) and the MME for an LTE UE. The reconfiguration, addition, and removal of SCells can be performed by RRC. When adding a new SCell, dedicated RRC signaling is used to send the UE all required SI of the SCell, such that UEs need not acquire SI directly from the SCell broadcast. In addition, either or both of the MCG and the SCG can include multiple cells working in CA.

Both MN and SN can terminate the user plane (UP) to the UE. For example, the LTE DC UP includes three different types of bearers. MCG bearers are terminated in the MN, and the SN is not involved in the transport of UP data for MCG bearers. Likewise, SCG bearers are terminated in the SN, and the MN is not involved in the transport of UP data for SCG bearers. Finally, split bearers (and their corresponding S1-U connections to the S-GW) are also terminated in the MN; however, PDCP data for split bearers is transferred between MN and SN via X2-U, so both MN and SN are involved in transmitting data for split bearers.

3GPP TR 38.804 (v14.0.0) describes various exemplary DC scenarios or configurations in which the MN and SN can apply NR, LTE, or both. The following terminology is used to describe these exemplary DC scenarios or configurations:

• DC: LTE DC (i.e., both MN and SN employ LTE, as discussed above);

• EN-DC: LTE-NR DC where MN (eNB) employs LTE and SN (gNB) employs NR, and both are connected to EPC.

• NGEN-DC: LTE-NR dual connectivity where a UE is connected to one ng-eNB that acts as an MN and one gNB that acts as an SN. The ng-eNB is connected to the 5GC and the gNB is connected to the ng-eNB via the Xn interface.

• NE-DC: LTE-NR dual connectivity where a UE is connected to one gNB that acts as an MN and one ng-eNB that acts as an SN. The gNB is connected to the 5GC and the ng-eNB is connected to the gNB via the Xn interface.

• NR-DC (or NR-NR DC): both MN and SN employ NR.

• MR-DC (multi-RAT DC): a generalization of the Intra-E-UTRA Dual Connectivity (DC) described in 3GPP TS 36.300 (v16.3.0), where a multiple Rx/Tx UE may be configured to utilize resources provided by two different nodes connected via non-ideal backhaul, one providing E-UTRA access and the other one providing NR access. One node acts as the MN and the other as the SN. The MN and SN are connected via a network interface and at least the MN is connected to the core network. EN-DC, NE-DC, and NGEN-DC are different example cases of MR-DC.

As also mentioned above, 3GPP document RP-201620 defines a study item (SI) on "Enhancement for Data Collection for NR and EN-DC”, which aims to study the functional framework for RAN intelligence enabled by further data collection through use cases, examples etc., and to identify the potential standardization impacts on current NG-RAN nodes and interfaces. For example, some objectives include:

• Study high-level principles for RAN intelligence enabled by AI, the AI functionality, input/output of the component for AI-enabled optimization;

• Identify benefits of AI-enabled NG-RAN such as energy saving, load balancing, mobility management, coverage optimization, etc.;

• Study standardization impacts for the identified use cases including: the data that may be needed by an AI function as input and data that may be produced by an AI function as output, which is interpretable for multi-vendor support;

• Study standardization impacts on the node or function in current NG-RAN architecture to receive/provide the input/output data; and

• Study standardization impacts on the network interface(s) to convey the input/output data among network nodes or AI functions.

As part of this SI work, 3GPP has released 3GPP Technical Report (TR) 37.817, which describes high-level principles that should be applied for AI-enabled RAN intelligence. This document also includes Figure 3, which is a block diagram of an exemplary framework for RAN intelligence based on AI/ML models. 3GPP TR 37.817 (v1.1.0) describes the following high-level principles in the context of Figure 3:

• Detailed AI/ML algorithms and models for use cases are implementation specific and out of RAN3 scope.

• Focus on AI/ML functionality and corresponding types of inputs/outputs.

• Input/output and the location of the Model Training and Model Inference function should be studied case by case.

• Focus on the analysis of data needed at the Model Training function from Data Collection, while the aspects of how the Model Training function uses inputs to train a model are out of RAN3 scope.

• Focus on the analysis of data needed at the Model Inference function from Data Collection, while the aspects of how the Model Inference function uses inputs to derive outputs are out of RAN3 scope.

• Where AI/ML functionality resides within the current RAN architecture depends on deployment and on the specific use cases.

• Model Training and Model Inference functions should be able to request, if needed, specific information to be used to train or execute the AI/ML algorithm and to avoid reception of unnecessary information. The nature of such information depends on the use case and on the AI/ML algorithm.

• Model Inference function should signal the outputs of the model only to nodes that have explicitly requested them (e.g., via subscription), or nodes that take actions based on the output from Model Inference.

• An AI/ML model used in a Model Inference function has to be initially trained, validated and tested before deployment.

The Data Collection block in Figure 3 is a function that provides input data to the Model Training and Model Inference functions (described below). AI/ML algorithm-specific data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) is not carried out in the Data Collection function. Examples of input data include measurements from UEs or different network entities, feedback from the Actor block (described below), and output from an AI/ML model.

The Model Training block in Figure 3 is a function that performs the ML model training, validation, and testing. The testing may generate model performance metrics. The Model Training function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on Training Data provided by the Data Collection function, if required.

The Model Training block includes a Model Deployment/Update procedure that is used to initially deploy a trained, validated, and tested AI/ML model to the Model Inference function, as well as to provide model updates to the Model Inference function. Details of the Model Deployment/Update procedure and the use case-specific AI/ML models transferred via this procedure are out of scope of the Rel-17 SI. The feasibility of single-vendor or multi-vendor environment has not been studied in the Rel-17 SI.

The Model Inference block in Figure 3 is a function that provides AI/ML model outputs such as predictions or decisions. The Model Inference function may provide model performance feedback to the Model Training function, when applicable. The Model Inference function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on the Inference Data provided by the Data Collection function, if required. Details of the inference outputs and the model performance feedback are out of scope of the Rel-17 SI.

The Actor block in Figure 3 is a function that receives the output from the Model Inference function and triggers or performs corresponding actions. The Actor may trigger actions directed to itself or to other entities. The Actor can also provide Feedback Information that may be needed to derive the Training Data, the Inference Data, or the performance feedback.
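
The interplay of the four framework blocks described above (Data Collection feeding Model Training and Model Inference, with the Actor consuming inferences and returning feedback) can be sketched schematically. In the sketch below, all class and method names are illustrative assumptions, and the "model" is a trivial stand-in (it predicts the mean of its training data) rather than a real AI/ML algorithm.

```python
# Schematic wiring of the Figure 3 framework blocks; names are illustrative.

class DataCollection:
    """Provides Training Data and Inference Data to the other blocks."""
    def __init__(self):
        self.samples = []
    def add(self, sample):
        self.samples.append(sample)
    def training_data(self):
        return list(self.samples)
    def inference_data(self):
        return self.samples[-1] if self.samples else None

class ModelTraining:
    """Trains a model; here, a stand-in that predicts the training-data mean."""
    def train(self, data):
        mean = sum(data) / len(data)
        return lambda x: mean

class ModelInference:
    """Produces model outputs (predictions/decisions) from Inference Data."""
    def __init__(self, model):
        self.model = model  # deployed via the Model Deployment/Update procedure
    def infer(self, data):
        return self.model(data)

class Actor:
    """Acts on inference output and returns Feedback Information."""
    def __init__(self, collector):
        self.collector = collector
    def act(self, output):
        self.collector.add(output)  # feedback flows back into Data Collection
        return output

collector = DataCollection()
for measurement in (1.0, 2.0, 3.0):   # e.g., measurements from UEs
    collector.add(measurement)
model = ModelTraining().train(collector.training_data())
inference = ModelInference(model)
action = Actor(collector).act(inference.infer(collector.inference_data()))
```

The loop mirrors the figure: data is collected, a model is trained and deployed to inference, the Actor consumes the output, and its feedback re-enters Data Collection for subsequent training.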

3GPP TR 37.817 (v1.1.0) also identifies three use cases for RAN Intelligence: network energy savings, load balancing, and mobility optimization. 3GPP TR 37.817 (v1.1.0) describes that the following solutions can be used to support AI/ML-based network energy saving:

• AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB.

• AI/ML Model Training and AI/ML Model Inference are both located in the gNB.

• gNB can continue Model Training based on AI/ML model trained in the OAM.

In case of CU-DU split architecture, the following solutions are possible:

• AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB-CU.

• AI/ML Model Training and Model Inference are both located in the gNB-CU.

3GPP TR 37.817 (v1.1.0) also describes that the following solutions can be used to support AI/ML-based Mobility Optimization:

• AI/ML Model Training function is deployed in OAM, while the Model Inference function resides within the RAN node (e.g., gNB).

• Both the AI/ML Model Training function and the AI/ML Model Inference function reside within the RAN node (e.g., gNB).

• gNB can continue Model Training based on AI/ML model trained in the OAM.

• For CU-DU split architecture, AI/ML Model Training is located in CU-CP or OAM, and AI/ML Model Inference function is located in CU-CP.

3GPP TR 37.817 (v1.1.0) also describes that the following solutions can be used to support AI/ML-based Load Balancing:

• AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB.

• AI/ML Model Training and AI/ML Model Inference are both located in the gNB.

• gNB can continue Model Training based on AI/ML model trained in the OAM.

In case of CU-DU split architecture, the following solutions are possible:

• AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB-CU.

• AI/ML Model Training and Model Inference are both located in the gNB-CU.

• Other possible locations of the AI/ML Model Inference are FFS.

3GPP document R3-215244 introduces a Model Management function in the exemplary framework shown in Figure 3. In particular, Figure 4 is a block diagram of another exemplary framework for RAN intelligence based on AI/ML models, including this Model Management functionality. In this framework, model deployment/update should be decided and performed by Model Management instead of by Model Training. Model Management may also host a model repository in which models are stored. Model Management should also control the Model Training function, e.g., by requesting model training and receiving the trained model(s).

Model Management should also support model performance monitoring, which is used to assist and control Model Inference. The model performance feedback from Model Inference should be first sent to Model Management. If the performance is not ideal, Model Management may decide to fallback to a traditional algorithm or change/update the model being used.

Model Management may be hosted by OAM, gNB-CU, or other network entity(ies) depending on the use case. There is a need to clearly define the Model Management function in order to facilitate design and analysis of signaling needed to support the framework shown in Figure 4.

In some cases, UEs may be enabled and/or configured to use AI/ML functionality when operating in a RAN (e.g., NG-RAN). When a UE re-trains, modifies, or updates an AI/ML model that it uses for operation in the RAN, the model behavior (or policy) can change drastically such that the UE's new model produces different results for a given set of inputs than before being re-trained, modified or updated. Such change in behavior can affect the RAN's performance and/or ability to serve the UE. This problem is exacerbated when the RAN is trying to serve many UEs that use different models and/or model versions for the same operations towards the network. Thus, if the RAN is not aware of how, when, and why a UE re-trains, updates, or modifies an AI/ML model and the resulting behavior, the RAN may not be able to serve the UE in the best way.

Additionally, UE operation can be severely affected upon re-training, modifying, or updating an AI/ML model, thereby causing improper behavior. If the RAN then needs to use excessive network resources (e.g., on the radio interface and/or the transport interface) to serve this UE, then this can reduce capacity and resources available for other UEs operating in the RAN. This can result in poor performance for these other UEs.

Currently, however, the RAN does not have adequate information about or control over the life cycle management (LCM) of AI/ML models used by UEs.

Accordingly, embodiments of the present disclosure provide flexible and efficient techniques for a first network node to control the LCM of an AI/ML model executed by a UE. Controlling the LCM can include allowing, instructing, or configuring the UE to re-train, test, verify, validate, modify, enable, disable, replace, revoke, or update the AI/ML model. Additionally, the first network node could, in some cases, provide an AI/ML model to a UE together with instructions, configuration, and/or recommendations for the LCM of the AI/ML model. In some examples, instructions, configuration and/or recommendations for the LCM may comprise any of the following:

• instructions, configurations, and/or recommendations as to whether or how the UE can or should re-train, modify, or update at least an AI/ML model;

• set of conditions to trigger the UE to re-train, modify or update at least an AI/ML model;

• set of conditions to trigger the UE to test, verify, validate, or evaluate at least an AI/ML model;

• set of conditions to trigger the UE to enable, disable, replace or revoke at least an AI/ML model; and/or

• instructions, configuration, and/or recommendations for the UE to report whether or how the UE has re-trained or modified the AI/ML model.

Embodiments also include complementary methods performed by the UE and by a second network node.

Embodiments can be summarized at a high level as follows. Some embodiments include methods performed by a first network node to control and enable a UE to perform the LCM of an AI/ML model available at or deployed at the UE in a radio communication network. The methods can include transmitting a first message to the UE, wherein the first message includes one or more of the following:

• at least one AI/ML model to be available and/or deployed in or at the UE;

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• LCM information associated with at least one AI/ML model available and/or deployed in or at the UE.

The methods can also include receiving a second message from the UE, where the second message is one of the following:

• an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or

• an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.
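As a concrete illustration of the message alternatives summarized above, the first message can be thought of as a container carrying one or more of the listed elements. The following Python sketch is purely illustrative; the class and field names are hypothetical and are not defined by the embodiments or by any 3GPP specification:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical container for the first message; field names are illustrative only.
@dataclass
class FirstMessage:
    ai_ml_model: Optional[bytes] = None      # AI/ML model to be deployed in or at the UE
    lcm_request: Optional[dict] = None       # LCM request for a deployed model
    lcm_information: Optional[dict] = None   # LCM information for a deployed model

    def is_valid(self) -> bool:
        # The first message carries at least one of the three alternatives.
        return any(x is not None for x in
                   (self.ai_ml_model, self.lcm_request, self.lcm_information))
```

A second message from the UE could be modeled analogously, carrying either an LCM request or an LCM report.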

As described in more detail below, various embodiments of the methods performed by the first network node include the following:

• different combination of information that can be transmitted/received between the first network node and the UE;

• examples of types of LCM information that can be provided by the first network node to the UE;

• examples of information that the first network node can request the UE to report in association to the LCM of an AI/ML model available and/or deployed in or at the UE; and/or

• examples of information that the UE can request the first network node to provide in association to the LCM of an AI/ML model available and/or deployed in or at the UE.

In some embodiments, the methods can also include receiving a third message from a second network node, the third message including one or more of the following:

• LCM information associated with at least an AI/ML model available and/or deployed in or at the UE;

• an AI/ML model to be available and/or deployed in or at the UE; and

• a request to transmit to the second network node an LCM report associated with at least an AI/ML model available and/or deployed in or at the UE.

In such embodiments, the methods can also include transmitting a fifth message to the second network node, where the fifth message is an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

Other embodiments include methods performed by a UE operating in a RAN. The methods can include receiving a first message from a first network node and transmitting a second message to the first network node, with the first and second messages having the same characteristics as summarized above in relation to the first network node embodiments. For example, the second message may provide a notification to the first network node that the UE has re-trained or modified an AI/ML model.

Other embodiments include methods performed by a second network node. These methods can include transmitting a third message to a first network node and receiving a fifth message from the first network node, with the third message and the fifth message having the same characteristics as summarized above in relation to the first network node embodiments.

In the following description of embodiments, the following groups of terms and/or abbreviations have the same or substantially similar meanings and, as such, are used interchangeably and/or synonymously unless specifically noted or unless a different meaning is clear from a specific context of use:

• "training”, "optimizing”, "optimization”, and "updating” of models;

• "changing” and "modifying” of models, which are used to indicate that a type, structure, parameters, connectivity, etc. of a model is different than it was before the change or modification;

• "model”, "policy”, "algorithm”, "AI/ML model”, "AI/ML policy”, "AI/ML algorithm”. In general, embodiments disclosed herein are applicable to any type of ML used in a RAN. Non-limiting examples include supervised learning, deep learning, reinforcement learning, contextual multi -armed bandit algorithms, autoregression algorithms, etc. or combinations thereof . Such algorithms may exploit functional approximation models, also referred to as AI/ML models, including feedforward neural networks, deep neural networks, recurrent neural networks, convolutional neural networks, etc.

Specific examples of reinforcement learning algorithms include deep reinforcement learning algorithms such as deep Q-network (DQN), proximal policy optimization (PPO), double Q-learning, policy gradient algorithms, off-policy learning algorithms, actor-critic algorithms, and advantage actor-critic algorithms (e.g., A2C, A3C, actor-critic with experience replay, etc.).

The embodiments summarized above will now be described in more detail.

Embodiments of the method performed by the first network node do not restrict the order of the first message transmitted by the first network node and the second message received by the first network node. In some embodiments, the first network node may transmit the first message to the UE prior to receiving a second message from the UE. In this case, the second message is received in response to the first message. As an example, the first message in such embodiments can include any of the following:

• at least one AI/ML model to be available and/or deployed in or at the UE;

• LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and/or

• a request to transmit to the first network node an LCM report associated with at least an AI/ML model available and/or deployed in or at the UE.

Figure 5 is a signal flow diagram of an exemplary AI/ML model LCM procedure between a UE (520) and a first network node (510), according to these embodiments. As shown in Figure 5, the first message includes LCM information associated with at least an AI/ML model available and/or deployed in or at the UE. Alternately (not shown in Figure 5), the first message can include at least one AI/ML model to be available and/or deployed in or at the UE. In this alternative, the LCM information may be associated with the AI/ML model provided by the first network node to the UE or to another AI/ML model available at or deployed at the UE.

In response to the first message, the first network node may receive from the UE a second message including an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE. The LCM report can indicate, for example, whether the UE has re-trained or modified the AI/ML model. In this case, the UE may provide such information without any explicit request from the first network node.

Figure 6 is a signal flow diagram of another exemplary AI/ML model LCM procedure between a UE (520) and a first network node (510), according to other embodiments. As shown in Figure 6, the first message includes (or is) an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE. The request may indicate that the UE should provide an LCM report when the UE re-trains or modifies any of the AI/ML models identified by or associated with the request. After retraining, modifying, or updating any of these AI/ML models, the UE sends the second message including the LCM report to the first network node, as requested by the first message.

In some embodiments, the first network node may indicate to the UE (e.g., in the first message) the AI/ML model associated with the LCM request. For example, after providing an AI/ML model to the UE, the first network node requests that the UE provides LCM report(s) with feedback information related to the provisioned AI/ML model.

In some embodiments, the first network node may also provide (e.g., in the first message) LCM information associated with at least one AI/ML model. In this case, the first network node may provide an AI/ML model to the UE with a set of information related to its LCM, and request the UE to report LCM information back to the first network node when the UE re-trains, modifies, or updates the AI/ML model provided by the first network node.

In response to the first message, the first network node may receive a second message including an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE. The LCM report can indicate, for example, whether or how the UE has re-trained or modified the AI/ML model(s). In general, the UE provides the LCM report in response to an explicit request from the first network node, e.g., in the first message shown in Figure 6.

In other embodiments, however, the first network node may receive the second message from the UE without previously transmitting the first message to the UE. In this case, the first network node may transmit the first message in response to receiving a second message from the UE. Figure 7 is a signal flow diagram of another exemplary AI/ML model LCM procedure between a UE (520) and a first network node (510), according to these embodiments.

As illustrated in Figure 7, the second message may include an LCM request for the first network node to provide LCM information associated with at least one AI/ML model available and/or deployed in or at the UE. In some cases, at least one AI/ML model may have been provided by the first network node to the UE. This is illustrated by the further first message with the dotted line in Figure 7.

The first message sent after the second message in Figure 7 can include LCM information associated with the at least one AI/ML model of the UE that was indicated in the LCM request of the second message. For example, the second message may include an identifier (or an identity) of the AI/ML model for which the UE is requesting LCM information. In such case, the first network node may provide in the responsive first message the LCM information associated with the identifier.

In another example of the embodiments illustrated by Figure 7, the second message from the UE may include a request for the first network node to indicate whether the UE can re-train or modify an AI/ML model (which may or may not have been provided previously by the first network node). In this example, the first message sent after the second message in Figure 7 can include one of the following:

• an indication that the UE cannot re-train or modify the AI/ML model;

• an indication that the UE can re-train or modify the AI/ML model; or

• recommendation or suggestion for the UE to re-train or modify the AI/ML model.

In this example, the second message may include an identifier (or an identity) of the AI/ML model for which the UE is requesting permission to re-train or modify. The first network node can indicate in the responsive first message whether the AI/ML model corresponding to the identifier can be retrained or modified (or make a recommendation to that effect).

The first message can be transmitted by the first network node to the UE as one or more of:

• a message sent upon establishment (or re-establishment) of a UE connection to the RAN (e.g., RRCSetup or RRCReestablishment);

• a message used to transfer information to the UE (e.g., RRC DLInformationTransfer);

• a message sent as part of handover execution, as part of the handover command to be sent to the UE (e.g., RRCReconfiguration);

• a broadcast message (e.g., a system information (SI) message or block);

• a paging message;

• a message sent upon completion of procedures to set up Non-Access Stratum (NAS) security and/or Access Stratum (AS) security;

• a message sent during a procedure when the UE is transitioned from RRC_INACTIVE to RRC_CONNECTED state (e.g., RRCResume).

In some embodiments, the LCM request transmitted from the first network node to the UE (e.g., in Figure 6) can include one or more of the following identifiers:

• identifier (or an identity) of at least one AI/ML model associated with the request;

• identifier of a specific version of at least one AI/ML model associated with the request.

In some embodiments, the LCM request transmitted from the first network node to the UE (e.g., in Figure 6) can include one or more of the following specific requests:

• to indicate whether the UE re-trained or modified the identified AI/ML model(s);

• to test, verify, validate, or evaluate the identified AI/ML model(s);

• to enable, disable, revoke, stop, start, or restart the identified AI/ML model(s).

In some embodiments, the LCM request transmitted from the first network node to the UE (e.g., in Figure 6) can include a request to provide information related to features and/or characteristics of the identified AI/ML model(s). Such information could include one or more of the following:

• indication of the AI/ML model structure, such as number of hidden layers, number of hidden units per layer, type of activation functions used in each hidden layer, connectivity degree between layers, etc. Examples of activation functions include hyperbolic tangent function, sinusoidal function, rectified linear unit (“relu”) function, sigmoid function, softmax function, linear function, etc.

• indication of type of neural network (NN) used in the identified AI/ML model. Examples include feed forward NN, convolutional NN, recurrent NN, graph NN, etc.

• indication of number and/or type of input features used to modify the identified AI/ML model(s).

In some embodiments, the LCM request transmitted from the first network node to the UE (e.g., in Figure 6) can include a request to provide information related to re-training of the identified AI/ML model(s). Such information can include one or more of the following:

• indication of loss function used to re-train the identified AI/ML model(s).

• indication of reward function used to re-train the identified AI/ML model(s).

• indication of type of algorithm or optimizer used to re-train the identified AI/ML model(s), such as stochastic gradient based method, gradient based methods, etc.

• information related to type and/or amount of training data used to re-train the identified AI/ML model(s).

• information related to the exploration strategy (e.g., epsilon-greedy exploration) and the associated parameters used to re-train the identified AI/ML model(s).

• list of training parameters that the UE used to re-train the identified AI/ML model(s). Examples of training parameters can include batch size, learning rate, number of training steps, number of training iterations, and data scaling.

In some embodiments, the LCM request transmitted from the first network node to the UE (e.g., in Figure 6) can include a request related to testing, verification, validation, or evaluation of the identified AI/ML model(s). For example, the requested information can include any of the following:

• to test, verify, validate, or evaluate the identified AI/ML model(s) after re-training, updating, or modifying the AI/ML model(s).

• for performance of the re-trained AI/ML model(s).

• for one or more performance metrics derived by the UE for the identified AI/ML model(s) after retraining, updating or modifying. In one example, the requested performance metrics could be related to testing, validation, verification, and/or evaluation of the re-trained AI/ML model.

• for one or more performance metrics derived by the UE for the AI/ML model upon re-training, updating, or modifying, based on a set of reference data samples provided by the first network node to the UE. In one example, the requested performance metrics could be related to one or more operations in the group of testing, validation, verification, or evaluation of the re-trained AI/ML model.

In some variants, the LCM request can include a reference data set that the UE can use or is instructed to use to test, verify, validate, or evaluate the AI/ML model as requested (e.g., that the requested performance metrics should be based on the set of reference data samples). For example, the reference data set can include a set of reference input-output pairs, where each reference output value (also referred to as "ground truth") represents an output that is expected to occur when the corresponding reference input value is applied to the model for verification.

The following scenario illustrates these embodiments in more detail. After setting up, applying, installing, or otherwise instantiating the AI/ML model, a network node can provide the reference inputs to the AI/ML model and compare the outputs produced to the reference outputs. Based on this comparison, the network node can determine whether the AI/ML model was correctly set up, secured, applied, installed, or instantiated so that it performs as expected.
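The verification scenario above can be sketched as follows. This is a minimal illustration, assuming the model is a plain callable producing numeric outputs and the reference data set is a list of input-output pairs checked against a configured tolerance; both the function name and the tolerance parameter are assumptions, not part of the embodiments:

```python
# Minimal sketch of verifying a deployed model against a reference data set.
# The callable-model interface, pair format, and tolerance are assumptions.
def verify_model(model, reference_pairs, tolerance=1e-3):
    """Return True if every output produced by the model matches the
    corresponding reference ("ground truth") output within the tolerance."""
    return all(abs(model(ref_in) - ref_out) <= tolerance
               for ref_in, ref_out in reference_pairs)
```

A network node could apply a check of this kind after model deployment to decide whether the model was correctly set up or instantiated.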

As another example, the reference data set can include reference state-action pairs. Each reference action represents an expected output of the model or a decision of an AI/ML algorithm using the AI/ML model, when configuring the AI/ML model with the reference state.

In some embodiments, model verification can also be done by a different network node than the one that initially provided the AI/ML model to the UE. For example, if the first network node provided the AI/ML model, a second network node can verify the AI/ML model by providing the reference data set to the UE and comparing the resulting model output. For example, if the second network node decides to re-train the AI/ML model provided by the first network node, the second network node can use the reference data set to determine whether the re-trained model performs as expected (at least within an acceptable range).

In some embodiments, the LCM request from the first network node can also include conditions that trigger the UE to perform the requested actions. The model identifier included in the request can be considered an example of such conditions. As another example, the UE can be requested to disable the output of an AI/ML model while re-training of that model is active.

When the first message includes both an AI/ML model and LCM information, the LCM information may be associated with the AI/ML model in the first message or with another AI/ML model available and/or deployed in or at the UE, which may or may not have been provided by the first network node. In some embodiments, the LCM information provided by the first network node to the UE, in association with an AI/ML model available and/or deployed in or at the UE, may include one or more of the following:

• identifier (or an identity) of at least one AI/ML model associated with the request;

• identifier of a specific version of at least one AI/ML model associated with the request;

• indication that the UE can (or cannot) re-train or modify the identified AI/ML model;

• instruction, suggestion, or recommendation for the UE to re-train or modify the identified AI/ML model;

• instruction, suggestion, or recommendation for the UE to test, verify, validate, or evaluate the identified AI/ML model;

• instruction, suggestion, or recommendation for the UE to enable, disable, revoke, stop using, start using, or restart using the identified AI/ML model.

In some embodiments, the LCM information provided by the first network node to the UE may further comprise one or more conditions or events associated with an indication, instruction, suggestion, or recommendation included in the LCM information. Some example conditions or events include the following:

• AI/ML model performance degrades below a threshold (or reference value) or if it falls between two thresholds (or reference values);

• a minimum duration of degraded AI/ML model performance, e.g., below the threshold;

• performance of a radio feature dependent on the AI/ML model degrades below a threshold (or reference value) or if it falls between two thresholds (or reference values); and

• change in distributions of one or more data elements (or input feature) used as input to the AI/ML model.

One example change in distributions is when an average value increases above (or decreases below) a reference value for at least a minimum duration. Another example change is when a standard deviation or variance of at least one input feature increases above a reference value for at least a minimum duration. In any of the above examples, the threshold(s) or reference value(s) can be configured by the first network node. Additionally, in any of the above examples, the first network node can specify a minimum duration of a degradation, e.g., AI/ML model performance below the threshold.
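One way the UE could evaluate the first example condition above (model performance staying below a configured threshold for at least a minimum duration) is sketched below; the class name and its interface are hypothetical, while the threshold and minimum duration correspond to values the first network node could configure:

```python
# Illustrative sketch of the "degraded performance for a minimum duration"
# trigger condition. The interface is a hypothetical assumption.
class DegradationMonitor:
    def __init__(self, threshold, min_duration):
        self.threshold = threshold        # configured by the first network node
        self.min_duration = min_duration  # minimum duration of degradation
        self._degraded_since = None

    def update(self, timestamp, performance):
        """Feed one performance sample; return True when performance has
        stayed below the threshold for at least min_duration."""
        if performance >= self.threshold:
            self._degraded_since = None   # performance recovered; reset
            return False
        if self._degraded_since is None:
            self._degraded_since = timestamp
        return timestamp - self._degraded_since >= self.min_duration
```

A two-threshold variant (trigger when performance falls between two reference values) could be built analogously.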

In some embodiments, the LCM information provided by the first network node to the UE can also include one or more instructions, policies, or recommendations related to re-training identified AI/ML model(s). Some examples are given below:

• loss function to be used to re-train the identified AI/ML model(s);

• reward function to be used to re-train the identified AI/ML model(s);

• one or more algorithms or optimizers to be used to re-train the identified AI/ML model(s), such as gradient, stochastic gradient, etc.;

• one or more exploration strategies to be used to re-train the identified AI/ML model (s), such as epsilon- greedy exploration;

• list of training parameters to be used to re-train the identified AI/ML model(s), such as minimum and/or maximum batch size, learning rate, training steps, training iterations, data scaling factors, etc.;

• indication of parts of the identified AI/ML model(s) that the UE can re-train, e.g., which layers of a NN model that the UE can (or cannot) retrain; and

• type and amount of training data used to re-train the identified AI/ML model(s).

In some embodiments, the LCM information provided by the first network node to the UE can also include one or more instructions, policies, or recommendations related to modifying the AI/ML model. Some examples are given below:

• indication that the AI/ML model can (or cannot) be modified;

• indication of one or more type of modifications that are allowed to the identified AI/ML model(s); and

• indication of parts of the identified AI/ML model(s) that the UE can modify, e.g., what layers of an NN-based AI/ML model that the UE can or cannot modify.

The following are some more specific examples of the types of modifications that the LCM information can indicate as allowed (or not allowed) for the UE:

• one or more techniques that can be used to modify the AI/ML model provided by the first network node, such as feed forward NN, convolutional NN, recurrent neural network, graph NN, linear regression, logistic regression, decision tree, etc.;

• number of hidden layers that can be used, e.g., maximum, minimum, or required;

• size of hidden layers that can be used, e.g., maximum number of units per layer, minimum number of units per layer, required number of units per layer, etc.;

• a type of activation function that can be used per layer, such as hyperbolic tangent function, sinusoidal function, rectified linear unit (“relu”) function, sigmoid function, softmax function, linear function, etc.; and

• whether a number and/or type of input features can be changed.

In some embodiments, the LCM information provided by the first network node to the UE can also include one or more conditions or events to be fulfilled for the UE to transmit an indication (e.g., in the second message) that the UE has updated, re-trained, modified, enabled, disabled, stopped, started, restarted, or revoked an AI/ML model provided by the first network node. As an example, such conditions can include changes to the environment in which the AI/ML model is applied. As a more specific example, if the model is used for predictions of radio parameters, events can include changes to radio planning, changes to radio conditions, etc. The following provide some additional examples of events or conditions that can trigger the reporting by the UE:

• Based on reference inputs provided by the first network node, outputs produced by the updated, re-trained and/or modified AI/ML model differ from corresponding reference outputs provided by the first network node by at least a threshold. For example, the reference outputs can be the outputs produced by the AI/ML model provided by the first network node for the same reference inputs.

• Based on a reference data set provided by the first network node or a set of inputs selected by the UE, outputs produced by the updated, re-trained, and/or modified AI/ML model differ from outputs produced by the previous AI/ML model (i.e., prior to retraining) by at least a threshold.

• When model parameters of the updated or re-trained AI/ML model differ from corresponding model parameters of the AI/ML model provided by the first network node by at least a threshold.

• When model parameters of the updated or re-trained AI/ML model differ from corresponding model parameters of the previous AI/ML model by at least a threshold.

• When hyperparameters of the modified AI/ML model differ from corresponding hyperparameters of the AI/ML model provided by the first network node.

• When hyperparameters of the modified AI/ML model differ from corresponding hyperparameters of the previous AI/ML model (i.e., before modification).
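The parameter-difference triggers above can be illustrated with a short sketch. Here the difference is measured as the Euclidean (L2) norm of the element-wise parameter difference, which is one plausible choice among several; neither this metric nor the function name comes from the embodiments themselves:

```python
import math

# Hypothetical check for one reporting trigger: the re-trained model's
# parameters differ from the originally provided model's parameters by at
# least a threshold (measured here as an L2 norm, an assumed metric).
def should_report_update(provided_params, retrained_params, threshold):
    """Return True if the parameter difference meets or exceeds the threshold,
    i.e., the UE should report the update to the first network node."""
    diff = math.sqrt(sum((a - b) ** 2
                         for a, b in zip(provided_params, retrained_params)))
    return diff >= threshold
```

An analogous comparison could be made between the updated model and the previous model (i.e., prior to re-training) rather than the node-provided model.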

In some embodiments, the LCM information provided by the first network node to the UE can also include an indication of one or more performance metrics that the UE can or should use for testing, verifying, validating, or evaluating the identified AI/ML model(s) upon retraining, updating, or modifying the AI/ML model(s).

As explained above, the second message received by the first network node from the UE may be one of the following:

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

As an example, at least part of the content of the second message can be signaled by extending an existing RRC message, such as UEInformationResponse, ULInformationTransfer, etc.

In some embodiments, the second message received by the first network node from the UE includes a request for the first network node to provide LCM information associated with at least one AI/ML model available and/or deployed in or at the UE. The UE may receive an AI/ML model from the first network node (e.g., in the further first message shown in Figure 7) and subsequently request LCM information associated with the earlier-provided AI/ML model (e.g., in the second message shown in Figure 7). In some embodiments, the LCM request transmitted by the UE can include one or more of the following:

• Identifier (or an identity) of at least one AI/ML model associated with the request.

• Identifier of a specific version of at least one AI/ML model associated with the request.

• Request to indicate whether the UE can re-train or modify the identified AI/ML model(s). For example, this request can be associated with a duration of the connection between the UE and the first network node. In another example, it could be a general request for the usage of the AI/ML model in the network.

• Request for one or more conditions or events that trigger retraining, modifying, enabling, disabling, stopping, starting, restarting, and/or revoking the identified AI/ML model(s).

• Request for types of modifications that are allowed to the identified AI/ML model(s), with some example types of modifications listed above in embodiments of the LCM information message.

• Request for instructions, policies, or recommendations related to re-training the AI/ML model, with some example instructions, policies, or recommendations listed above in embodiments of the LCM information message.

• Request for information related to testing, validation, or evaluation of the identified AI/ML model(s) that the UE has re-trained or modified. For example, the UE can request a performance metric and/or a reference data set to be used for these purposes.

In some embodiments, the second message comprises an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE. In one example, where the UE receives an AI/ML model from the first network node via a first message, the LCM report may provide a notification to the first network node that the UE has re-trained or modified the AI/ML model provided by the first network node.

In some embodiments illustrated in Figure 6, the first network node may receive the second message from the UE in response to the first message transmitted by the first network node, in particular when the first message includes an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE. In this case, information provided by the LCM report in the second message may be based on or related to information requested by the first network node in the first message.

In various embodiments, the LCM report received by the first network node from the UE in the second message can include one or more of the following:

• identifier of at least one AI/ML model, which may have been previously provided to the UE by the first network node or by a different network node;

• indication that UE has (or has not) re-trained the identified AI/ML model(s);

• indication that UE will (or will not) re-train the identified AI/ML model(s);

• indication that UE has (or has not) modified the identified AI/ML model(s);

• indication that UE will (or will not) modify the identified AI/ML model(s);

• reason why the AI/ML model will be modified, e.g., performance degradation, excess computation complexity, excess memory consumption, etc.; and

• request to test, verify, validate or evaluate the identified AI/ML model(s).

In some embodiments, the LCM report received by the first network node from the UE in the second message can also include information related to modifications performed by the UE on the identified AI/ML model(s). Such information could include one or more of the following:

• indication of AI/ML model structure, such as number of hidden layers, number of hidden units per layer, type of activation functions (e.g., hyperbolic tangent function, sinusoidal function, rectified linear unit function, sigmoid function, softmax function, linear function, etc.) used in each hidden layer, connectivity degree between layers, etc.;

• indication of technique (e.g., NN) used to modify the identified AI/ML model(s), such as feed forward NN, convolutional NN, recurrent NN, graph NN, linear regression, logistic regression, decision trees, etc.; and

• indication of the number and/or type of input features used to modify the identified AI/ML model(s).

In some embodiments, the LCM report received by the first network node from the UE in the second message can also include information related to techniques the UE used for re-training the identified AI/ML model(s), such as loss function, reward function, type of algorithm or optimizer, type and/or amount of training data used, exploration strategy and associated parameters used (e.g., epsilon-greedy exploration), and list of training parameters used (e.g., batch size, learning rate, training steps, training iterations, data scaling factors, etc.).
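As a non-normative sketch, the LCM report fields described above (model identifier, re-training/modification indications, and re-training details) might be grouped into a structure such as the following. All field names and types are illustrative assumptions; the disclosure does not define an encoding, and in practice such a report could be carried as an extension of an existing RRC message.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LcmReport:
    """Illustrative container for an LCM report sent by a UE.
    Field names are assumptions, not defined by the disclosure."""
    model_id: str                              # identifier of the AI/ML model
    model_version: Optional[str] = None        # specific version, if any
    retrained: Optional[bool] = None           # UE has (or has not) re-trained
    will_modify: Optional[bool] = None         # UE will (or will not) modify
    modification_reason: Optional[str] = None  # e.g., "performance degradation"
    loss_function: Optional[str] = None        # re-training technique details
    optimizer: Optional[str] = None
    training_params: dict = field(default_factory=dict)  # e.g., batch size, learning rate

# Example report: UE re-trained a CSI model due to performance degradation
report = LcmReport(model_id="csi-1", retrained=True,
                   modification_reason="performance degradation",
                   training_params={"batch_size": 32, "learning_rate": 1e-3})
```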

In some embodiments, the LCM report received by the first network node from the UE in the second message can also include information related to testing, validation, or evaluation of the identified AI/ML model(s) that the UE has re-trained or modified. This can include one or more of the following:

• indication of performance of the re-trained AI/ML model, e.g., when tested, verified, validated, or evaluated using a dataset provided by the first network node; and

• input/output pairs associated with the re-trained AI/ML model.

In some embodiments, the LCM report received by the first network node from the UE in the second message can also include one or more conditions or events that triggered the UE to re-train, modify, stop using, or disable the identified AI/ML model(s). These can include any of the conditions or events mentioned above in the description of the LCM information sent to the UE in the first message. However, it is not necessary that conditions or events reported by the UE in the second message be provided and/or configured by the first network node. This could be the case, for example, if the UE sends the second message before receiving the first message, such as illustrated in Figure 7.

As another example, the condition or event indicated by the UE can be a change in a statistical distribution of one or more input features for the identified AI/ML model(s). This can be detected, for example, based on the mean and/or standard deviation of the input feature(s) increasing above (or decreasing below) corresponding reference value(s) for at least a minimum time period.
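A minimal sketch of that drift check follows, assuming a sliding window of recent samples and symmetric tolerances around the reference mean and standard deviation; the minimum-time-period condition is omitted for brevity, and all names are illustrative.

```python
from statistics import mean, stdev

def distribution_changed(window, ref_mean, ref_std, mean_tol, std_tol):
    """Detect a change in the statistical distribution of an input feature:
    the observed mean and/or standard deviation of a window of recent
    samples moves outside a tolerance band around the reference values.
    (The 'for at least a minimum time period' condition is omitted.)"""
    m, s = mean(window), stdev(window)
    return abs(m - ref_mean) > mean_tol or abs(s - ref_std) > std_tol

# Example: an input feature whose mean has drifted well above the reference
distribution_changed([3.0, 3.1, 2.9, 3.2, 2.8],
                     ref_mean=1.0, ref_std=0.15,
                     mean_tol=0.5, std_tol=0.5)  # True: mean drifted by 2.0
```

In a real UE, the reference statistics would typically come from the training data distribution of the deployed model, so that drift indicates the model may no longer match its operating conditions.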

In some embodiments, the first network node can also transmit a third message to a second network node, with the third message being one of the following:

• an LCM request associated with at least one AI/ML model available and/or deployed in or at a UE; or

• an LCM report associated with at least one AI/ML model used by a UE.

In some embodiments, the first network node can receive a fourth message from the second network node, with the fourth message being one of the following:

• an AI/ML model for deployment in or at a UE;

• LCM information associated with at least an AI/ML model for a UE; or

• an LCM request associated with at least one AI/ML model available and/or deployed in or at a UE.

In some embodiments, the third message and/or the fourth message can be signaled over an interface between the first and second network nodes, such as the X2, Xn, F1, E1, NG, and S1 interfaces that can be part of a 3GPP system.

The sequential order of the third and fourth messages can vary depending on embodiment. In some embodiments, the fourth message may be transmitted by the second network node to the first network node in response to the third message. In other embodiments, the third message may be transmitted by the first network node to the second network node without any prior request from the second network node. Furthermore, a particular UE identified in the third message and/or the fourth message can be connected to the first network node (e.g., to a radio cell of the first network node), in the coverage area of the first network node, or about to be handed over to the first network node.

Figure 8 is a signal flow diagram of an exemplary AI/ML model LCM procedure between a UE (820), a first network node (810), and a second network node (830), according to some embodiments of the present disclosure. In these embodiments, the first network node receives from the second network node a fourth message including an LCM request associated with at least one AI/ML model available and/or deployed in or at a UE connected to the first network node. In response to the fourth message, the first network node transmits to the UE a first message including an LCM request associated with the AI/ML model(s) identified by or associated with the fourth message. In some embodiments, the first message may forward to the UE all or part of the LCM request received from the second network node. In some embodiments, the first network node may derive LCM information to send to the UE in the first message based on the information included in the fourth message from the second network node.

As shown in Figure 8, in response to the fourth message, the first network node may transmit to the second network node a third message including an LCM report associated with the AI/ML model(s) identified by or associated with the fourth message.

Figure 9 is a signal flow diagram of another exemplary AI/ML model LCM procedure between a UE (820), a first network node (810), and a second network node (830), according to other embodiments of the present disclosure. In these embodiments, the first network node receives from the second network node a fourth message including LCM information associated with at least one AI/ML model available and/or deployed in or at a UE. In some embodiments, the first network node may forward to the UE (e.g., in the first message) all or part of the LCM information received from the second network node.

Figure 10 is a signal flow diagram of another exemplary AI/ML model LCM procedure between a UE (820), a first network node (810), and a second network node (830), according to other embodiments of the present disclosure. In these embodiments, the first network node initially transmits to the second network node a third message including an LCM request associated with at least one AI/ML model available and/or deployed in or at a UE connected to the first network node. In response, the first network node receives from the second network node a fourth message including LCM information associated with the AI/ML models identified by or associated with the third message. The first network node then forwards all or part of the LCM information to the UE in a first message. If the UE subsequently responds with a second message (including any of the contents discussed above), the first network node can send the second network node a (further) third message including an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE. These AI/ML model(s) can be the same as or different from the AI/ML model(s) identified in the LCM request (i.e., the third message sent before receiving the fourth message).
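As a reading aid only, the Figure-10 exchange can be summarized as an ordered sequence of (sender, receiver, message) tuples; the message labels are those used in the text, and the tuple representation is purely illustrative.

```python
# Ordering of the Figure-10 exchange described above (illustrative summary)
figure_10_flow = [
    ("first network node", "second network node", "third message (LCM request)"),
    ("second network node", "first network node", "fourth message (LCM information)"),
    ("first network node", "UE", "first message (LCM information)"),
    ("UE", "first network node", "second message (e.g., LCM report)"),
    ("first network node", "second network node", "further third message (LCM report)"),
]
```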

In various embodiments, contents of the LCM request included in the third message (Figure 10) or the fourth message (Figure 8) can include any of the contents described above for the LCM request included in the first message (Figure 6) or the second message (Figure 7).

Other embodiments include exemplary methods performed by a UE operating in a RAN, for LCM of at least one AI/ML model available and/or deployed in or at the UE. These exemplary methods include receiving from a first network node a first message, which corresponds to the first message discussed above in relation to first network node embodiments. These exemplary methods can also include transmitting to the first network node a second message, which corresponds to the second message discussed above in relation to first network node embodiments. The contents of the first and second messages will not be repeated here for the sake of brevity.

In various embodiments, the AI/ML model(s) subject to LCM can be used by the UE for various purposes and/or operations, including any of the following:

• estimating and/or compressing channel state information (CSI), where CSI can include any of rank, channel quality indicator (CQI), precoding matrix indicator (PMI), signal to noise ratio (SNR), signal to interference plus noise ratio (SINR), reference signal received power (RSRP), etc.;

• beam management;

• positioning;

• link adaptation parameters, such as modulation order, modulation and coding scheme (MCS), etc.;

• energy saving for the UE or for the network (e.g., determining or estimating performance of an energy saving configuration for the UE);

• signal quality; and/or

• estimating UE traffic.

Other embodiments include exemplary methods performed by a second network node, for LCM of at least one AI/ML model available and/or deployed in or at a UE operating in a RAN. These exemplary methods include receiving from a first network node a third message, which corresponds to the third message discussed above in relation to first network node embodiments. These exemplary methods can also include transmitting to the first network node a fourth message, which corresponds to the fourth message discussed above in relation to first network node embodiments. The contents of the third and fourth messages will not be repeated here for the sake of brevity.

In various embodiments, the first and second network nodes can have any of the following arrangements and/or characteristics:

• different RAN nodes (e.g., two gNBs, two eNBs, two en-gNBs, two ng-eNBs, etc.);

• different units/functions of one RAN node (e.g., gNB-CU-CP and gNB-DU, gNB-CU-CP and gNB-CU-UP, etc.);

• the first network node can be a first RAN node (e.g., gNB, eNB, en-gNB, ng-eNB, etc.) and the second network node can be a unit/function of a second RAN node (e.g., gNB-CU-CP);

• use the same Radio Access Technology (RAT, e.g., E-UTRAN, NG-RAN, WiFi, etc.) or different RATs (e.g., NR and either E-UTRAN or WiFi);

• part of the same RAN (e.g., E-UTRAN or NG-RAN) or different RANs (e.g., one in NG-RAN and one in E-UTRAN);

• connected via a direct signaling connection (e.g., XnAP) or an indirect signaling connection (e.g., via S1AP or NGAP to/from the CN);

• one of the first and second network nodes can be a RAN node (or unit/function thereof) while the other can be part of an operations/administration/maintenance (OAM) system or a service management and orchestration (SMO) function; and

• one of the first and second network nodes can be a RAN node (or unit/function thereof) while the other can be a CN node or function.

As mentioned above, the third message and/or the fourth message can be signaled over an interface between the first and second network nodes, such as the X2, Xn, F1, E1, NG, and S1 interfaces that can be part of a 3GPP system. The particular interface(s) will depend on the particular types of the first and second network nodes, such as the examples listed above.
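For reference, the node-type-to-interface relationship implied above can be sketched as a simple lookup; the dictionary keys are illustrative labels (not protocol identifiers), and the mapping reflects standard 3GPP architecture rather than anything specific to this disclosure.

```python
# Illustrative mapping of node-type pairs to 3GPP interfaces (per standard
# 3GPP architecture); key labels are informal, not protocol identifiers.
INTERFACE_BETWEEN = {
    ("gNB", "gNB"): "Xn",              # between NG-RAN nodes
    ("eNB", "eNB"): "X2",              # between E-UTRAN nodes
    ("gNB-CU", "gNB-DU"): "F1",        # CU/DU split within one gNB
    ("gNB-CU-CP", "gNB-CU-UP"): "E1",  # CP/UP split within one gNB-CU
    ("NG-RAN node", "5GC"): "NG",      # RAN to 5G core
    ("E-UTRAN node", "EPC"): "S1",     # RAN to EPC core
}
```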

Various features of the embodiments described above correspond to various operations illustrated in Figures 11-13, which show exemplary methods (e.g., procedures) for a first network node, a UE, and a second network node, respectively. In other words, various features of the operations described below correspond to various embodiments described above. Furthermore, the exemplary methods shown in Figures 11-13 can be used cooperatively to provide various benefits, advantages, and/or solutions to problems described herein. Although Figures 11-13 show specific blocks in particular orders, the operations of the exemplary methods can be performed in different orders than shown and can be combined and/or divided into blocks having different functionality than shown. Optional blocks or operations are indicated by dashed lines.

In particular, Figure 11 shows an exemplary method (e.g., procedure) for LCM of AI/ML models in UEs operating in a RAN, according to various embodiments of the present disclosure. The exemplary method can be performed by a first network node (e.g., base station, eNB, gNB, ng-eNB, etc., or unit/function thereof, CN node, OAM, SMO, etc.) such as described elsewhere herein.

The exemplary method can include the operations of block 1130, where the first network node can transmit to a UE a first message that includes one of the following:

• at least one AI/ML model to be available and/or deployed in or at the UE;

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• LCM information associated with at least one AI/ML model available and/or deployed in or at the UE.

The exemplary method can also include the operations of block 1140, where the first network node can receive from the UE a second message that includes one of the following:

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

In some embodiments, the second message is received in response to transmitting the first message, the second message includes the LCM report, and the first message includes the LCM request or the LCM information. Figures 5-6 show examples of these embodiments.

In some embodiments, the LCM report in the second message includes one or more of the following about the LCM performed by the UE:

• identifier of at least one AI/ML model;

• indication of whether the UE has re-trained the identified at least one AI/ML model;

• indication of whether the UE will re-train the identified at least one AI/ML model;

• indication of whether the UE has modified the identified at least one AI/ML model;

• indication of whether the UE will modify the identified at least one AI/ML model;

• reason why the identified at least one AI/ML model will be modified;

• request to test, verify, validate or evaluate the identified at least one AI/ML model;

• information related to modifications performed by the UE on the identified at least one AI/ML model;

• information related to techniques the UE used for re-training the identified at least one AI/ML model;

• information related to testing, validation, or evaluation of the identified at least one AI/ML model that the UE has re-trained or modified; and

• one or more conditions or events that triggered the UE to re-train, modify, stop using, or disable the identified at least one AI/ML model.

In some embodiments, the exemplary method can also include the operations of block 1120, where the first network node can receive from a second network node a fourth message including one of the following:

• at least one AI/ML model to be available and/or deployed in or at the UE;

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• LCM information associated with at least one AI/ML model available and/or deployed in or at the UE.

In such embodiments, the first message is transmitted in response to receiving the fourth message. Figures 8-10 show examples of these embodiments.

In some of these embodiments, the first message is transmitted in response to receiving the fourth message and the first message includes at least part of the information received in the fourth message. In some of these embodiments, the exemplary method can also include the operations of block 1150, where the first network node can transmit to the second network node a third message that includes one of the following:

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

In some of these embodiments, the third message is transmitted in response to receiving the second message and includes at least part of the information received in the second message. Figures 8-9 show examples of these embodiments.

In other of these embodiments, the fourth message is received in response to transmitting the third message, the third message includes the LCM request, and the fourth message includes the LCM information. Figure 10 shows an example of these embodiments. In such embodiments, the exemplary method can also include the operations of block 1160, where the first network node can transmit to the second network node a further third message that includes an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE. The further third message is transmitted in response to receiving the second message from the UE.

In other embodiments, the first message is transmitted in response to receiving the second message, the second message includes the LCM request, and the first message includes the LCM information or the at least one AI/ML model. Figure 7 shows an example of these embodiments. In some of these embodiments, the exemplary method can also include the operations of block 1150, where the first network node can transmit to the UE a further first message including the at least one AI/ML model. In such case, the second message is received in response to transmitting the further first message and the first message includes the LCM information associated with the at least one AI/ML model.

In various embodiments, any one of the following can apply:

• the first and second network nodes are different network nodes in a RAN;

• the first and second network nodes are different units or functions of one network node in a RAN;

• the first and second network nodes are in different RANs; or

• one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a core network (CN) node or function, a service management and orchestration (SMO) function, or part of an operations/administration/ maintenance (OAM) system.

In addition, Figure 12 shows an exemplary method (e.g., procedure) for LCM of AI/ML models, according to various embodiments of the present disclosure. The exemplary method can be performed by a UE (e.g., wireless device) operating in a RAN, such as UEs described elsewhere herein.

The exemplary method can include the operations of block 1220, where the UE can receive from the first network node a first message that includes one of the following:

• at least one AI/ML model to be available and/or deployed in or at the UE;

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• LCM information associated with at least one AI/ML model available and/or deployed in or at the UE.

The exemplary method can also include the operations of block 1240, where the UE can transmit to the first network node a second message that includes one of the following:

• an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or

• an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

In some embodiments, the second message is transmitted in response to receiving the first message, the second message includes the LCM report, and the first message includes the LCM request or the LCM information. Figures 5-6 and 8-10 show examples of these embodiments.

In some embodiments, the exemplary method can also include the operations of block 1230, where the UE can perform LCM of the at least one AI/ML model in accordance with the LCM request or the LCM information included in the first message. In such case, the LCM report in the second message includes one or more of the following about the LCM performed by the UE:

• identifier of at least one AI/ML model;

• indication of whether the UE has re-trained the identified at least one AI/ML model;

• indication of whether the UE will re-train the identified at least one AI/ML model;

• indication of whether the UE has modified the identified at least one AI/ML model;

• indication of whether the UE will modify the identified at least one AI/ML model;

• reason why the identified at least one AI/ML model will be modified;

• request to test, verify, validate or evaluate the identified at least one AI/ML model;

• information related to modifications performed by the UE on the identified at least one AI/ML model;

• information related to techniques the UE used for re-training the identified at least one AI/ML model;

• information related to testing, validation, or evaluation of the identified at least one AI/ML model that the UE has re-trained or modified; and

• one or more conditions or events that triggered the UE to re-train, modify, stop using, or disable the identified at least one AI/ML model.

In other embodiments, the first message is received in response to transmitting the second message, the second message includes the LCM request, and the first message includes the LCM information or the at least one AI/ML model. In some of these embodiments, the exemplary method can also include the operations of block 1210, where the UE can receive from the first network node a further first message including the at least one AI/ML model. The second message is transmitted in response to receiving the further first message, and the first message includes the LCM information associated with the at least one AI/ML model. Figure 7 shows an example of these embodiments.

In some embodiments, the exemplary method can also include the operations of block 1250, where the UE can apply the at least one AI/ML model for one or more of the following UE operations in the RAN:

• estimating and/or compressing CSI;

• beam management;

• positioning;

• link adaptation;

• estimating UE and/or network energy saving for a UE configuration;

• estimating signal quality; and

• estimating UE traffic.

In addition, Figure 13 shows an exemplary method (e.g., procedure) for LCM of AI/ML models in UEs operating in a RAN, according to various embodiments of the present disclosure. The exemplary method can be performed by a second network node (e.g., base station, eNB, gNB, ng-eNB, etc., or unit/function thereof, CN node, OAM, SMO, etc.) such as described elsewhere herein.

The exemplary method can include the operations of block 1310, where the second network node can transmit to a first network node a fourth message that includes one of the following:

• at least one AI/ML model to be available and/or deployed in or at a UE connected to the first network node;

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• LCM information associated with at least one AI/ML model available and/or deployed in or at the UE.

The exemplary method can also include the operations of block 1320, where the second network node can receive from the first network node a third message that includes one of the following:

• an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or

• an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

In some embodiments, the third message is received in response to transmitting the fourth message, the third message includes the LCM report, and the fourth message includes the LCM information or the LCM request. Figures 8-9 show examples of these embodiments. In some of these embodiments, the LCM report in the third message includes one or more of the following about the LCM performed by the UE:

• identifier of at least one AI/ML model;

• indication of whether the UE has re-trained the identified at least one AI/ML model;

• indication of whether the UE will re-train the identified at least one AI/ML model;

• indication of whether the UE has modified the identified at least one AI/ML model;

• indication of whether the UE will modify the identified at least one AI/ML model;

• reason why the identified at least one AI/ML model will be modified;

• request to test, verify, validate, or evaluate the identified at least one AI/ML model;

• information related to modifications performed by the UE on the identified at least one AI/ML model;

• information related to techniques the UE used for re-training the identified at least one AI/ML model;

• information related to testing, validation, or evaluation of the identified at least one AI/ML model that the UE has re-trained or modified; and

• one or more conditions or events that triggered the UE to re-train, modify, stop using, or disable the identified at least one AI/ML model.

In other embodiments, the fourth message is transmitted in response to receiving the third message, the third message includes the LCM request, and the fourth message includes the LCM information. Figure 10 shows an example of these embodiments. In some of these embodiments, the exemplary method can also include the operations of block 1330, where the second network node can receive from the first network node a further third message that includes an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE. The further third message is received in response to transmitting the fourth message.

In various embodiments, any one of the following can apply:

• the first and second network nodes are different network nodes in a RAN;

• the first and second network nodes are different units or functions of one network node in a RAN;

• the first and second network nodes are in different RANs; or

• one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a CN node or function, an SMO function, or part of an OAM system.

Although various embodiments are described above in terms of methods, techniques, and/or procedures, the person of ordinary skill will readily comprehend that such methods, techniques, and/or procedures can be embodied by various combinations of hardware and software in various systems, communication devices, computing devices, control devices, apparatuses, non-transitory computer-readable media, computer program products, etc.

Figure 14 shows an example of a communication system 1400 in accordance with some embodiments. In this example, the communication system 1400 includes a telecommunication network 1402 that includes an access network 1404, such as a radio access network (RAN), and a core network 1406, which includes one or more core network nodes 1408. In some embodiments, telecommunication network 1402 can also include one or more Network Management (NM) nodes 1420, which can be part of an operation support system (OSS), a business support system (BSS), and/or an OAM system. The NM nodes can monitor and/or control operations of other nodes in access network 1404 and core network 1406. Although not shown in Figure 14, NM node 1420 can be configured to communicate with other nodes in access network 1404 and core network 1406 for these purposes.

Access network 1404 includes one or more access network nodes, such as network nodes 1410a-b (one or more of which may be generally referred to as network nodes 1410), or any other similar 3GPP access node or non-3GPP access point. Network nodes 1410 facilitate direct or indirect connection of UEs, such as by connecting UEs 1412a-d (one or more of which may be generally referred to as UEs 1412) to the core network 1406 over one or more wireless connections. In some embodiments, access network 1404 can include a service management and orchestration (SMO) system or node 1418, which can monitor and/or control operations of the access network nodes 1410. This arrangement can be used, for example, when access network 1404 utilizes an Open RAN (O-RAN) architecture. SMO system 1418 can be configured to communicate with core network 1406 and/or host 1416, as shown in Figure 14.

Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors. Moreover, in different embodiments, communication system 1400 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections. Communication system 1400 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.

UEs 1412 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with network nodes 1410 and other communication devices. Similarly, network nodes 1410 are arranged, capable, configured, and/or operable to communicate directly or indirectly with UEs 1412 and/or with other network nodes or equipment in telecommunication network 1402 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in telecommunication network 1402.

In the depicted example, core network 1406 connects network nodes 1410 to one or more hosts, such as host 1416. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts. Core network 1406 includes one or more core network nodes (e.g., core network node 1408) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 1408. Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).

Host 1416 may be under the ownership or control of a service provider other than an operator or provider of access network 1404 and/or telecommunication network 1402, and may be operated by the service provider or on behalf of the service provider. Host 1416 may host a variety of applications to provide one or more services. Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server. As a whole, communication system 1400 of Figure 14 enables connectivity between the UEs, network nodes, and hosts. In that sense, the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox.

In some examples, telecommunication network 1402 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunications network 1402 may support network slicing to provide different logical networks to different devices that are connected to telecommunication network 1402. For example, the telecommunications network 1402 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.

In some examples, UEs 1412 are configured to transmit and/or receive information without direct human interaction. For instance, a UE may be designed to transmit information to access network 1404 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from access network 1404. Additionally, a UE may be configured for operating in single- or multi-RAT or multi-standard mode. For example, a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e., being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).

In the example, hub 1414 communicates with access network 1404 to facilitate indirect communication between one or more UEs (e.g., UE 1412c and/or 1412d) and network nodes (e.g., network node 1410b). In some examples, hub 1414 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs. For example, hub 1414 may be a broadband router enabling access to core network 1406 for the UEs. As another example, hub 1414 may be a controller that sends commands or instructions to one or more actuators in the UEs. Commands or instructions may be received from the UEs, network nodes 1410, or by executable code, script, process, or other instructions in hub 1414. As another example, hub 1414 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data. As another example, hub 1414 may be a content source. For example, for a UE that is a VR headset, display, loudspeaker or other media delivery device, hub 1414 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which hub 1414 then provides to the UE either directly, after performing local processing, and/or after adding additional local content. In still another example, hub 1414 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low-energy IoT devices.

Hub 1414 may have a constant/persistent or intermittent connection to the network node 1410b. Hub 1414 may also allow for a different communication scheme and/or schedule between hub 1414 and UEs (e.g., UE 1412c and/or 1412d), and between hub 1414 and core network 1406. In other examples, hub 1414 is connected to core network 1406 and/or one or more UEs via a wired connection. Moreover, hub 1414 may be configured to connect to an M2M service provider over access network 1404 and/or to another UE over a direct connection. In some scenarios, UEs may establish a wireless connection with network nodes 1410 while still connected via hub 1414 via a wired or wireless connection. In some embodiments, hub 1414 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 1410b. In other embodiments, hub 1414 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and network node 1410b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.

Figure 15 shows a UE 1500 in accordance with some embodiments. Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless cameras, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart device, wireless customer-premise equipment (CPE), vehicle-mounted or vehicle embedded/integrated wireless device, etc. Other examples include any UE identified by 3GPP, including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.

A UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X). In other examples, a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller). Alternatively, a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter).

UE 1500 includes processing circuitry 1502 that is operatively coupled via a bus 1504 to an input/output interface 1506, a power source 1508, a memory 1510, a communication interface 1512, and/or any other component, or any combination thereof. Certain UEs may utilize all or a subset of the components shown in Figure 15. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.

Processing circuitry 1502 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in memory 1510. Processing circuitry 1502 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above. For example, processing circuitry 1502 may include multiple central processing units (CPUs).

In the example, input/output interface 1506 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices. Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof. An input device may allow a user to capture information into UE 1500. Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like. The presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user. A sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof. An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.

In some embodiments, power source 1508 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used. Power source 1508 may further include power circuitry for delivering power from power source 1508 itself, and/or an external power source, to the various parts of UE 1500 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of power source 1508. Power circuitry may perform any formatting, converting, or other modification to the power from power source 1508 to make the power suitable for the respective components of UE 1500 to which power is supplied.

Memory 1510 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth. In one example, memory 1510 includes one or more application programs 1514, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 1516. Memory 1510 may store, for use by UE 1500, any of a variety of operating systems or combinations of operating systems.

Memory 1510 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof. The UICC may for example be an embedded UICC (eUICC), integrated UICC (iUICC) or a removable UICC commonly known as 'SIM card.' Memory 1510 may allow UE 1500 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data. An article of manufacture, such as one utilizing a communication system may be tangibly embodied as or in memory 1510, which may be or comprise a device-readable storage medium.

Processing circuitry 1502 may be configured to communicate with an access network or other network using communication interface 1512. Communication interface 1512 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 1522. Communication interface 1512 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network). Each transceiver may include a transmitter 1518 and/or a receiver 1520 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth). Moreover, transmitter 1518 and receiver 1520 may be coupled to one or more antennas (e.g., antenna 1522) and may share circuit components, software or firmware, or alternatively be implemented separately.

In the illustrated embodiment, communication functions of communication interface 1512 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof. Communications may be implemented in accordance with one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiplexing Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.

Regardless of the type of sensor, a UE may provide an output of data captured by its sensors, through its communication interface 1512, via a wireless connection to a network node. Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE. The output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., an alert is sent when moisture is detected), in response to a request (e.g., a user initiated request), or a continuous stream (e.g., a live video feed of a patient).
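The reporting behaviors described above can be illustrated with a simplified, purely hypothetical sketch of a UE's reporting policy; the mode names, the 15-minute interval, and the threshold value are illustrative only and do not correspond to any standardized parameter:

```python
# Hypothetical reporting policy for a sensor-equipped UE.
# Mode names, interval, and threshold are illustrative only.
def should_report(mode, now, last_report, sensed_value, request_pending):
    if mode == "periodic":
        # e.g., once every 15 minutes (900 seconds)
        return now - last_report >= 900
    if mode == "event":
        # e.g., an alert when a sensed value (such as moisture) exceeds a threshold
        return sensed_value > 0.8
    if mode == "on_request":
        # e.g., a user-initiated request received from the network
        return request_pending
    if mode == "stream":
        # e.g., a continuous feed: report on every opportunity
        return True
    return False
```

Such a policy would run on the UE's processing circuitry, with the resulting report transmitted via communication interface 1512, either directly to a network node or relayed through another UE.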

As another example, a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection. In response to the received wireless input the states of the actuator, the motor, or the switch may change. For example, the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input, or may comprise a robotic arm that performs a medical procedure according to the received input.

A UE, when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare. Non-limiting examples of such an IoT device are a device which is or which is embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a head-mounted display for Augmented Reality (AR) or Virtual Reality (VR), a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal- or item-tracking device, a sensor for monitoring a plant or animal, an industrial robot, an Unmanned Aerial Vehicle (UAV), and any kind of medical device, like a heart rate monitor or a remote controlled surgical robot. A UE in the form of an IoT device comprises circuitry and/or software in dependence of the intended application of the IoT device in addition to other components as described in relation to UE 1500 shown in Figure 15.

As yet another specific example, in an IoT scenario, a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node. The UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device. As one particular example, the UE may implement the 3GPP NB-IoT standard. In other scenarios, a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.

In practice, any number of UEs may be used together with respect to a single use case. For example, a first UE might be or be integrated in a drone and provide the drone's speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone. When the user makes changes from the remote controller, the first UE may adjust the throttle on the drone (e.g., by controlling an actuator) to increase or decrease the drone's speed. The first and/or the second UE can also include more than one of the functionalities described above. For example, a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuators.

Figure 16 shows a network node 1600 in accordance with some embodiments. Examples of network nodes include, but are not limited to, access points (e.g., radio access points) and base stations (e.g., radio base stations, Node Bs, eNBs, gNBs, etc.).

Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).

Other examples of network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDT) nodes.

Network node 1600 includes processing circuitry 1602, memory 1604, communication interface 1606, and power source 1608. Network node 1600 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which network node 1600 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair may in some instances be considered a single separate network node. In some embodiments, network node 1600 may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g., separate memory 1604 for different RATs) and some components may be reused (e.g., a same antenna 1610 may be shared by different RATs). Network node 1600 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 1600, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 1600.

Processing circuitry 1602 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable, either alone or in conjunction with other network node 1600 components, such as memory 1604, to provide network node 1600 functionality.

In some embodiments, processing circuitry 1602 includes a system on a chip (SOC). In some embodiments, processing circuitry 1602 includes one or more of radio frequency (RF) transceiver circuitry 1612 and baseband processing circuitry 1614. In some embodiments, RF transceiver circuitry 1612 and baseband processing circuitry 1614 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 1612 and baseband processing circuitry 1614 may be on the same chip or set of chips, boards, or units.

Memory 1604 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry 1602. Memory 1604 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions (collectively denoted computer program product 1604a) capable of being executed by processing circuitry 1602 and utilized by network node 1600. Memory 1604 may be used to store any calculations made by processing circuitry 1602 and/or any data received via communication interface 1606. In some embodiments, processing circuitry 1602 and memory 1604 are integrated.

Communication interface 1606 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, communication interface 1606 comprises port(s)/terminal(s) 1616 to send and receive data, for example to and from a network over a wired connection. Communication interface 1606 also includes radio front-end circuitry 1618 that may be coupled to, or in certain embodiments a part of, antenna 1610. Radio front-end circuitry 1618 comprises filters 1620 and amplifiers 1622. Radio front-end circuitry 1618 may be connected to antenna 1610 and processing circuitry 1602. The radio front-end circuitry may be configured to condition signals communicated between antenna 1610 and processing circuitry 1602. Radio front-end circuitry 1618 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection. Radio front-end circuitry 1618 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 1620 and/or amplifiers 1622. The radio signal may then be transmitted via antenna 1610. Similarly, when receiving data, antenna 1610 may collect radio signals which are then converted into digital data by radio front-end circuitry 1618. The digital data may be passed to processing circuitry 1602. In other embodiments, the communication interface may comprise different components and/or different combinations of components.

In certain alternative embodiments, network node 1600 does not include separate radio front-end circuitry 1618; instead, processing circuitry 1602 includes radio front-end circuitry and is connected to antenna 1610. Similarly, in some embodiments, all or some of RF transceiver circuitry 1612 is part of communication interface 1606. In other embodiments, communication interface 1606 includes one or more ports or terminals 1616, radio front-end circuitry 1618, and RF transceiver circuitry 1612, as part of a radio unit (not shown), and communication interface 1606 communicates with the baseband processing circuitry 1614, which is part of a digital unit (not shown).

Antenna 1610 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. Antenna 1610 may be coupled to radio front-end circuitry 1618 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In certain embodiments, antenna 1610 is separate from network node 1600 and connectable to network node 1600 through an interface or port.

Antenna 1610, communication interface 1606, and/or processing circuitry 1602 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, antenna 1610, communication interface 1606, and/or processing circuitry 1602 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.

Power source 1608 provides power to the various components of network node 1600 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source 1608 may further comprise, or be coupled to, power management circuitry to supply the components of network node 1600 with power for performing the functionality described herein. For example, network node 1600 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of power source 1608. As a further example, power source 1608 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.

Embodiments of network node 1600 may include additional components beyond those shown in Figure 16 for providing certain aspects of the network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, network node 1600 may include user interface equipment to allow input of information into network node 1600 and to allow output of information from network node 1600. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for network node 1600.

Figure 17 is a block diagram of a host 1700, which may be an embodiment of host 1416 of Figure 14, in accordance with various aspects described herein. As used herein, host 1700 may be or comprise various combinations of hardware and/or software, including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, container, or processing resources in a server farm. Host 1700 may provide one or more services to one or more UEs.

Host 1700 includes processing circuitry 1702 that is operatively coupled via a bus 1704 to an input/output interface 1706, a network interface 1708, a power source 1710, and a memory 1712. Other components may be included in other embodiments. Features of these components may be substantially similar to those described with respect to the devices of previous figures, such as Figures 15 and 16, such that the descriptions thereof are generally applicable to the corresponding components of host 1700.

Memory 1712 may include one or more computer programs including one or more host application programs 1714 and data 1716, which may include user data, e.g., data generated by a UE for host 1700 or data generated by host 1700 for a UE. Embodiments of host 1700 may utilize only a subset or all of the components shown. Host application programs 1714 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), MPEG, VP9) and audio codecs (e.g., FLAC, Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for multiple different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, heads-up display systems). Host application programs 1714 may also provide for user authentication and licensing checks and may periodically report health, routes, and content availability to a central node, such as a device in or on the edge of a core network. Accordingly, host 1700 may select and/or indicate a different host for over-the-top services for a UE. Host application programs 1714 may support various protocols, such as the HTTP Live Streaming (HLS) protocol, Real-Time Messaging Protocol (RTMP), Real-Time Streaming Protocol (RTSP), Dynamic Adaptive Streaming over HTTP (MPEG-DASH), etc.

Figure 18 is a block diagram illustrating a virtualization environment 1800 in which functions implemented by some embodiments may be virtualized. In the present context, virtualizing means creating virtual versions of apparatuses or devices, which may include virtualizing hardware platforms, storage devices, and networking resources. As used herein, virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components. Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 1800 hosted by one or more hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host. Further, in embodiments in which the virtual node does not require radio connectivity (e.g., a core network node or host), the node may be entirely virtualized.

Applications 1802 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 1800 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.

Hardware 1804 includes processing circuitry, memory that stores software and/or instructions (collectively denoted computer program product 1804a) executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth. Software may be executed by the processing circuitry to instantiate one or more virtualization layers 1806 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 1808a-b (one or more of which may be generally referred to as VMs 1808), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein. Virtualization layer 1806 may present a virtual operating platform that appears like networking hardware to VMs 1808.

VMs 1808 comprise virtual processing, virtual memory, virtual networking or interface, and virtual storage, and may be run by a corresponding virtualization layer 1806. Different embodiments of the instance of a virtual appliance 1802 may be implemented on one or more of VMs 1808, and the implementations may be made in different ways. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers and customer premises equipment.

In the context of NFV, a VM 1808 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of the VMs 1808, together with that part of hardware 1804 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element. Still in the context of NFV, a virtual network function is responsible for handling specific network functions that run in one or more VMs 1808 on top of hardware 1804 and corresponds to application 1802.

Hardware 1804 may be implemented in a standalone network node with generic or specific components. Hardware 1804 may implement some functions via virtualization. Alternatively, hardware 1804 may be part of a larger cluster of hardware (e.g., such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 1810, which, among others, oversees lifecycle management of applications 1802. In some embodiments, hardware 1804 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signaling can be provided with the use of a control system 1812 which may alternatively be used for communication between hardware nodes and radio units.

Figure 19 shows a communication diagram of a host 1902 communicating via a network node 1904 with a UE 1906 over a partially wireless connection in accordance with some embodiments. Example implementations, in accordance with various embodiments, of the UE (such as a UE 1412a of Figure 14 and/or UE 1500 of Figure 15), network node (such as network node 1410a of Figure 14 and/or network node 1600 of Figure 16), and host (such as host 1416 of Figure 14 and/or host 1700 of Figure 17) discussed in the preceding paragraphs will now be described with reference to Figure 19.

Like host 1700, embodiments of host 1902 include hardware, such as a communication interface, processing circuitry, and memory. Host 1902 also includes software, which is stored in or accessible by host 1902 and executable by the processing circuitry. The software includes a host application that may be operable to provide a service to a remote user, such as UE 1906 connecting via an over-the-top (OTT) connection 1950 extending between UE 1906 and host 1902. In providing the service to the remote user, a host application may provide user data which is transmitted using OTT connection 1950.

Network node 1904 includes hardware enabling it to communicate with host 1902 and UE 1906. Connection 1960 may be direct or pass through a core network (like core network 1406 of Figure 14) and/or one or more other intermediate networks, such as one or more public, private, or hosted networks. For example, an intermediate network may be a backbone network or the Internet.

UE 1906 includes hardware and software, which is stored in or accessible by UE 1906 and executable by the UE's processing circuitry. The software includes a client application, such as a web browser or operator-specific "app” that may be operable to provide a service to a human or non-human user via UE 1906 with the support of host 1902. In host 1902, an executing host application may communicate with the executing client application via OTT connection 1950 terminating at UE 1906 and host 1902. In providing the service to the user, the UE's client application may receive request data from the host's host application and provide user data in response to the request data. OTT connection 1950 may transfer both the request data and the user data. The UE's client application may interact with the user to generate the user data that it provides to the host application through OTT connection 1950.

OTT connection 1950 may extend via a connection 1960 between host 1902 and network node 1904 and via a wireless connection 1970 between network node 1904 and UE 1906 to provide the connection between host 1902 and UE 1906. Connection 1960 and wireless connection 1970, over which OTT connection 1950 may be provided, have been drawn abstractly to illustrate the communication between host 1902 and UE 1906 via network node 1904, without explicit reference to any intermediary devices and the precise routing of messages via these devices.

As an example of transmitting data via OTT connection 1950, in step 1908, host 1902 provides user data, which may be performed by executing a host application. In some embodiments, the user data is associated with a particular human user interacting with UE 1906. In other embodiments, the user data is associated with a UE 1906 that shares data with host 1902 without explicit human interaction. In step 1910, host 1902 initiates a transmission carrying the user data towards UE 1906. Host 1902 may initiate the transmission responsive to a request transmitted by UE 1906. The request may be caused by human interaction with UE 1906 or by operation of the client application executing on UE 1906. The transmission may pass via network node 1904, in accordance with the teachings of the embodiments described throughout this disclosure. Accordingly, in step 1912, network node 1904 transmits to UE 1906 the user data that was carried in the transmission that host 1902 initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step 1914, UE 1906 receives the user data carried in the transmission, which may be performed by a client application executed on UE 1906 associated with the host application executed by host 1902.

In some examples, UE 1906 executes a client application which provides user data to host 1902. The user data may be provided in reaction or response to the data received from host 1902. Accordingly, in step 1916, UE 1906 may provide user data, which may be performed by executing the client application. In providing the user data, the client application may further consider user input received from the user via an input/output interface of UE 1906. Regardless of the specific manner in which the user data was provided, UE 1906 initiates, in step 1918, transmission of the user data towards host 1902 via network node 1904. In step 1920, in accordance with the teachings of the embodiments described throughout this disclosure, network node 1904 receives user data from UE 1906 and initiates transmission of the received user data towards host 1902. In step 1922, host 1902 receives the user data carried in the transmission initiated by UE 1906.
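Purely as an illustrative sketch, and not part of any claimed embodiment, the user-data exchange of Figure 19 can be modeled as three cooperating objects; the class names (Host, NetworkNode, UE) and payload contents are assumptions chosen for illustration only:

```python
class NetworkNode:
    """Forwards transmissions between a host and a UE (cf. steps 1912 and 1920)."""
    def __init__(self):
        self.log = []

    def relay(self, payload, destination):
        # Record the forwarded transmission, then deliver it to its destination.
        self.log.append((payload, type(destination).__name__))
        return destination.receive(payload)


class UE:
    """Client application that consumes and produces user data."""
    def __init__(self):
        self.received = []

    def receive(self, payload):              # cf. step 1914
        self.received.append(payload)
        return payload

    def respond(self, node, host):           # cf. steps 1916 and 1918
        return node.relay({"user_data": "client-reply"}, host)


class Host:
    """Host application that provides user data."""
    def __init__(self):
        self.received = []

    def receive(self, payload):              # cf. step 1922
        self.received.append(payload)
        return payload

    def send(self, node, ue):                # cf. steps 1908 and 1910
        return node.relay({"user_data": "host-data"}, ue)


node, ue, host = NetworkNode(), UE(), Host()
host.send(node, ue)      # host -> network node -> UE
ue.respond(node, host)   # UE -> network node -> host
```

The sketch preserves the key property of the flow: every transmission in either direction passes through, and is observable at, the network node.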

One or more of the various embodiments improve the performance of OTT services provided to UE 1906 using OTT connection 1950, in which the wireless connection 1970 forms the last segment. More precisely, embodiments described herein provide flexible and efficient techniques for a first network node to control the LCM of an AI/ML model executed by a UE, e.g., that is connected to the first network node. In this manner, embodiments can mitigate and/or avoid improper UE behavior, excessive use of network resources, and reduction in capacity to serve other UEs, which otherwise can be due to spurious, incorrect, and/or undesirable UE modifications and training of AI/ML models. When embodiments are utilized in this manner, OTT services will experience improved network performance, which increases the value of such OTT services to end users and service providers.

In an example scenario, factory status information may be collected and analyzed by host 1902. As another example, host 1902 may process audio and video data which may have been retrieved from a UE for use in creating maps. As another example, host 1902 may collect and analyze real-time data to assist in controlling vehicle congestion (e.g., controlling traffic lights). As another example, host 1902 may store surveillance video uploaded by a UE. As another example, host 1902 may store or control access to media content such as video, audio, VR or AR which it can broadcast, multicast or unicast to UEs. As other examples, host 1902 may be used for energy pricing, remote control of non-time critical electrical load to balance power generation needs, location services, presentation services (such as compiling diagrams etc. from data collected from remote devices), or any other function of collecting, retrieving, storing, analyzing and/or transmitting data.

In some examples, a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve. There may further be an optional network functionality for reconfiguring OTT connection 1950 between host 1902 and UE 1906, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring the OTT connection may be implemented in software and hardware of host 1902 and/or UE 1906. In some embodiments, sensors (not shown) may be deployed in or in association with other devices through which OTT connection 1950 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software may compute or estimate the monitored quantities. The reconfiguring of OTT connection 1950 may include message format, retransmission settings, preferred routing etc.; the reconfiguring need not directly alter the operation of network node 1904. Such procedures and functionalities may be known and practiced in the art. In certain embodiments, measurements may involve proprietary UE signaling that facilitates measurements of throughput, propagation times, latency and the like, by host 1902. The measurements may be implemented in that software causes messages to be transmitted, in particular empty or 'dummy' messages, using OTT connection 1950 while monitoring propagation times, errors, etc.
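As a minimal sketch of the measurement procedure described above, and not a definitive implementation, the following times empty 'dummy' probe messages at the sender; the function name and the echo stand-in for the OTT connection are assumptions for illustration:

```python
import time

def measure_round_trip(send, n=5):
    """Send n empty 'dummy' probe messages over the given connection and
    return the mean round-trip time in seconds, as observed by the sender."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        send(b"")                     # empty probe payload
        samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples)

# Stand-in for the OTT connection: an echo that simply returns its input.
mean_rtt = measure_round_trip(lambda payload: payload)
```

In a real deployment the callable would transmit over OTT connection 1950 and block until the reply arrives, so the samples would reflect propagation and processing delays along the path.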

The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements, and procedures that, although not explicitly shown or described herein, embody the principles of the disclosure and can be thus within the spirit and scope of the disclosure. Various embodiments can be used together with one another, as well as interchangeably therewith, as should be understood by those having ordinary skill in the art.

The term unit, as used herein, can have conventional meaning in the field of electronics, electrical devices, and/or electronic devices and can include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid-state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those that are described herein.

Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include Digital Signal Processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as Read Only Memory (ROM), Random Access Memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.

As described herein, device and/or apparatus can be represented by a semiconductor chip, a chipset, or a (hardware) module comprising such chip or chipset; this, however, does not exclude the possibility that a functionality of a device or apparatus, instead of being hardware implemented, be implemented as a software module such as a computer program or a computer program product comprising executable software code portions for execution or being run on a processor. Furthermore, functionality of a device or apparatus can be implemented by any combination of hardware and software. A device or apparatus can also be regarded as an assembly of multiple devices and/or apparatuses, whether functionally in cooperation with or independently of each other. Moreover, devices and apparatuses can be implemented in a distributed fashion throughout a system, so long as the functionality of the device or apparatus is preserved. Such and similar principles are considered as known to a skilled person.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

In addition, certain terms used in the present disclosure, including the specification and drawings, can be used synonymously in certain instances (e.g., "data” and "information”). It should be understood that, although these terms (and/or other terms that can be synonymous to one another) can be used synonymously herein, there can be instances when such words are not intended to be used synonymously.

Embodiments of the techniques and apparatus described herein also include, but are not limited to, the following enumerated examples:

A1. A method performed by a first network node for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), the method comprising: transmitting to a UE a first message that includes one of the following: at least one AI/ML model to be available and/or deployed in or at the UE; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and receiving from the UE a second message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.
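Purely as an illustrative sketch, and not the claimed implementation, the first/second message exchange of embodiments A1 and B1 can be modeled as tagged payloads in which exactly one alternative is populated; all names (FirstMessage, ue_handle, the example field values) are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FirstMessage:
    """Network node -> UE: carries exactly one of the three alternatives."""
    ai_ml_model: Optional[bytes] = None    # model to be available/deployed at the UE
    lcm_request: Optional[str] = None      # e.g., a request to re-train or disable
    lcm_information: Optional[str] = None  # e.g., conditions for using the model

    def is_valid(self):
        # "one of the following" means exactly one alternative is present.
        return sum(x is not None for x in
                   (self.ai_ml_model, self.lcm_request, self.lcm_information)) == 1


@dataclass
class SecondMessage:
    """UE -> network node: carries an LCM request or an LCM report."""
    lcm_request: Optional[str] = None
    lcm_report: Optional[dict] = None

    def is_valid(self):
        return sum(x is not None for x in
                   (self.lcm_request, self.lcm_report)) == 1


def ue_handle(msg: FirstMessage) -> SecondMessage:
    """Sketch of embodiment B2: reply with an LCM report when the first
    message carries an LCM request or LCM information; otherwise (a model
    was delivered) request LCM information, loosely following B4/B5."""
    if msg.lcm_request is not None or msg.lcm_information is not None:
        return SecondMessage(lcm_report={"model_id": "m1", "retrained": False})
    return SecondMessage(lcm_request="provide_lcm_information")


reply = ue_handle(FirstMessage(lcm_request="retrain"))
```

The mutually exclusive fields mirror the "one of the following" structure of the enumerated embodiments; a real encoding would instead be defined by the applicable signaling protocol.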

A2. The method of embodiment A1, wherein: the second message is received in response to transmitting the first message; the second message includes the LCM report; and the first message includes the LCM request or the LCM information.

A2a. The method of embodiment A2, wherein the LCM report in the second message includes one or more of the following about the LCM performed by the UE: identifier of at least one AI/ML model; indication of whether the UE has re-trained the identified at least one AI/ML model; indication of whether the UE will re-train the identified at least one AI/ML model; indication of whether the UE has modified the identified at least one AI/ML model; indication of whether the UE will modify the identified at least one AI/ML model; reason why the identified at least one AI/ML model will be modified; request to test, verify, validate, or evaluate the identified at least one AI/ML model; information related to modifications performed by the UE on the identified at least one AI/ML model; information related to techniques the UE used for re-training the identified at least one AI/ML model; information related to testing, validation, or evaluation of the identified at least one AI/ML model that the UE has re-trained or modified; and one or more conditions or events that triggered the UE to re-train, modify, stop using, or disable the identified at least one AI/ML model.

A3. The method of any of embodiments A1-A2a, further comprising receiving from a second network node a fourth message including one of the following: at least one AI/ML model to be available and/or deployed in or at the UE; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and wherein the first message is transmitted in response to receiving the fourth message.

A4. The method of embodiment A3, wherein: the first message is transmitted in response to receiving the fourth message; and the first message includes at least part of the information received in the fourth message.

A5. The method of any of embodiments A3-A4, further comprising transmitting to the second network node a third message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

A6. The method of embodiment A5, wherein: the third message is transmitted in response to receiving the second message; and the third message includes at least part of the information received in the second message.

A7. The method of embodiment A5, wherein: the fourth message is received in response to transmitting the third message; the third message includes the LCM request; and the fourth message includes the LCM information.

A8. The method of embodiment A7, further comprising transmitting to the second network node a further third message that includes an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE, wherein the further third message is transmitted in response to receiving the second message from the UE.

A9. The method of embodiment A1, wherein: the first message is transmitted in response to receiving the second message; the second message includes the LCM request; and the first message includes the LCM information or the at least one AI/ML model.

A10. The method of embodiment A9, further comprising transmitting to the UE a further first message including the at least one AI/ML model, wherein: the second message is received in response to transmitting the further first message, and the first message includes the LCM information associated with the at least one AI/ML model.

A11. The method of any of embodiments A1-A10, wherein one of the following applies: the first and second network nodes are different network nodes in a RAN; the first and second network nodes are different units or functions of one network node in a RAN; the first and second network nodes are in different RANs; or one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a core network (CN) node or function, a service management and orchestration (SMO) function, or part of an operations/administration/maintenance (OAM) system.

B1. A method performed by a user equipment (UE), operating in a radio access network (RAN), for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models, the method comprising: receiving from a first network node a first message that includes one of the following: at least one AI/ML model to be available and/or deployed in or at the UE; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and transmitting to the first network node a second message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

B2. The method of embodiment B1, wherein: the second message is transmitted in response to receiving the first message; the second message includes the LCM report; and the first message includes the LCM request or the LCM information.

B3. The method of embodiment B2, further comprising performing LCM of the at least one AI/ML model in accordance with the LCM request or the LCM information included in the first message, wherein the LCM report includes one or more of the following about the LCM performed by the UE: identifier of at least one AI/ML model; indication of whether the UE has re-trained the identified at least one AI/ML model; indication of whether the UE will re-train the identified at least one AI/ML model; indication of whether the UE has modified the identified at least one AI/ML model; indication of whether the UE will modify the identified at least one AI/ML model; reason why the identified at least one AI/ML model will be modified; request to test, verify, validate, or evaluate the identified at least one AI/ML model; information related to modifications performed by the UE on the identified at least one AI/ML model; information related to techniques the UE used for re-training the identified at least one AI/ML model; information related to testing, validation, or evaluation of the identified at least one AI/ML model that the UE has re-trained or modified; and one or more conditions or events that triggered the UE to re-train, modify, stop using, or disable the identified at least one AI/ML model.

B4. The method of embodiment B1, wherein: the first message is received in response to transmitting the second message; the second message includes the LCM request; and the first message includes the LCM information or the at least one AI/ML model.

B5. The method of embodiment B4, further comprising receiving from the first network node a further first message including the at least one AI/ML model, wherein: the second message is transmitted in response to receiving the further first message, and the first message includes the LCM information associated with the at least one AI/ML model.

B6. The method of any of embodiments B1-B5, further comprising applying the at least one AI/ML model for one or more of the following UE operations in the RAN: channel state information (CSI) estimation and/or compression; beam management; positioning; link adaptation; estimating UE and/or network energy saving for a UE configuration; signal quality estimation; and UE traffic estimation.

C1. A method performed by a second network node for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), the method comprising: transmitting to a first network node a fourth message that includes one of the following: at least one AI/ML model to be available and/or deployed in or at a UE connected to the first network node; an LCM request associated with at least one AI/ML model available and/or deployed in or at the UE; or LCM information associated with at least one AI/ML model available and/or deployed in or at the UE; and receiving from the first network node a third message that includes one of the following: an LCM request associated with at least an AI/ML model available and/or deployed in or at the UE; or an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE.

C2. The method of embodiment C1, wherein: the third message is received in response to transmitting the fourth message; the third message includes the LCM report; and the fourth message includes the LCM information or the LCM request.

C3. The method of embodiment C2, wherein the LCM report in the third message includes one or more of the following about the LCM performed by the UE: identifier of at least one AI/ML model; indication of whether the UE has re-trained the identified at least one AI/ML model; indication of whether the UE will re-train the identified at least one AI/ML model; indication of whether the UE has modified the identified at least one AI/ML model; indication of whether the UE will modify the identified at least one AI/ML model; reason why the identified at least one AI/ML model will be modified; request to test, verify, validate, or evaluate the identified at least one AI/ML model; information related to modifications performed by the UE on the identified at least one AI/ML model; information related to techniques the UE used for re-training the identified at least one AI/ML model; information related to testing, validation, or evaluation of the identified at least one AI/ML model that the UE has re-trained or modified; and one or more conditions or events that triggered the UE to re-train, modify, stop using, or disable the identified at least one AI/ML model.

C4. The method of embodiment C1, wherein: the fourth message is transmitted in response to receiving the third message; the third message includes the LCM request; and the fourth message includes the LCM information.

C5. The method of embodiment C4, further comprising receiving from the first network node a further third message that includes an LCM report associated with at least one AI/ML model available and/or deployed in or at the UE, wherein the further third message is received in response to transmitting the fourth message.

C6. The method of any of embodiments C1-C5, wherein one of the following applies: the first and second network nodes are different network nodes in a RAN; the first and second network nodes are different units or functions of one network node in a RAN; the first and second network nodes are in different RANs; or one of the first and second network nodes is a RAN node and the other of the first and second network nodes is one of the following: a core network (CN) node or function, a service management and orchestration (SMO) function, or part of an operations/administration/ maintenance (OAM) system.

D1. A first network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), the first network node comprising: communication interface circuitry configured to communicate with UEs and with at least a second network node; and processing circuitry operatively coupled to the communication interface circuitry, whereby the processing circuitry and the communication interface circuitry are configured to perform operations corresponding to any of the methods of embodiments A1-A11.

D2. A first network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), the first network node being configured to perform operations corresponding to any of the methods of embodiments A1-A11.

D3. A non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry associated with a first network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), configure the first network node to perform operations corresponding to any of the methods of embodiments A1-A11.

D4. A computer program product comprising computer-executable instructions that, when executed by processing circuitry associated with a first network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), configure the first network node to perform operations corresponding to any of the methods of embodiments A1-A11.

E1. A user equipment (UE) configured to operate in a radio access network (RAN) and for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models, the UE comprising: communication interface circuitry configured to communicate with at least a first network node; and processing circuitry operatively coupled to the communication interface circuitry, whereby the processing circuitry and the communication interface circuitry are configured to perform operations corresponding to any of the methods of embodiments B1-B6.

E2. A user equipment (UE) configured to operate in a radio access network (RAN) and for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models, the UE being further configured to perform operations corresponding to any of the methods of embodiments B1-B6.

E3. A non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry associated with a user equipment (UE) configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models, configure the UE to perform operations corresponding to any of the methods of embodiments B1-B6.

E4. A computer program product comprising computer-executable instructions that, when executed by processing circuitry associated with a user equipment (UE) configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models, configure the UE to perform operations corresponding to any of the methods of embodiments B1-B6.

F1. A second network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), the second network node comprising:
communication interface circuitry configured to communicate with UEs and with at least a first network node; and
processing circuitry operatively coupled to the communication interface circuitry, whereby the processing circuitry and the communication interface circuitry are configured to perform operations corresponding to any of the methods of embodiments C1-C6.

F2. A second network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), the second network node being configured to perform operations corresponding to any of the methods of embodiments C1-C6.

F3. A non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry associated with a second network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), configure the second network node to perform operations corresponding to any of the methods of embodiments C1-C6.

F4. A computer program product comprising computer-executable instructions that, when executed by processing circuitry associated with a second network node configured for life cycle management (LCM) of artificial intelligence/machine learning (AI/ML) models in user equipment (UEs) operating in a radio access network (RAN), configure the second network node to perform operations corresponding to any of the methods of embodiments C1-C6.