Title:
CONVERGED CHARGING FOR EDGE ENABLING RESOURCE USAGE AND APPLICATION CONTEXT TRANSFER
Document Type and Number:
WIPO Patent Application WO/2022/174073
Kind Code:
A1
Abstract:
Various embodiments herein are directed to solutions for converged charging for edge enabling infrastructure resource usage, application context transfer, and aggregated fifth-generation system (5GS) usage. Other embodiments may be disclosed or claimed.

Inventors:
YAO YIZHI (US)
CHOU JOEY (US)
Application Number:
PCT/US2022/016174
Publication Date:
August 18, 2022
Filing Date:
February 11, 2022
Assignee:
INTEL CORP (US)
International Classes:
H04L12/14; H04L43/06; H04L67/289; H04L67/63
Foreign References:
EP3245766B1, 2020-08-26
US20180351824A1, 2018-12-06
Other References:
"3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Charging management; Study on charging aspects of edge computing (Release 17)", 3GPP TR 28.815, V0.4.0, 10 February 2021, pages 1-29, XP051975360
"3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Architecture for enabling Edge Applications (Release 17)", 3GPP TS 23.558, V1.3.0, 4 February 2021, pages 1-134, XP051975338
SAMSUNG: "pCR 28.814 Use Case of Edge Performance Assurance", 3GPP Draft S5-205037, SA WG5 e-meeting, 12-21 October 2020, 2 October 2020, XP051938806
Attorney, Agent or Firm:
STARKOVICH, Alex D. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. An apparatus comprising: memory to store virtualized resource (VR) usage measurement report information; and processing circuitry, coupled with the memory, to: retrieve the VR usage measurement report information from the memory, the VR usage measurement report information associated with a virtualized network function (VNF) or VNF component (VNFC) instance for an edge application server (EAS); and generate performance data associated with EAS resource usage based on the VR usage measurement report information.

2. The apparatus of claim 1, wherein the VR usage measurement report information is received from a network functions virtualization (NFV) management and orchestration (MANO) system.

3. The apparatus of claim 1, wherein the performance data is associated with usage of an edge-enabling infrastructure resource supporting the EAS.

4. The apparatus of claim 3, wherein the performance data associated with usage of the edge-enabling infrastructure resource includes an indication of: a data volume transferred for the EAS, a virtual CPU usage of the EAS, a virtual memory usage of the EAS, a virtual disk usage of the EAS, or a virtual storage of the EAS.

5. The apparatus of claim 3, wherein the processing circuitry is further to: determine that a quota associated with the edge-enabling infrastructure resource has been met; in response to the determination that the quota associated with the edge-enabling infrastructure resource has been met, send a request to a provisioning management services (MnS) producer to disable an operational state of the EAS; and receive, from the provisioning MnS producer, a response indicating a result of disabling the operational state of the EAS.

6. The apparatus of claim 1, wherein the processing circuitry is further to send a performance data report to a charging enablement function (CEF) that includes an indication of the generated performance data.

7. The apparatus of claim 6, wherein the performance data report is sent to the CEF via a notifyFileReady file data reporting notification in response to an indication from the CEF that file data reporting is to be used.

8. The apparatus of claim 6, wherein the performance data report is sent to the CEF via a reportStreamData operation in response to an indication from the CEF that streaming data reporting is to be used.

9. The apparatus of claim 1, wherein the processing circuitry is further to: generate charging data related to the performance data; send a charging data request to a charging function (CHF) that includes an indication of the charging data; and receive a charging data response from the CHF that indicates a result of the charging data request.

10. The apparatus of any of claims 1-9, wherein the apparatus comprises a performance assurance MnS producer supporting a charging trigger function (CTF).

11. One or more computer-readable media storing instructions that, when executed by one or more processors, cause a performance assurance management services (MnS) producer to: receive virtualized resource (VR) usage measurement report information from a network functions virtualization (NFV) management and orchestration (MANO) system, wherein the VR usage measurement report information is associated with a virtualized network function (VNF) or VNF component (VNFC) instance for an edge application server (EAS); and generate performance data associated with EAS resource usage based on the VR usage measurement report information.

12. The one or more computer-readable media of claim 11, wherein the performance data is associated with usage of an edge-enabling infrastructure resource supporting the EAS.

13. The one or more computer-readable media of claim 12, wherein the performance data associated with usage of the edge-enabling infrastructure resource includes an indication of: a data volume transferred for the EAS, a virtual CPU usage of the EAS, a virtual memory usage of the EAS, a virtual disk usage of the EAS, or a virtual storage of the EAS.

14. The one or more computer-readable media of claim 12, wherein the media further stores instructions to: determine that a quota associated with the edge-enabling infrastructure resource has been met; in response to the determination that the quota associated with the edge-enabling infrastructure resource has been met, send a request to a provisioning management services (MnS) producer to disable an operational state of the EAS; and receive, from the provisioning MnS producer, a response indicating a result of disabling the operational state of the EAS.

15. The one or more computer-readable media of claim 11, wherein the media further stores instructions to send a performance data report to a charging enablement function (CEF) that includes an indication of the generated performance data.

16. The one or more computer-readable media of claim 15, wherein the performance data report is sent to the CEF via a notifyFileReady file data reporting notification in response to an indication from the CEF that file data reporting is to be used.

17. The one or more computer-readable media of claim 15, wherein the performance data report is sent to the CEF via a reportStreamData operation in response to an indication from the CEF that streaming data reporting is to be used.

18. The one or more computer-readable media of claim 11, wherein the media further stores instructions to: generate charging data related to the performance data; send a charging data request to a charging function (CHF) that includes an indication of the charging data; and receive a charging data response from the CHF that indicates a result of the charging data request.

19. One or more computer-readable media storing instructions that, when executed by one or more processors, cause an edge enabling server (EES) to: receive, from an edge enabler client (EEC), an application context relocation request; generate charging data based on the application context relocation request; send a charging data request that includes the charging data to a charging function (CHF); and receive a charging data response to the charging data request from the CHF.

20. The one or more computer-readable media of claim 19, wherein the media further stores instructions to send an application context relocation notify message that indicates the charging data request was accepted to an edge application server (EAS).

21. The one or more computer-readable media of claim 19, wherein the charging data request is a first charging data request, the charging data response is a first charging data response, and wherein the media further stores instructions to: send an application context relocation complete message to the EEC; send a second charging data request associated with the application context relocation complete message to the CHF; and receive a second charging data response to the second charging data request from the CHF that indicates an update and closing associated with the first charging data request.

22. One or more computer-readable media storing instructions that, when executed by one or more processors, cause a charging enablement function (CEF) to: receive, from a performance assurance management services (MnS) producer, a performance data report that includes an indication of generated performance data; generate charging data related to the received performance data; send a charging data request to a charging function (CHF) that includes an indication of the charging data; and receive a charging data response from the CHF that indicates a result of the charging data request.

Description:
CONVERGED CHARGING FOR EDGE ENABLING RESOURCE USAGE AND APPLICATION CONTEXT TRANSFER

CROSS REFERENCE TO RELATED APPLICATION

The present application claims priority to U.S. Provisional Patent Application No. 63/149,515, which was filed February 15, 2021.

FIELD

Various embodiments generally may relate to the field of wireless communications. For example, some embodiments may relate to solutions for converged charging for edge enabling infrastructure resource usage, application context transfer, and aggregated fifth-generation system (5GS) usage.

BACKGROUND

Edge computing has been supported in 5GS in the third-generation partnership project (3GPP) since Rel-15, and the edge application enabling architecture and solution, as well as the fifth-generation core (5GC) enhancements, are being studied and defined in Rel-17. Some basic mechanisms of possible solutions for charging for edge enabling infrastructure resources are captured in TR 28.815, v. 0.3.0, 2020-12-04; however, the detailed procedures for charging for edge enabling infrastructure resources are not yet specified. Embodiments of the present disclosure address these and other issues.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings.

Figure 1 illustrates an example of a 5GS architecture (non-roaming) in accordance with various embodiments.

Figure 2 illustrates an example of an architecture for enabling edge applications in accordance with various embodiments.

Figure 3 illustrates an example of charging for usage of edge enabling infrastructure resources (with an MnS producer supporting a CTF) in accordance with various embodiments.

Figure 4 illustrates an example of charging for usage of edge enabling infrastructure resources (with a CEF enabling charging for an MnS producer) in accordance with various embodiments.

Figure 5 illustrates an example of charging for EAS application context transfer in accordance with various embodiments.

Figure 6 illustrates an example of a 5G data connectivity converged charging architecture in accordance with various embodiments.

Figures 7A, 7B, and 7C illustrate an example of 5GS inter-provider charging for edge computing (aggregation of SMF reported usage) in accordance with various embodiments.

Figure 8 illustrates an example of a 5GS inter-provider charging with support of an MnS producer for aggregated usage in accordance with various embodiments.

Figure 9 illustrates an example of a CHF obtaining aggregated 5GS usage from an MnS producer in accordance with various embodiments.

Figure 10 schematically illustrates a wireless network in accordance with various embodiments.

Figure 11 schematically illustrates components of a wireless network in accordance with various embodiments.

Figure 12 is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein.

Figures 13, 14, and 15 depict examples of procedures for practicing the various embodiments discussed herein.

DETAILED DESCRIPTION

The following detailed description refers to the accompanying drawings. The same reference numbers may be used in different drawings to identify the same or similar elements. In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular structures, architectures, interfaces, techniques, etc. in order to provide a thorough understanding of the various aspects of various embodiments. However, it will be apparent to those skilled in the art having the benefit of the present disclosure that the various aspects of the various embodiments may be practiced in other examples that depart from these specific details. In certain instances, descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the various embodiments with unnecessary detail. For the purposes of the present document, the phrases “A or B” and “A/B” mean (A), (B), or (A and B).

As introduced above, edge computing has been supported in 5GS in the third-generation partnership project (3GPP) since Rel-15, and the edge application enabling architecture and solution, as well as the fifth-generation core (5GC) enhancements, are being studied and defined in Rel-17. Figure 1 illustrates an example of a 5GS architecture (non-roaming) specified in TS 23.501, v. 16.7.0, 2020-12-17. Figure 2 illustrates an example of an architecture for enabling edge applications as specified in TS 23.558, v. 1.2.0, 2020-12-07.

As also noted above, the detailed procedures for charging for edge enabling infrastructure resources are not yet specified. In the solution for EAS application context transfer charging described in clause 7.3.4.6 of TR 28.815, charging occurs only at step 1, when the S-EES receives the Application Context Relocation Request message from the EEC; it does not consider the success or failure of the procedure, which should also be taken into account for charging.

The use case (UC), potential requirements, and key issues on inter-provider charging for 5GS are documented in TR 28.815, which captures a solution based on aggregation of 5GS usage by the CHF and a solution based on aggregation by the Billing domain. However, both solutions have drawbacks. For example, aggregation of 5GS usage by the CHF requires the CHF to be responsible for charging for the whole PLMN. Additionally, aggregation of 5GS usage by the Billing domain does not support online charging.

Among other things, embodiments of the present disclosure are directed to solutions for converged charging for edge enabling infrastructure resource usage, application context transfer, and aggregated 5GS usage. The following solutions are provided for charging for edge enabling infrastructure resources and services.

Edge Enabling Infrastructure Resources Charging

Charging for edge enabling infrastructure resources can be supported with a CTF embedded in the MnS producer or with a CEF enabling charging for the MnS producer. In some embodiments, such solutions are only applicable when the EAS is implemented as a VNF. An example of a solution with a CTF embedded in the MnS producer is illustrated in Figure 3 and described below.

Referring now to the example in Figure 3, this solution uses performance data to report the usage of edge enabling infrastructure resources. The steps of the process in Figure 3 are as follows:

1) VR measurement report for VNF/VNFC: performance assurance MnS producer receives one or more VR usage measurement reports for VNF/VNFC instances related to the EAS from an ETSI NFV MANO system (see ETSI GS NFV-IFA 027).

2) Generate performance data for resource usage for EAS: performance assurance MnS producer generates performance data (measurements or KPI) about usage of edge enabling infrastructure resource supporting the EAS based on the VR usage measurements received from ETSI NFV MANO system. The performance data about edge enabling infrastructure resource could be related to data volume transferred for the EAS, virtual CPU usage of the EAS, virtual memory usage of the EAS, virtual disk usage of the EAS, or the virtual storage of the EAS.

2ch-a) Charging Data Request [Event]: performance assurance MnS Producer generates charging data related to the collected performance data and sends the charging data request for the CHF to process the related charging data for CDR generation purpose.

2ch-b) Create CDR: the CHF stores received information, and creates a CDR related to the event.

2ch-c) Charging Data Response [Event]: The CHF informs the performance assurance MnS Producer on the result of the request.

3a) Request to disable the operational state of the EAS: if the quota of the edge enabling infrastructure resource for an EAS is used up, the performance assurance MnS producer requests the provisioning MnS producer to disable the operational state of the EAS (by invoking modifyMOIAttributes operation).

3b) Response to disable the operational state of the EAS: the provisioning MnS producer provides the result of disabling operational state of the EAS to the performance assurance MnS producer.

4a) Request to stop VNF/VNFC instances of the EAS: the provisioning MnS producer requests ETSI NFV MANO system to stop VNF/VNFC instances of EAS (by invoking OperateVnfRequest operation, see ETSI GS NFV-IFA 008 [y]).

4b) Response to stop VNF/VNFC instances of the EAS: ETSI NFV MANO system provides the result of stopping VNF/VNFC instances of the EAS to the provisioning MnS producer.
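
For illustration only, the following Python sketch summarizes the Figure 3 flow described above. It is not part of the claimed subject matter; the stub client classes, method names, and field names are hypothetical placeholders rather than interfaces defined by 3GPP, ETSI, or this disclosure.

    # Hypothetical sketch of the Figure 3 flow: a performance assurance MnS
    # producer with an embedded CTF. Stub CHF / provisioning MnS clients stand
    # in for the real interfaces, which this sketch does not attempt to model.
    from dataclasses import dataclass


    @dataclass
    class VrUsageReport:
        """One VR usage measurement report for a VNF/VNFC instance of an EAS."""
        eas_id: str
        data_volume_mb: float
        vcpu_usage: float
        vmemory_mb: float
        vdisk_mb: float


    class StubChf:
        def charging_data_request(self, event):
            # Steps 2ch-b/2ch-c: the CHF would store the data, create a CDR for
            # the event, and return the result to the MnS producer.
            return {"result": "SUCCESS"}


    class StubProvisioningMns:
        def modify_moi_attributes(self, moi, attributes):
            # Steps 4a/4b: the provisioning MnS producer would in turn ask the
            # ETSI NFV MANO system to stop the VNF/VNFC instances of the EAS.
            return {"result": "SUCCESS"}


    class PerformanceAssuranceMnsProducer:
        def __init__(self, chf, provisioning_mns, quota_mb):
            self.chf = chf
            self.provisioning_mns = provisioning_mns
            self.quota_mb = quota_mb          # per-EAS infrastructure resource quota
            self.usage_mb = {}                # accumulated data volume per EAS

        def on_vr_usage_report(self, report):
            # Step 2: generate performance data for EAS resource usage.
            perf_data = {
                "easId": report.eas_id,
                "dataVolumeMb": report.data_volume_mb,
                "vCpuUsage": report.vcpu_usage,
                "vMemoryMb": report.vmemory_mb,
                "vDiskMb": report.vdisk_mb,
            }
            # Steps 2ch-a/2ch-c: event-based Charging Data Request to the CHF.
            result = self.chf.charging_data_request(event=perf_data)
            # Step 3a: if the quota is used up, request that the provisioning
            # MnS producer disable the operational state of the EAS.
            total = self.usage_mb.get(report.eas_id, 0.0) + report.data_volume_mb
            self.usage_mb[report.eas_id] = total
            if total >= self.quota_mb:
                self.provisioning_mns.modify_moi_attributes(
                    moi=report.eas_id,
                    attributes={"operationalState": "DISABLED"})
            return result


    producer = PerformanceAssuranceMnsProducer(StubChf(), StubProvisioningMns(),
                                               quota_mb=1000.0)
    producer.on_vr_usage_report(VrUsageReport("eas-1", 250.0, 0.4, 512.0, 20.0))

The quota check here is deliberately simplified to accumulated data volume; in practice any of the listed resource measurements could drive the trigger.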

An example of a process providing a solution with a CEF enabling charging for an MnS producer is illustrated in Figure 4. In this example, the solution uses performance data to report the usage of edge enabling infrastructure resources. The CEF needs to consume the MnS to create the measurement job and subscribe to notifications for the performance data file reporting when the file data reporting mechanism is chosen (see TS 28.550 and TS 28.532).

1) VR measurement report for VNF/VNFC: performance assurance MnS producer receives one or more VR usage measurement reports for VNF/VNFC instances related to the EAS from an ETSI NFV MANO system.

2) Generate performance data for resource usage for EAS: performance assurance MnS producer generates the performance data (measurements or KPI) about usage of edge enabling infrastructure resource supporting the EAS based on the VR usage measurements received from ETSI NFV MANO system, and reports the performance data to the CEF. The performance data about edge enabling infrastructure resource could be related to data volume transferred for the EAS, virtual CPU usage of the EAS, virtual memory usage of the EAS, virtual disk usage of the EAS, or the virtual storage of the EAS.

3) Performance data report: the performance assurance MnS producer reports the performance data to the CEF according to the reporting method selected by the CEF for the measurement job. The performance data could be reported by a notifyFileReady notification if the file data reporting method is selected, or by the reportStreamData operation following the successful streaming connection establishment with the CEF if the streaming data reporting method is selected.

3ch-a) Charging Data Request [Event]: The CEF generates charging data related to the collected performance data and sends the charging data request for the CHF to process the related charging data for CDR generation purpose.

3ch-b) Create CDR: the CHF stores received information, and creates a CDR related to the event.

3ch-c) Charging Data Response [Event]: The CHF informs the CEF on the result of the request.

4a) Request to disable the operational state of the EAS: if the quota of the edge enabling infrastructure resource for an EAS is used up, the CEF requests the provisioning MnS producer to disable the operational state of the EAS (by invoking modifyMOIAttributes operation).

4b) Response to disable the operational state of the EAS: the provisioning MnS producer provides the result of disabling operational state of the EAS to the CEF.

5a) Request to stop VNF/VNFC instances of the EAS: the provisioning MnS producer requests ETSI NFV MANO system to stop VNF/VNFC instances of EAS (e.g., by invoking an OperateVnfRequest operation).

5b) Response to stop VNF/VNFC instances of the EAS: ETSI NFV MANO system provides the result of stopping VNF/VNFC instances of the EAS to the provisioning MnS producer.

EAS Application Context Transfer Charging

An example of a process providing a solution for EAS application context transfer charging is illustrated in Figure 5. The steps of the process in this example are as follows:

1. The EEC sends the Application Context Relocation request to the S-EES.

1ch-a) Charging Data Request: The S-EES generates charging data related to the Application Context Relocation request and sends the charging data request to the CHF for request granting and CDR opening purpose.

1ch-b) Create CDR: the CHF stores received information, checks the account balance and opens a CDR related to the Application Context Relocation procedure.

1ch-c) Charging Data Response: The CHF informs the S-EES on the result of the request.

2. If the request was granted, the S-EES sends Application Context Relocation Notify to the S-EAS.

3. The S-EAS sends Application Context Relocation Complete to the S-EES.

4. The S-EES sends Application Context Relocation Complete to the EEC.

4ch-a) Charging Data Request [Event]: The S-EES generates charging data related to the Application Context Relocation Complete and sends the charging data request to the CHF for CDR update and closing purpose.

4ch-b) Update and close CDR: the CHF stores received information, updates and closes the CDR related to the Application Context Relocation procedure.
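
As a rough illustration of how charging can reflect both the start and the completion of the relocation (addressing the limitation noted in the background), the following Python sketch shows the S-EES side of the Figure 5 flow under stated assumptions: the classes, method names, and field names are hypothetical placeholders, not interfaces defined by 3GPP or this disclosure.

    # Hypothetical sketch of the Figure 5 flow at the source EES (S-EES): a CDR
    # is opened when the relocation request arrives and updated/closed only when
    # the relocation completes, so failed procedures are not charged as complete.
    class StubChf:
        def charging_data_request(self, kind, data):
            return {"result": "SUCCESS"}


    class SourceEes:
        def __init__(self, chf):
            self.chf = chf   # assumed CHF client exposing charging_data_request()

        def on_relocation_request(self, eec_id, eas_id):
            # Steps 1ch-a/1ch-c: request granting and CDR opening.
            resp = self.chf.charging_data_request(
                kind="Initial",
                data={"eecId": eec_id, "easId": eas_id,
                      "event": "ApplicationContextRelocationRequest"})
            # Step 2 (only if granted): send Application Context Relocation
            # Notify to the S-EAS; omitted in this sketch.
            return resp.get("result") == "SUCCESS"

        def on_relocation_complete(self, eec_id, eas_id):
            # Steps 4ch-a/4ch-b: CDR update and closing after the procedure ends.
            return self.chf.charging_data_request(
                kind="Termination",
                data={"eecId": eec_id, "easId": eas_id,
                      "event": "ApplicationContextRelocationComplete"})


    s_ees = SourceEes(StubChf())
    if s_ees.on_relocation_request("eec-1", "eas-1"):
        s_ees.on_relocation_complete("eec-1", "eas-1")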

Inter-provider-based Charging

Some embodiments may provide solutions to support inter-provider charging and the related requirements REQ-CH_EC_5GS_SP-01 and REQ-CH_EC_5GS_SP-02 and Key Issue #1b. These solutions are based on the 5G data connectivity converged charging architecture defined in TS 32.255, with an extension of the CHF to support inter-provider charging for 5GS supporting edge computing per edge application and/or per EDN. An example of this architecture is illustrated in Figure 6.

Solution #2a: Aggregation of SMF reported usage by CHF

In the example of the process to implement this solution (illustrated in Figures 7A, 7B, and 7C), the total data volume transferred for the edge application in the 5GS is obtained by the CHF from each individual UE usage reported by the SMF. The CHF of the MNO is configured with a quota and reporting threshold for 5GS inter-provider charging. The steps of the process in Figures 7A-7C are as follows:

1) UE starts using an edge application: Triggers are according to TS 32.255 and TS 32.290.

2) Charging Data Request [Initial, Quota Requested]: the SMF sends the request to the CHF to reserve a number of units, if determined in step 1.

3) Account, Rating, Reservation Control: the CHF rates the request and checks whether corresponding funds can be reserved on the edge application’s account balance. If the account has sufficient funds, the CHF performs the corresponding reservation.

4) Open CDR: based on policies, the CHF opens a CDR related to the service for edge application.

5) Charging Data Response [Initial, Quota Granted] : the CHF grants the reserved number of units to SMF.

6) Granted Units Supervision: The SMF monitors the consumption of the granted units.

7) UE edge application usage ongoing: the SMF continues to deliver the service.

8) Quota management Trigger: A Trigger associated to Quota management is met. Units determination is performed when applicable.

9) Charging Data Request [Update, Quota Requested]: the SMF sends the request to the CHF, to be granted more units for the service to continue, and also to report the used units.

10) Account, Rating, Reservation Control: same as step 3, with the option to also deduct the funds corresponding to the usage on the account balance.

11) Update CDR: based on policies, the CHF updates the CDR with charging data related to the service.

12) Charging Data Response [Update, Quota Granted]: The CHF grants quota to SMF for the service, with the reserved number of units.

13) Service delivery ongoing: the NF (CTF) continues to deliver the service.

14) UE stops using the edge application: the SMF is requested to end the service delivery and does this.

15) Charging Data Request [Termination]: the SMF sends the request to the CHF, for charging data related to the service termination with the final consumed units.

16) Account, Rating Control: the CHF performs the service termination process, which involves using the reported charging data to rate the usage and deduct the funds corresponding to the usage on the account balance.

17) Close CDR: based on policies, the CHF closes the CDR with charging data related to the service termination and the last reported units.

18) Charging Data Response [Termination]: The CHF informs the SMF on the result of the request.
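
For illustration only, the following Python sketch condenses the SMF-side quota handling of Figures 7A-7C: initial grant, granted-units supervision, an update at the quota-management trigger, and a final report at termination. The class, method names, and unit values are hypothetical placeholders, not interfaces or values defined by TS 32.255 or TS 32.290.

    # Hypothetical sketch of solution #2a quota handling at the SMF. The CHF
    # stub simply grants a fixed number of units; a real CHF would rate the
    # request against the edge application's account balance and manage the CDR.
    class StubChf:
        def charging_data_request(self, kind, app, used_units):
            granted = 100 if kind in ("Initial", "Update") else 0
            return {"result": "SUCCESS", "grantedUnits": granted}


    class SmfChargingSession:
        def __init__(self, chf, edge_app_id):
            self.chf = chf
            self.edge_app_id = edge_app_id
            self.granted_units = 0
            self.used_units = 0

        def start(self):
            # Steps 2-5: Charging Data Request/Response [Initial, Quota Requested/Granted].
            resp = self.chf.charging_data_request("Initial", self.edge_app_id, 0)
            self.granted_units = resp["grantedUnits"]

        def consume(self, units):
            # Steps 6-12: granted-units supervision; when the quota-management
            # trigger is met, report the used units and request a further grant.
            self.used_units += units
            if self.used_units >= self.granted_units:
                resp = self.chf.charging_data_request(
                    "Update", self.edge_app_id, self.used_units)
                self.granted_units += resp["grantedUnits"]

        def stop(self):
            # Steps 15-18: report the final consumed units so the CHF can rate
            # the usage, deduct funds, and close the CDR.
            return self.chf.charging_data_request(
                "Termination", self.edge_app_id, self.used_units)


    session = SmfChargingSession(StubChf(), edge_app_id="edge-app-1")
    session.start()
    session.consume(80)
    session.consume(40)      # crosses the granted quota, triggering an update
    session.stop()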

Solution #2b: Obtaining the aggregated usage from MnS producer

Figure 8 illustrates an example of 5GS inter-provider charging with support of an MnS producer for aggregated usage in accordance with some embodiments. In this solution, the total 5GS usage of the PLMN for an edge application or an EDN is obtained by the CHF from the MnS producer by consuming the MnS. The charging architecture therefore needs to be extended with support of the MnS producer.

This solution follows the same procedure as solution #2a for the interactions between the SMF and the CHF, with the exception that the CHF regularly obtains the aggregated 5GS usage for an edge application or an EDN from the MnS producer as described below, and uses the obtained total usage from the MnS producer for account, rating and reservation control in step 3 and step 10 of solution #2a.

Figure 9 illustrates an example of a process where a CHF obtains aggregated 5GS usage from an MnS producer. In this example:

1) Performance data is ready in MnS producer: The performance data of aggregated 5GS usage for an edge application or EDN is ready.

2) Performance data report: the MnS producer reports the performance data of aggregated 5GS usage to the CHF.
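
A minimal Python sketch of this idea follows, assuming a flat per-gigabyte rate and hypothetical method and field names: the CHF records the latest aggregate reported by the MnS producer per edge application or EDN and uses it when rating.

    # Hypothetical sketch of solution #2b on the CHF side: aggregated 5GS usage
    # reported by the MnS producer (Figure 9, step 2) feeds the account, rating
    # and reservation control of solution #2a. The flat rate is illustrative.
    class ChfAggregatedUsage:
        def __init__(self, rate_per_gb):
            self.rate_per_gb = rate_per_gb
            self.aggregated_gb = {}   # latest aggregate per edge application or EDN

        def on_performance_data_report(self, app_or_edn_id, total_gb):
            # Figure 9, step 2: the MnS producer reports aggregated 5GS usage.
            self.aggregated_gb[app_or_edn_id] = total_gb

        def rate(self, app_or_edn_id):
            # Used during account, rating and reservation control (solution #2a).
            return self.aggregated_gb.get(app_or_edn_id, 0.0) * self.rate_per_gb


    chf = ChfAggregatedUsage(rate_per_gb=0.05)
    chf.on_performance_data_report("edge-app-1", 1200.0)
    assert chf.rate("edge-app-1") == 60.0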

SYSTEMS AND IMPLEMENTATIONS

Figures 10-12 illustrate various systems, devices, and components that may implement aspects of disclosed embodiments.

Figure 10 illustrates a network 1000 in accordance with various embodiments. The network 1000 may operate in a manner consistent with 3GPP technical specifications for LTE or 5G/NR systems. However, the example embodiments are not limited in this regard and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like.

The network 1000 may include a UE 1002, which may include any mobile or non-mobile computing device designed to communicate with a RAN 1004 via an over-the-air connection. The UE 1002 may be communicatively coupled with the RAN 1004 by a Uu interface. The UE 1002 may be, but is not limited to, a smartphone, tablet computer, wearable computer device, desktop computer, laptop computer, in-vehicle infotainment, in-car entertainment device, instrument cluster, head-up display device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, M2M or D2D device, IoT device, etc.

In some embodiments, the network 1000 may include a plurality of UEs coupled directly with one another via a sidelink interface. The UEs may be M2M/D2D devices that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc.

In some embodiments, the UE 1002 may additionally communicate with an AP 1006 via an over-the-air connection. The AP 1006 may manage a WLAN connection, which may serve to offload some/all network traffic from the RAN 1004. The connection between the UE 1002 and the AP 1006 may be consistent with any IEEE 802.11 protocol, wherein the AP 1006 could be a wireless fidelity (Wi-Fi®) router. In some embodiments, the UE 1002, RAN 1004, and AP 1006 may utilize cellular-WLAN aggregation (for example, LWA/LWIP). Cellular-WLAN aggregation may involve the UE 1002 being configured by the RAN 1004 to utilize both cellular radio resources and WLAN resources.

The RAN 1004 may include one or more access nodes, for example, AN 1008. AN 1008 may terminate air-interface protocols for the UE 1002 by providing access stratum protocols including RRC, PDCP, RLC, MAC, and L1 protocols. In this manner, the AN 1008 may enable data/voice connectivity between CN 1020 and the UE 1002. In some embodiments, the AN 1008 may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network, which may be referred to as a CRAN or virtual baseband unit pool. The AN 1008 may be referred to as a BS, gNB, RAN node, eNB, ng-eNB, NodeB, RSU, TRxP, TRP, etc. The AN 1008 may be a macrocell base station or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells.

In embodiments in which the RAN 1004 includes a plurality of ANs, they may be coupled with one another via an X2 interface (if the RAN 1004 is an LTE RAN) or an Xn interface (if the RAN 1004 is a 5G RAN). The X2/Xn interfaces, which may be separated into control/user plane interfaces in some embodiments, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, etc.

The ANs of the RAN 1004 may each manage one or more cells, cell groups, component carriers, etc. to provide the UE 1002 with an air interface for network access. The UE 1002 may be simultaneously connected with a plurality of cells provided by the same or different ANs of the RAN 1004. For example, the UE 1002 and RAN 1004 may use carrier aggregation to allow the UE 1002 to connect with a plurality of component carriers, each corresponding to a Pcell or Scell. In dual connectivity scenarios, a first AN may be a master node that provides an MCG and a second AN may be secondary node that provides an SCG. The first/second ANs may be any combination of eNB, gNB, ng-eNB, etc.

The RAN 1004 may provide the air interface over a licensed spectrum or an unlicensed spectrum. To operate in the unlicensed spectrum, the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/Scells. Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.

In V2X scenarios the UE 1002 or AN 1008 may be or act as a RSU, which may refer to any transportation infrastructure entity used for V2X communications. An RSU may be implemented in or by a suitable AN or a stationary (or relatively stationary) UE. An RSU implemented in or by: a UE may be referred to as a “UE-type RSU”; an eNB may be referred to as an “eNB-type RSU”; a gNB may be referred to as a “gNB-type RSU”; and the like. In one example, an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs. The RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic. The RSU may provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may provide other cellular/WLAN communications services. The components of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network.

In some embodiments, the RAN 1004 may be an LTE RAN 1010 with eNBs, for example, eNB 1012. The LTE RAN 1010 may provide an LTE air interface with the following characteristics: SCS of 15 kHz; CP-OFDM waveform for DL and SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc. The LTE air interface may rely on CSI-RS for CSI acquisition and beam management; PDSCH/PDCCH DMRS for PDSCH/PDCCH demodulation; and CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE. The LTE air interface may operate on sub-6 GHz bands.

In some embodiments, the RAN 1004 may be an NG-RAN 1014 with gNBs, for example, gNB 1016, or ng-eNBs, for example, ng-eNB 1018. The gNB 1016 may connect with 5G-enabled UEs using a 5G NR interface. The gNB 1016 may connect with a 5G core through an NG interface, which may include an N2 interface or an N3 interface. The ng-eNB 1018 may also connect with the 5G core through an NG interface, but may connect with a UE via an LTE air interface. The gNB 1016 and the ng-eNB 1018 may connect with each other over an Xn interface.

In some embodiments, the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 1014 and a UPF 1048 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 1014 and an AMF 1044 (e.g., N2 interface).

The NG-RAN 1014 may provide a 5G-NR air interface with the following characteristics: variable SCS, CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data. The 5G-NR air interface may rely on CSI-RS, PDSCH/PDCCH DMRS similar to the LTE air interface. The 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking for PDSCH; and tracking reference signal for time tracking. The 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz. The 5G-NR air interface may include an SSB that is an area of a downlink resource grid that includes PSS/SSS/PBCH.

In some embodiments, the 5G-NR air interface may utilize BWPs for various purposes. For example, BWP can be used for dynamic adaptation of the SCS. For example, the UE 1002 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 1002, the SCS of the transmission is changed as well. Another use case example of BWP is related to power saving. In particular, multiple BWPs can be configured for the UE 1002 with different amounts of frequency resources (for example, PRBs) to support data transmission under different traffic loading scenarios. A BWP containing a smaller number of PRBs can be used for data transmission with small traffic load while allowing power saving at the UE 1002 and in some cases at the gNB 1016. A BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
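
As a purely illustrative sketch of the power-saving use of BWPs described above, a load-based selection might look like the following Python example; the BWP definitions and thresholds are hypothetical and are not values from any 3GPP specification.

    # Hypothetical example: pick the narrowest configured BWP that can serve the
    # current traffic load, so light traffic uses few PRBs and saves UE power.
    from dataclasses import dataclass


    @dataclass
    class Bwp:
        bwp_id: int
        num_prbs: int      # narrower BWPs allow power saving at the UE
        scs_khz: int       # each BWP configuration may use a different SCS


    CONFIGURED_BWPS = [Bwp(0, 24, 15), Bwp(1, 106, 30), Bwp(2, 273, 30)]


    def select_bwp(buffered_bytes: int) -> Bwp:
        """Return the narrowest BWP judged sufficient for the buffered traffic."""
        if buffered_bytes < 10_000:
            return CONFIGURED_BWPS[0]      # small traffic load
        if buffered_bytes < 1_000_000:
            return CONFIGURED_BWPS[1]
        return CONFIGURED_BWPS[2]          # higher traffic load


    assert select_bwp(2_000).bwp_id == 0
    assert select_bwp(5_000_000).num_prbs == 273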

The RAN 1004 is communicatively coupled to CN 1020 that includes network elements to provide various functions to support data and telecommunications services to customers/subscribers (for example, users of UE 1002). The components of the CN 1020 may be implemented in one physical node or separate physical nodes. In some embodiments, NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 1020 onto physical compute/storage resources in servers, switches, etc. A logical instantiation of the CN 1020 may be referred to as a network slice, and a logical instantiation of a portion of the CN 1020 may be referred to as a network sub-slice.

In some embodiments, the CN 1020 may be an LTE CN 1022, which may also be referred to as an EPC. The LTE CN 1022 may include MME 1024, SGW 1026, SGSN 1028, HSS 1030, PGW 1032, and PCRF 1034 coupled with one another over interfaces (or “reference points”) as shown. Functions of the elements of the LTE CN 1022 may be briefly introduced as follows.

The MME 1024 may implement mobility management functions to track a current location of the UE 1002 to facilitate paging, bearer activation/deactivation, handovers, gateway selection, authentication, etc.

The SGW 1026 may terminate an S1 interface toward the RAN and route data packets between the RAN and the LTE CN 1022. The SGW 1026 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.

The SGSN 1028 may track a location of the UE 1002 and perform security functions and access control. In addition, the SGSN 1028 may perform inter-EPC node signaling for mobility between different RAT networks; PDN and S-GW selection as specified by MME 1024; MME selection for handovers; etc. The S3 reference point between the MME 1024 and the SGSN 1028 may enable user and bearer information exchange for inter-3GPP access network mobility in idle/active states.

The HSS 1030 may include a database for network users, including subscription-related information to support the network entities’ handling of communication sessions. The HSS 1030 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc. An S6a reference point between the HSS 1030 and the MME 1024 may enable transfer of subscription and authentication data for authenticating/authorizing user access to the LTE CN 1022.

The PGW 1032 may terminate an SGi interface toward a data network (DN) 1036 that may include an application/content server 1038. The PGW 1032 may route data packets between the LTE CN 1022 and the data network 1036. The PGW 1032 may be coupled with the SGW 1026 by an S5 reference point to facilitate user plane tunneling and tunnel management. The PGW 1032 may further include a node for policy enforcement and charging data collection (for example, PCEF). Additionally, the SGi reference point between the PGW 1032 and the data network 1036 may be an operator external public PDN, a private PDN, or an intra-operator packet data network, for example, for provision of IMS services. The PGW 1032 may be coupled with a PCRF 1034 via a Gx reference point.

The PCRF 1034 is the policy and charging control element of the LTE CN 1022. The PCRF 1034 may be communicatively coupled to the app/content server 1038 to determine appropriate QoS and charging parameters for service flows. The PCRF 1034 may provision associated rules into a PCEF (via Gx reference point) with appropriate TFT and QCI.

In some embodiments, the CN 1020 may be a 5GC 1040. The 5GC 1040 may include an AUSF 1042, AMF 1044, SMF 1046, UPF 1048, NSSF 1050, NEF 1052, NRF 1054, PCF 1056, UDM 1058, and AF 1060 coupled with one another over interfaces (or “reference points”) as shown. Functions of the elements of the 5GC 1040 may be briefly introduced as follows.

The AUSF 1042 may store data for authentication of UE 1002 and handle authentication- related functionality. The AUSF 1042 may facilitate a common authentication framework for various access types. In addition to communicating with other elements of the 5GC 1040 over reference points as shown, the AUSF 1042 may exhibit an Nausf service-based interface.

The AMF 1044 may allow other functions of the 5GC 1040 to communicate with the UE 1002 and the RAN 1004 and to subscribe to notifications about mobility events with respect to the UE 1002. The AMF 1044 may be responsible for registration management (for example, for registering UE 1002), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization. The AMF 1044 may provide transport for SM messages between the UE 1002 and the SMF 1046, and act as a transparent proxy for routing SM messages. AMF 1044 may also provide transport for SMS messages between UE 1002 and an SMSF. AMF 1044 may interact with the AUSF 1042 and the UE 1002 to perform various security anchor and context management functions. Furthermore, AMF 1044 may be a termination point of a RAN CP interface, which may include or be an N2 reference point between the RAN 1004 and the AMF 1044; and the AMF 1044 may be a termination point of NAS (N1) signaling, and perform NAS ciphering and integrity protection. AMF 1044 may also support NAS signaling with the UE 1002 over an N3IWF interface.

The SMF 1046 may be responsible for SM (for example, session establishment, tunnel management between UPF 1048 and AN 1008); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 1048 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; downlink data notification; initiating AN specific SM information, sent via AMF 1044 over N2 to AN 1008; and determining SSC mode of a session. SM may refer to management of a PDU session, and a PDU session or “session” may refer to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 1002 and the data network 1036.

The UPF 1048 may act as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to data network 1036, and a branching point to support multi-homed PDU session. The UPF 1048 may also perform packet routing and forwarding, perform packet inspection, enforce the user plane part of policy rules, lawfully intercept packets (UP collection), perform traffic usage reporting, perform QoS handling for a user plane (e.g., packet filtering, gating, UL/DL rate enforcement), perform uplink traffic verification (e.g., SDF-to-QoS flow mapping), transport level packet marking in the uplink and downlink, and perform downlink packet buffering and downlink data notification triggering. UPF 1048 may include an uplink classifier to support routing traffic flows to a data network.

The NSSF 1050 may select a set of network slice instances serving the UE 1002. The NSSF 1050 may also determine allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed. The NSSF 1050 may also determine the AMF set to be used to serve the UE 1002, or a list of candidate AMFs based on a suitable configuration and possibly by querying the NRF 1054. The selection of a set of network slice instances for the UE 1002 may be triggered by the AMF 1044 with which the UE 1002 is registered by interacting with the NSSF 1050, which may lead to a change of AMF. The NSSF 1050 may interact with the AMF 1044 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown). Additionally, the NSSF 1050 may exhibit an Nnssf service-based interface.

The NEF 1052 may securely expose services and capabilities provided by 3GPP network functions for third party, internal exposure/re-exposure, AFs (e.g., AF 1060), edge computing or fog computing systems, etc. In such embodiments, the NEF 1052 may authenticate, authorize, or throttle the AFs. NEF 1052 may also translate information exchanged with the AF 1060 and information exchanged with internal network functions. For example, the NEF 1052 may translate between an AF-Service-Identifier and internal 5GC information. NEF 1052 may also receive information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 1052 as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF 1052 to other NFs and AFs, or used for other purposes such as analytics. Additionally, the NEF 1052 may exhibit an Nnef service-based interface.

The NRF 1054 may support service discovery functions, receive NF discovery requests from NF instances, and provide the information of the discovered NF instances to the NF instances. NRF 1054 also maintains information of available NF instances and their supported services. As used herein, the terms “instantiate,” “instantiation,” and the like may refer to the creation of an instance, and an “instance” may refer to a concrete occurrence of an object, which may occur, for example, during execution of program code. Additionally, the NRF 1054 may exhibit the Nnrf service-based interface.

The PCF 1056 may provide policy rules to control plane functions to enforce them, and may also support unified policy framework to govern network behavior. The PCF 1056 may also implement a front end to access subscription information relevant for policy decisions in a UDR of the UDM 1058. In addition to communicating with functions over reference points as shown, the PCF 1056 may exhibit an Npcf service-based interface.

The UDM 1058 may handle subscription-related information to support the network entities’ handling of communication sessions, and may store subscription data of UE 1002. For example, subscription data may be communicated via an N8 reference point between the UDM 1058 and the AMF 1044. The UDM 1058 may include two parts, an application front end and a UDR. The UDR may store subscription data and policy data for the UDM 1058 and the PCF 1056, and/or structured data for exposure and application data (including PFDs for application detection, application request information for multiple UEs 1002) for the NEF 1052. The Nudr service-based interface may be exhibited by the UDR to allow the UDM 1058, PCF 1056, and NEF 1052 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR. The UDM may include a UDM-FE, which is in charge of processing credentials, location management, subscription management and so on. Several different front ends may serve the same user in different transactions. The UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management. In addition to communicating with other NFs over reference points as shown, the UDM 1058 may exhibit the Nudm service-based interface.

The AF 1060 may provide application influence on traffic routing, provide access to NEF, and interact with the policy framework for policy control.

In some embodiments, the 5GC 1040 may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE 1002 is attached to the network. This may reduce latency and load on the network. To provide edge-computing implementations, the 5GC 1040 may select a UPF 1048 close to the UE 1002 and execute traffic steering from the UPF 1048 to data network 1036 via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF 1060. In this way, the AF 1060 may influence UPF (re)selection and traffic routing. Based on operator deployment, when AF 1060 is considered to be a trusted entity, the network operator may permit AF 1060 to interact directly with relevant NFs. Additionally, the AF 1060 may exhibit an Naf service-based interface.

The data network 1036 may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, application/content server 1038.

Figure 11 schematically illustrates a wireless network 1100 in accordance with various embodiments. The wireless network 1100 may include a UE 1102 in wireless communication with an AN 1104. The UE 1102 and AN 1104 may be similar to, and substantially interchangeable with, like-named components described elsewhere herein.

The UE 1102 may be communicatively coupled with the AN 1104 via connection 1106. The connection 1106 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6GHz frequencies.

The UE 1102 may include a host platform 1108 coupled with a modem platform 1110. The host platform 1108 may include application processing circuitry 1112, which may be coupled with protocol processing circuitry 1114 of the modem platform 1110. The application processing circuitry 1112 may run various applications for the UE 1102 that source/sink application data. The application processing circuitry 1112 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations may include transport (for example, UDP) and Internet (for example, IP) operations.

The protocol processing circuitry 1114 may implement one or more of layer operations to facilitate transmission or reception of data over the connection 1106. The layer operations implemented by the protocol processing circuitry 1114 may include, for example, MAC, RLC, PDCP, RRC and NAS operations.

The modem platform 1110 may further include digital baseband circuitry 1116 that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry 1114 in a network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ-ACK functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.

The modem platform 1110 may further include transmit circuitry 1118, receive circuitry 1120, RF circuitry 1122, and RF front end (RFFE) 1124, which may include or connect to one or more antenna panels 1126. Briefly, the transmit circuitry 1118 may include a digital-to-analog converter, mixer, intermediate frequency (IF) components, etc.; the receive circuitry 1120 may include an analog-to-digital converter, mixer, IF components, etc.; the RF circuitry 1122 may include a low-noise amplifier, a power amplifier, power tracking components, etc.; RFFE 1124 may include filters (for example, surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (for example, phase-array antenna components), etc. The selection and arrangement of the components of the transmit circuitry 1118, receive circuitry 1120, RF circuitry 1122, RFFE 1124, and antenna panels 1126 (referred to generically as “transmit/receive components”) may be specific to details of a specific implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, etc. In some embodiments, the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, etc.

In some embodiments, the protocol processing circuitry 1114 may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.

A UE reception may be established by and via the antenna panels 1126, RFFE 1124, RF circuitry 1122, receive circuitry 1120, digital baseband circuitry 1116, and protocol processing circuitry 1114. In some embodiments, the antenna panels 1126 may receive a transmission from the AN 1104 by receive-beamforming signals received by a plurality of antennas/antenna elements of the one or more antenna panels 1126.

A UE transmission may be established by and via the protocol processing circuitry 1114, digital baseband circuitry 1116, transmit circuitry 1118, RF circuitry 1122, RFFE 1124, and antenna panels 1126. In some embodiments, the transmit components of the UE 1102 may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels 1126.

Similar to the UE 1102, the AN 1104 may include a host platform 1128 coupled with a modem platform 1130. The host platform 1128 may include application processing circuitry 1132 coupled with protocol processing circuitry 1134 of the modem platform 1130. The modem platform may further include digital baseband circuitry 1136, transmit circuitry 1138, receive circuitry 1140, RF circuitry 1142, RFFE circuitry 1144, and antenna panels 1146. The components of the AN 1104 may be similar to and substantially interchangeable with like-named components of the UE 1102. In addition to performing data transmission/reception as described above, the components of the AN 1104 may perform various logical functions that include, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.

Figure 12 is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically, Figure 12 shows a diagrammatic representation of hardware resources 1200 including one or more processors (or processor cores) 1210, one or more memory/storage devices 1220, and one or more communication resources 1230, each of which may be communicatively coupled via a bus 1240 or other interface circuitry. For embodiments where node virtualization (e.g., NFV) is utilized, a hypervisor 1202 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 1200.

The processors 1210 may include, for example, a processor 1212 and a processor 1214. The processors 1210 may be, for example, a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a DSP such as a baseband processor, an ASIC, an FPGA, a radio- frequency integrated circuit (RFIC), another processor (including those discussed herein), or any suitable combination thereof.

The memory/storage devices 1220 may include main memory, disk storage, or any suitable combination thereof. The memory/storage devices 1220 may include, but are not limited to, any type of volatile, non-volatile, or semi-volatile memory such as dynamic random access memory (DRAM), static random access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, etc.

The communication resources 1230 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 1204 or one or more databases 1206 or other network elements via a network 1208. For example, the communication resources 1230 may include wired communication components (e.g., for coupling via USB, Ethernet, etc.), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, Wi-Fi® components, and other communication components.

Instructions 1250 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors 1210 to perform any one or more of the methodologies discussed herein. The instructions 1250 may reside, completely or partially, within at least one of the processors 1210 (e.g., within the processor’s cache memory), the memory/storage devices 1220, or any suitable combination thereof. Furthermore, any portion of the instructions 1250 may be transferred to the hardware resources 1200 from any combination of the peripheral devices 1204 or the databases 1206. Accordingly, the memory of processors 1210, the memory/storage devices 1220, the peripheral devices 1204, and the databases 1206 are examples of computer-readable and machine-readable media.

EXAMPLE PROCEDURES

In some embodiments, the electronic device(s), network(s), system(s), chip(s) or component(s), or portions or implementations thereof, of Figures 10-12, or some other figure herein, may be configured to perform one or more processes, techniques, or methods as described herein, or portions thereof. One such process is depicted in Figure 13. For example, process 1300 may include, at 1305, retrieving virtualized resource (VR) usage measurement report information from a memory, the VR usage measurement report information associated with a virtualized network function (VNF) or VNF component (VNFC) instance for an edge application server (EAS). The process further includes, at 1310, generating performance data associated with EAS resource usage based on the VR usage measurement report information.

Another such process is illustrated in Figure 14. In this example, the process 1400 includes, at 1405, receiving virtualized resource (VR) usage measurement report information from a network functions virtualization (NFV) management and orchestration (MANO) system, wherein the VR usage measurement report information is associated with a virtualized network function (VNF) or VNF component (VNFC) instance for an edge application server (EAS). The process further includes, at 1410, generating performance data associated with EAS resource usage based on the VR usage measurement report information.
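By way of illustration only, the following sketch outlines one possible realization of processes 1300 and 1400 under simplifying assumptions; the class and function names (e.g., VrUsageReport, generate_eas_performance_data) are hypothetical and are not defined by 3GPP or ETSI NFV specifications.

# Illustrative sketch only; data model and names are hypothetical.
from dataclasses import dataclass


@dataclass
class VrUsageReport:
    """VR usage measurement report for a VNF/VNFC instance hosting an EAS."""
    vnf_instance_id: str
    eas_id: str
    mean_vcpu_usage: float      # percent
    mean_vmemory_usage: float   # percent
    mean_vdisk_usage: float     # percent
    data_volume_bytes: int


def generate_eas_performance_data(reports: list[VrUsageReport]) -> dict:
    """Aggregate VR usage reports (e.g., received from an NFV MANO system)
    into per-EAS performance data, as in processes 1300/1400."""
    performance_data: dict[str, dict] = {}
    for report in reports:
        entry = performance_data.setdefault(report.eas_id, {
            "data_volume_bytes": 0,
            "vcpu_usage_samples": [],
            "vmemory_usage_samples": [],
            "vdisk_usage_samples": [],
        })
        entry["data_volume_bytes"] += report.data_volume_bytes
        entry["vcpu_usage_samples"].append(report.mean_vcpu_usage)
        entry["vmemory_usage_samples"].append(report.mean_vmemory_usage)
        entry["vdisk_usage_samples"].append(report.mean_vdisk_usage)
    # Reduce the collected samples to simple means for reporting toward a CEF/CHF.
    for entry in performance_data.values():
        for key in ("vcpu_usage_samples", "vmemory_usage_samples", "vdisk_usage_samples"):
            samples = entry.pop(key)
            entry[key.replace("_samples", "_mean")] = sum(samples) / len(samples)
    return performance_data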

Another such process is illustrated in Figure 15. In this example, the process 1500 includes, at 1505, receiving, from an edge enabler client (EEC), an application context relocation request. The process further includes, at 1510, generating charging data based on the application context relocation request. The process further includes, at 1515, sending a charging data request that includes the charging data to a charging function (CHF). The process further includes, at 1520, receiving a charging data response to the charging data request from the CHF.
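The following sketch similarly illustrates, under simplifying assumptions, the EES-side flow of process 1500; the ChargingDataRequest structure, the StubChf placeholder, and the field names are hypothetical and do not represent the actual converged charging service operations defined by 3GPP.

# Illustrative sketch only; transport, operation names, and fields are hypothetical.
import uuid
from dataclasses import dataclass, field


@dataclass
class ChargingDataRequest:
    session_id: str
    trigger: str                     # e.g. "ACR_REQUEST" or "ACR_COMPLETE"
    charging_data: dict = field(default_factory=dict)


class StubChf:
    """Stand-in charging function that accepts every request."""
    def handle(self, request: ChargingDataRequest) -> dict:
        return {"session_id": request.session_id, "result": "SUCCESS"}


def handle_application_context_relocation(eec_request: dict, chf: StubChf) -> dict:
    """EES-side flow of process 1500: receive an ACR request from the EEC,
    build charging data, and exchange a charging data request/response with
    the CHF before continuing the relocation."""
    charging_data = {
        "eec_id": eec_request["eec_id"],
        "source_eas_id": eec_request["source_eas_id"],
        "target_eas_id": eec_request["target_eas_id"],
    }
    request = ChargingDataRequest(
        session_id=str(uuid.uuid4()),
        trigger="ACR_REQUEST",
        charging_data=charging_data,
    )
    response = chf.handle(request)
    # Only proceed with the relocation if the CHF accepted the request.
    return {"relocation_accepted": response["result"] == "SUCCESS",
            "charging_session_id": response["session_id"]}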

For one or more embodiments, at least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below. For example, the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below. For another example, circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section.

EXAMPLES

Example 1 may include a method of a CHF supported by one or more processors, the CHF configured to:

Receive from an entity the Charging Data Request related to charging for edge enabling infrastructure resources or application context transfer;

Process the Charging Data Request; and send a Charging Data Response to the entity.

Example 2 may include the method of example 1 or some other example herein, wherein the Charging Data Request is received from a MnS producer.

Example 3 may include the method of example 1 or some other example herein, wherein the Charging Data Request is received from a CEF.

Example 4 may include the method of examples 1 to 3 or some other example herein, wherein the Charging Data Request is to report the usage of the infrastructure resources for EAS.

Example 5 may include the method of examples 3 and 4 or some other example herein, wherein the CEF obtains the performance measurement related to usage of infrastructure resources for the EAS from an MnS producer.

Example 6 may include the method of examples 4 and 5 or some other example herein, wherein the usage of edge enabling infrastructure resource is data volume transferred for the EAS, virtual CPU usage of the EAS, virtual memory usage of the EAS, virtual disk usage of the EAS, or the virtual storage of the EAS.

Example 7 may include the method of example 1 or some other example herein, wherein the Charging Data Request is received from an EES.

Example 8 may include the method of example 1 and example 7 or some other example herein, wherein the Charging Data Request is for an EAS application context transfer request.

Example 9 may include the method of examples 1 to 8 or some other example herein, wherein the CHF processes the charging data request to check an account balance, create a CDR, open a CDR, update a CDR, and/or close a CDR.

Example 10 may include an EES supported by one or more processors, configured to:

Send a Charging Data Request to a CHF;

Receive a Charging Data Response from the CHF.

Example 11 may include the method of example 8 or some other example herein, wherein the Charging Data Request is for an EAS application context relocation request or a completed application context relocation procedure.

Example 12 may include an MnS producer for performance assurance supported by one or more processors, configured to:

Receive VR (virtualized resource) usage-related measurements from an ETSI NFV MANO system;

Generate the performance data for VR usage for EAS;

Send a Charging Data Request to a CHF;

Receive a Charging Data Response from the CHF; Send a request to a provisioning MnS producer to disable the operational state of the EAS;

Receive the response from the provisioning MnS producer on the result of disabling the operational state of the EAS.

Example 13 may include the method of example 12 or some other example herein, wherein the MnS producer for performance assurance reports the performance data to the CEF.

Example 14 may include a CEF supported by one or more processors, configured to:

Receive the performance data related to VR usage from the MnS producer for performance assurance;

Send a Charging Data Request to a CHF;

Receive a Charging Data Response from the CHF.

Example 15 may include a provisioning MnS producer supported by one or more processors, configured to:

Receive a request to disable the operational state of an EAS;

Disable the operational state of the EAS;

Send a response indicating the result of disabling the operational state of the EAS;

Send a request to an ETSI NFV MANO system to stop the VNF/VNFC instances for the EAS;

Receive the response from the ETSI NFV MANO system with the result of stopping the VNF/VNFC instances for the EAS.

Example 16 may include the method of example 15 or some other example herein, wherein the request to disable the operational state of an EAS is received from an MnS producer for performance assurance.

Example 17 may include the method of example 15 or some other example herein, wherein the request to disable the operational state of an EAS is received from a CEF.

Example 18 may include a CHF supported by one or more processors, configured to:

Receive the aggregated 5GS usage of a PLMN for an edge application or an EDN from a MnS producer;

Process the received usage for account, rating, and reservation control to provide charging for 5GS usage supporting edge computing.

Example X1 includes a method comprising: receiving, from an entity, a charging data request related to charging for edge enabling infrastructure resources or application context transfer; processing the charging data request to check an account balance, create a charging data record (CDR), open a CDR, update a CDR, or close a CDR; and transmitting a charging data response to the entity.

Example X2 includes the method of example X1 or some other example herein, wherein the entity is a management services (MnS) producer or charging enablement function (CEF).

Example X3 includes the method of example X2 or some other example herein, wherein the entity is a CEF that receives a performance measurement related to usage of infrastructure resources for EAS from a MnS producer.

Example X4 includes the method of example X1 or some other example herein, wherein the charging data request is to report the usage of infrastructure resources for an edge application server (EAS).

Example X5 includes the method of example X1 or some other example herein, wherein the infrastructure resource is a data volume transferred for the EAS, a virtual CPU usage of the EAS, a virtual memory usage of the EAS, a virtual disk usage of the EAS, or virtual storage of the EAS.

Example Y1 includes an apparatus comprising: memory to store virtualized resource (VR) usage measurement report information; and processing circuitry, coupled with the memory, to: retrieve the VR usage measurement report information from the memory, the VR usage measurement report information associated with a virtualized network function (VNF) or VNF component (VNFC) instance for an edge application server (EAS); and generate performance data associated with EAS resource usage based on the VR usage measurement report information.

Example Y2 includes the apparatus of example Y1 or some other example herein, wherein the VR usage measurement report information is received from a network functions virtualization (NFV) management and orchestration (MANO) system.

Example Y3 includes the apparatus of example Y1 or some other example herein, wherein the performance data is associated with usage of an edge-enabling infrastructure resource supporting the EAS.

Example Y4 includes the apparatus of example Y3 or some other example herein, wherein the performance data associated with usage of the edge-enabling infrastructure resource includes an indication of: a data volume transferred for the EAS, a virtual CPU usage of the EAS, a virtual memory usage of the EAS, a virtual disk usage of the EAS, or virtual storage of the EAS.

Example Y5 includes the apparatus of example Y3 or some other example herein, wherein the processing circuitry is further to: determine that a quota associated with the edge-enabling infrastructure resource has been met; in response to the determination that the quota associated with the edge-enabling infrastructure resource has been met, send a request to a provisioning management services (MnS) producer to disable an operational state of the EAS; and receive, from the provisioning MnS producer, a response indicating a result of disabling the operational state of the EAS.
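A minimal sketch of the quota-handling behavior described in example Y5 is given below; the ResourceQuota structure, the StubProvisioningMnsProducer placeholder, and the enforce_quota function are hypothetical simplifications and are not part of any 3GPP-defined interface.

# Illustrative sketch only; quota model and interface are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ResourceQuota:
    eas_id: str
    max_data_volume_bytes: int


class StubProvisioningMnsProducer:
    """Stand-in provisioning MnS producer that always reports success."""
    def disable_operational_state(self, eas_id: str) -> dict:
        return {"eas_id": eas_id, "operational_state": "DISABLED"}


def enforce_quota(measured_data_volume_bytes: int, quota: ResourceQuota,
                  provisioning: StubProvisioningMnsProducer) -> Optional[dict]:
    """If measured usage meets the quota, request that the provisioning MnS
    producer disable the operational state of the EAS and return its response;
    otherwise return None."""
    if measured_data_volume_bytes < quota.max_data_volume_bytes:
        return None  # quota not yet met; nothing to do
    return provisioning.disable_operational_state(quota.eas_id)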

Example Y6 includes the apparatus of example Y1 or some other example herein, wherein the processing circuitry is further to send a performance data report to a charging enablement function (CEF) that includes an indication of the generated performance data.

Example Y7 includes the apparatus of example Y6 or some other example herein, wherein the performance data report is sent to the CEF via a notifyFileReady file data reporting notification in response to an indication from the CEF that file data reporting is to be used.

Example Y8 includes the apparatus of example Y6 or some other example herein, wherein the performance data report is sent to the CEF via a reportStreamData operation in response to an indication from the CEF that streaming data reporting is to be used.

Example Y9 includes the apparatus of example Y1 or some other example herein, wherein the processing circuitry is further to: generate charging data related to the performance data; send a charging data request to a charging function (CHF) that includes an indication of the charging data; and receive a charging data response from the CHF that indicates a result of the charging data request.

Example Y10 includes the apparatus of any of examples Y1-Y9 or some other example herein, wherein the apparatus comprises a performance assurance MnS producer supporting a charging trigger function (CTF).

Example Y11 includes one or more computer-readable media storing instructions that, when executed by one or more processors, cause a performance assurance management services (MnS) producer supporting a charging trigger function (CTF) to: receive virtualized resource (VR) usage measurement report information from a network functions virtualization (NFV) management and orchestration (MANO) system, wherein the VR usage measurement report information is associated with a virtualized network function (VNF) or VNF component (VNFC) instance for an edge application server (EAS); and generate performance data associated with EAS resource usage based on the VR usage measurement report information.

Example Y12 includes the one or more computer-readable media of example Y11 or some other example herein, wherein the performance data is associated with usage of an edge enabling infrastructure resource supporting the EAS.

Example Y13 includes the one or more computer-readable media of example Y12 or some other example herein, wherein the performance data associated with usage of the edge enabling infrastructure resource includes an indication of: a data volume transferred for the EAS, a virtual CPU usage of the EAS, a virtual memory usage of the EAS, a virtual disk usage of the EAS, or virtual storage of the EAS.

Example Y14 includes the one or more computer-readable media of example Y12 or some other example herein, wherein the media further store instructions to: determine that a quota associated with the edge-enabling infrastructure resource has been met; in response to the determination that the quota associated with the edge-enabling infrastructure resource has been met, send a request to a provisioning management services (MnS) producer to disable an operational state of the EAS; and receive, from the provisioning MnS producer, a response indicating a result of disabling the operational state of the EAS.

Example Y15 includes the one or more computer-readable media of example Y11 or some other example herein, wherein the media further store instructions to send a performance data report to a charging enablement function (CEF) that includes an indication of the generated performance data.

Example Y16 includes the one or more computer-readable media of example Y15 or some other example herein, wherein the performance data report is sent to the CEF via a notifyFileReady file data reporting notification in response to an indication from the CEF that file data reporting is to be used.

Example Y17 includes the one or more computer-readable media of example Y15 or some other example herein, wherein the performance data report is sent to the CEF via a reportStreamData operation in response to an indication from the CEF that streaming data reporting is to be used.

Example Y18 includes the one or more computer-readable media of example Y11 or some other example herein, wherein the media further store instructions to: generate charging data related to the performance data; send a charging data request to a charging function (CHF) that includes an indication of the charging data; and receive a charging data response from the CHF that indicates a result of the charging data request.

Example Y19 includes one or more computer-readable media storing instructions that, when executed by one or more processors, cause an edge enabling server (EES) to: receive, from an edge enabler client (EEC), an application context relocation request; generate charging data based on the application context relocation request; send a charging data request that includes the charging data to a charging function (CHF); and receive a charging data response to the charging data request from the CHF.

Example Y20 includes the one or more computer-readable media of example Y19 or some other example herein, wherein the media further stores instructions to send an application context relocation notify message that indicates the charging data request was accepted to an edge application server (EAS).

Example Y21 includes the one or more computer-readable media of example Y19 or some other example herein, wherein the charging data request is a first charging data request, the charging data response is a first charging data response, and wherein the media further store instructions to: send an application context relocation complete message to the EEC; send a second charging data request associated with the application context relocation complete message to the CHF; and receive a second charging data response to the second charging data request from the CHF that indicates an update and closing associated with the first charging data request.

Example Y22 includes one or more computer-readable media storing instructions that, when executed by one or more processors, cause a charging enablement function (CEF) to: receive, from a performance assurance management services (MnS) producer, a performance data report that includes an indication of generated performance data; generate charging data related to the received performance data; send a charging data request to a charging function (CHF) that includes an indication of the charging data; and receive a charging data response from the CHF that indicates a result of the charging data request.

Example Z01 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-18, X1-X5, Y1-Y22, or any other method or process described herein.

Example Z02 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-18, X1-X5, Y1-Y22, or any other method or process described herein.

Example Z03 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-18, X1-X5, Y1-Y22, or any other method or process described herein.

Example Z04 may include a method, technique, or process as described in or related to any of examples 1-18, X1-X5, Y1-Y22, or portions or parts thereof.

Example Z05 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-18, X1-X5, Y1-Y22, or portions thereof.

Example Z06 may include a signal as described in or related to any of examples 1-18, X1-X5, Y1-Y22, or portions or parts thereof.

Example Z07 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-18, X1-X5, Y1-Y22, or portions or parts thereof, or otherwise described in the present disclosure.

Example Z08 may include a signal encoded with data as described in or related to any of examples 1-18, X1-X5, Y1-Y22, or portions or parts thereof, or otherwise described in the present disclosure.

Example Z09 may include a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-18, X1-X5, Y1-Y22, or portions or parts thereof, or otherwise described in the present disclosure.

Example Z10 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-18, X1-X5, Y1-Y22, or portions thereof.

Example Z11 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, techniques, or process as described in or related to any of examples 1-18, X1-X5, Y1-Y22, or portions thereof.

Example Z12 may include a signal in a wireless network as shown and described herein.

Example Z13 may include a method of communicating in a wireless network as shown and described herein.

Example Z14 may include a system for providing wireless communication as shown and described herein.

Example Z15 may include a device for providing wireless communication as shown and described herein.

Any of the above-described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments.

Abbreviations

Unless used differently herein, terms, definitions, and abbreviations may be consistent with terms, definitions, and abbreviations defined in 3GPP TR 21.905 v16.0.0 (2019-06). For the purposes of the present document, the following abbreviations may apply to the examples and embodiments discussed herein.

3GPP Third Generation Partnership Project
4G Fourth Generation
5G Fifth Generation
5GC 5G Core network
AC Application Client
ACK Acknowledgement
ACID Application Client Identification
AF Application Function
AM Acknowledged Mode
AMBR Aggregate Maximum Bit Rate
AMF Access and Mobility Management Function
AN Access Network
ANR Automatic Neighbour Relation
AP Application Protocol, Antenna Port, Access Point
API Application Programming Interface
APN Access Point Name
ARP Allocation and Retention Priority
ARQ Automatic Repeat Request
AS Access Stratum
ASP Application Service Provider
ASN.1 Abstract Syntax Notation One
AUSF Authentication Server Function
AWGN Additive White Gaussian Noise
BAP Backhaul Adaptation Protocol
BCH Broadcast Channel
BER Bit Error Ratio
BFD Beam Failure Detection
BLER Block Error Rate
BPSK Binary Phase Shift Keying
BRAS Broadband Remote Access Server
BSS Business Support System
BS Base Station
BSR Buffer Status Report
BW Bandwidth
BWP Bandwidth Part
C-RNTI Cell Radio Network Temporary Identity
CA Carrier Aggregation, Certification Authority
CAPEX CAPital EXpenditure
CBRA Contention Based Random Access
CC Component Carrier, Country Code, Cryptographic Checksum
CCA Clear Channel Assessment
CCE Control Channel Element
CCCH Common Control Channel
CE Coverage Enhancement
CDM Content Delivery Network
CDMA Code-Division Multiple Access
CFRA Contention Free Random Access
CG Cell Group
CGF Charging Gateway Function
CHF Charging Function
CI Cell Identity
CID Cell-ID (e.g., positioning method)
CIM Common Information Model
CIR Carrier to Interference Ratio
CK Cipher Key
CM Connection Management, Conditional Mandatory
CMAS Commercial Mobile Alert Service
CMD Command
CMS Cloud Management System
CO Conditional Optional
CoMP Coordinated Multi-Point
CORESET Control Resource Set
COTS Commercial Off-The-Shelf
CP Control Plane, Cyclic Prefix, Connection Point
CPD Connection Point Descriptor
CPE Customer Premise Equipment
CPICH Common Pilot Channel
CQI Channel Quality Indicator
CPU CSI processing unit, Central Processing Unit
C/R Command/Response field bit
CRAN Cloud Radio Access Network, Cloud RAN
CRB Common Resource Block
CRC Cyclic Redundancy Check
CRI Channel-State Information Resource Indicator, CSI-RS Resource Indicator
C-RNTI Cell RNTI
CS Circuit Switched
CSAR Cloud Service Archive
CSI Channel-State Information
CSI-IM CSI Interference Measurement
CSI-RS CSI Reference Signal
CSI-RSRP CSI reference signal received power
CSI-RSRQ CSI reference signal received quality
CSI-SINR CSI signal-to-noise and interference ratio
CSMA Carrier Sense Multiple Access
CSMA/CA CSMA with collision avoidance
CSS Common Search Space, Cell-specific Search Space
CTF Charging Trigger Function
CTS Clear-to-Send
CW Codeword
CWS Contention Window Size
D2D Device-to-Device
DC Dual Connectivity, Direct Current
DCI Downlink Control Information
DF Deployment Flavour
DL Downlink
DMTF Distributed Management Task Force
DPDK Data Plane Development Kit
DM-RS, DMRS Demodulation Reference Signal
DN Data network
DNN Data Network Name
DNAI Data Network Access Identifier
DRB Data Radio Bearer
DRS Discovery Reference Signal
DRX Discontinuous Reception
DSL Domain Specific Language, Digital Subscriber Line
DSLAM DSL Access Multiplexer
DwPTS Downlink Pilot Time Slot
E-LAN Ethernet Local Area Network
E2E End-to-End
ECCA extended clear channel assessment, extended CCA
ECCE Enhanced Control Channel Element, Enhanced CCE
ED Energy Detection
EDGE Enhanced Datarates for GSM Evolution (GSM Evolution)
EAS Edge Application Server
EASID Edge Application Server Identification
ECS Edge Configuration Server
ECSP Edge Computing Service Provider
EDN Edge Data Network
EEC Edge Enabler Client
EECID Edge Enabler Client Identification
EES Edge Enabler Server
EESID Edge Enabler Server Identification
EHE Edge Hosting Environment
EGMF Exposure Governance Management Function
EGPRS Enhanced GPRS
EIR Equipment Identity Register
eLAA enhanced Licensed Assisted Access, enhanced LAA
EM Element Manager
eMBB Enhanced Mobile Broadband
EMS Element Management System
eNB evolved NodeB, E-UTRAN Node B
EN-DC E-UTRA-NR Dual Connectivity
EPC Evolved Packet Core
EPDCCH enhanced PDCCH, enhanced Physical Downlink Control Channel
EPRE Energy per resource element
EPS Evolved Packet System
EREG enhanced REG, enhanced resource element groups
ETSI European Telecommunications Standards Institute
ETWS Earthquake and Tsunami Warning System
eUICC embedded UICC, embedded Universal Integrated Circuit Card
E-UTRA Evolved UTRA
E-UTRAN Evolved UTRAN
EV2X Enhanced V2X
F1AP F1 Application Protocol
F1-C F1 Control plane interface
F1-U F1 User plane interface
FACCH Fast Associated Control CHannel
FACCH/F Fast Associated Control Channel/Full rate
FACCH/H Fast Associated Control Channel/Half rate
FACH Forward Access Channel
FAUSCH Fast Uplink Signalling Channel
FB Functional Block
FBI Feedback Information
FCC Federal Communications Commission
FCCH Frequency Correction CHannel
FDD Frequency Division Duplex
FDM Frequency Division Multiplex
FDMA Frequency Division Multiple Access
FE Front End
FEC Forward Error Correction
FFS For Further Study
FFT Fast Fourier Transformation
feLAA further enhanced Licensed Assisted Access, further enhanced LAA
FN Frame Number
FPGA Field-Programmable Gate Array
FR Frequency Range
FQDN Fully Qualified Domain Name
G-RNTI GERAN Radio Network Temporary Identity
GERAN GSM EDGE RAN, GSM EDGE Radio Access Network
GGSN Gateway GPRS Support Node
GLONASS GLObal'naya NAvigatsionnaya Sputnikovaya Sistema (Engl.: Global Navigation Satellite System)
gNB Next Generation NodeB
gNB-CU gNB-centralized unit, Next Generation NodeB centralized unit
gNB-DU gNB-distributed unit, Next Generation NodeB distributed unit
GNSS Global Navigation Satellite System
GPRS General Packet Radio Service
GPSI Generic Public Subscription Identifier
GSM Global System for Mobile Communications, Groupe Special Mobile
GTP GPRS Tunneling Protocol
GTP-U GPRS Tunnelling Protocol for User Plane
GTS Go To Sleep Signal (related to WUS)
GUMMEI Globally Unique MME Identifier
GUTI Globally Unique Temporary UE Identity
HARQ Hybrid ARQ, Hybrid Automatic Repeat Request
HANDO Handover
HFN HyperFrame Number
HHO Hard Handover
HLR Home Location Register
HN Home Network
HO Handover
HPLMN Home Public Land Mobile Network
HSDPA High Speed Downlink Packet Access
HSN Hopping Sequence Number
HSPA High Speed Packet Access
HSS Home Subscriber Server
HSUPA High Speed Uplink Packet Access
HTTP Hyper Text Transfer Protocol
HTTPS Hyper Text Transfer Protocol Secure (https is http/1.1 over SSL, i.e. port 443)
I-Block Information Block
ICCID Integrated Circuit Card Identification
IAB Integrated Access and Backhaul
ICIC Inter-Cell Interference Coordination
ID Identity, identifier
IDFT Inverse Discrete Fourier Transform
IE Information element
IBE In-Band Emission
IEEE Institute of Electrical and Electronics Engineers
IEI Information Element Identifier
IEIDL Information Element Identifier Data Length
IETF Internet Engineering Task Force
IF Infrastructure
IM Interference Measurement, Intermodulation, IP Multimedia
IMC IMS Credentials
IMEI International Mobile Equipment Identity
IMGI International mobile group identity
IMPI IP Multimedia Private Identity
IMPU IP Multimedia PUblic identity
IMS IP Multimedia Subsystem
IMSI International Mobile Subscriber Identity
IoT Internet of Things
IP Internet Protocol
Ipsec IP Security, Internet Protocol Security
IP-CAN IP-Connectivity Access Network
IP-M IP Multicast
IPv4 Internet Protocol Version 4
IPv6 Internet Protocol Version 6
IR Infrared
IS In Sync
IRP Integration Reference Point
ISDN Integrated Services Digital Network
ISIM IM Services Identity Module
ISO International Organisation for Standardisation
ISP Internet Service Provider
IWF Interworking-Function
I-WLAN Interworking WLAN
K Constraint length of the convolutional code, USIM Individual key
kB Kilobyte (1000 bytes)
kbps kilo-bits per second
Kc Ciphering key
Ki Individual subscriber authentication key
KPI Key Performance Indicator
KQI Key Quality Indicator
KSI Key Set Identifier
ksps kilo-symbols per second
KVM Kernel Virtual Machine
L1 Layer 1 (physical layer)
L1-RSRP Layer 1 reference signal received power
L2 Layer 2 (data link layer)
L3 Layer 3 (network layer)
LAA Licensed Assisted Access
LAN Local Area Network
LADN Local Area Data Network
LBT Listen Before Talk
LCM LifeCycle Management
LCR Low Chip Rate
LCS Location Services
LCID Logical Channel ID
LI Layer Indicator
LLC Logical Link Control, Low Layer Compatibility
LPLMN Local PLMN
LPP LTE Positioning Protocol
LSB Least Significant Bit
LTE Long Term Evolution
LWA LTE-WLAN aggregation
LWIP LTE/WLAN Radio Level Integration with IPsec Tunnel
M2M Machine-to-Machine
MAC Medium Access Control (protocol layering context)
MAC Message authentication code (security/encryption context)
MAC-A MAC used for authentication and key agreement (TSG T WG3 context)
MAC-I MAC used for data integrity of signalling messages (TSG T WG3 context)
MANO Management and Orchestration
MBMS Multimedia Broadcast and Multicast Service
MBSFN Multimedia Broadcast multicast service Single Frequency Network
MCC Mobile Country Code
MCG Master Cell Group
MCOT Maximum Channel Occupancy Time
MCS Modulation and coding scheme
MDAF Management Data Analytics Function
MDAS Management Data Analytics Service
MDT Minimization of Drive Tests
ME Mobile Equipment
MeNB master eNB
MER Message Error Ratio
MGL Measurement Gap Length
MGRP Measurement Gap Repetition Period
MIB Master Information Block, Management Information Base
MIMO Multiple Input Multiple Output
MLC Mobile Location Centre
MM Mobility Management
MME Mobility Management Entity
MN Master Node
MNO Mobile Network Operator
MO Measurement Object, Mobile Originated
MPBCH MTC Physical Broadcast CHannel
MPDCCH MTC Physical Downlink Control CHannel
MPDSCH MTC Physical Downlink Shared CHannel
MPRACH MTC Physical Random Access CHannel
MPUSCH MTC Physical Uplink Shared Channel
MPLS Multiprotocol Label Switching
MS Mobile Station
MSB Most Significant Bit
MSC Mobile Switching Centre
MSI Minimum System Information, MCH Scheduling Information
MSID Mobile Station Identifier
MSIN Mobile Station Identification Number
MSISDN Mobile Subscriber ISDN Number
MT Mobile Terminated, Mobile Termination
MTC Machine-Type Communications
mMTC massive MTC, massive Machine-Type Communications
MU-MIMO Multi User MIMO
MWUS MTC wake-up signal, MTC WUS
NACK Negative Acknowledgement
NAI Network Access Identifier
NAS Non-Access Stratum, Non-Access Stratum layer
NCT Network Connectivity Topology
NC-JT Non-coherent Joint Transmission
NEC Network Capability Exposure
NE-DC NR-E-UTRA Dual Connectivity
NEF Network Exposure Function
NF Network Function
NFP Network Forwarding Path
NFPD Network Forwarding Path Descriptor
NFV Network Functions Virtualization
NFVI NFV Infrastructure
NFVO NFV Orchestrator
NG Next Generation, Next Gen
NGEN-DC NG-RAN E-UTRA-NR Dual Connectivity
NM Network Manager
NMS Network Management System
N-PoP Network Point of Presence
NMIB, N-MIB Narrowband MIB
NPBCH Narrowband Physical Broadcast CHannel
NPDCCH Narrowband Physical Downlink Control CHannel
NPDSCH Narrowband Physical Downlink Shared CHannel
NPRACH Narrowband Physical Random Access CHannel
NPUSCH Narrowband Physical Uplink Shared CHannel
NPSS Narrowband Primary Synchronization Signal
NSSS Narrowband Secondary Synchronization Signal
NR New Radio, Neighbour Relation
NRF NF Repository Function
NRS Narrowband Reference Signal
NS Network Service
NSA Non-Standalone operation mode
NSD Network Service Descriptor
NSR Network Service Record
NSSAI Network Slice Selection Assistance Information
S-NNSAI Single-NSSAI
NSSF Network Slice Selection Function
NW Network
NWUS Narrowband wake-up signal, Narrowband WUS
NZP Non-Zero Power
O&M Operation and Maintenance
ODU2 Optical channel Data Unit - type 2
OFDM Orthogonal Frequency Division Multiplexing
OFDMA Orthogonal Frequency Division Multiple Access
OOB Out-of-band
OOS Out of Sync
OPEX OPerating EXpense
OSI Other System Information
OSS Operations Support System
OTA over-the-air
PAPR Peak-to-Average Power Ratio
PAR Peak to Average Ratio
PBCH Physical Broadcast Channel
PC Power Control, Personal Computer
PCC Primary Component Carrier, Primary CC
PCell Primary Cell
PCI Physical Cell ID, Physical Cell Identity
PCEF Policy and Charging Enforcement Function
PCF Policy Control Function
PCRF Policy Control and Charging Rules Function
PDCP Packet Data Convergence Protocol, Packet Data Convergence Protocol layer
PDCCH Physical Downlink Control Channel
PDN Packet Data Network, Public Data Network
PDSCH Physical Downlink Shared Channel
PDU Protocol Data Unit
PEI Permanent Equipment Identifiers
PFD Packet Flow Description
P-GW PDN Gateway
PHICH Physical hybrid-ARQ indicator channel
PHY Physical layer
PLMN Public Land Mobile Network
PIN Personal Identification Number
PM Performance Measurement
PMI Precoding Matrix Indicator
PNF Physical Network Function
PNFD Physical Network Function Descriptor
PNFR Physical Network Function Record
POC PTT over Cellular
PP, PTP Point-to-Point
PPP Point-to-Point Protocol
PRACH Physical RACH
PRB Physical resource block
PRG Physical resource block group
ProSe Proximity Services, Proximity-Based Service
PRS Positioning Reference Signal
PRR Packet Reception Radio
PS Packet Services
PSBCH Physical Sidelink Broadcast Channel
PSDCH Physical Sidelink Downlink Channel
PSCCH Physical Sidelink Control Channel
PSSCH Physical Sidelink Shared Channel
PSCell Primary SCell
PSS Primary Synchronization Signal
PSTN Public Switched Telephone Network
PT-RS Phase-tracking reference signal
PTT Push-to-Talk
PUCCH Physical Uplink Control Channel
PUSCH Physical Uplink Shared Channel
QAM Quadrature Amplitude Modulation
QCI QoS class of identifier
QCL Quasi co-location
QFI QoS Flow ID, QoS Flow Identifier
QoS Quality of Service
QPSK Quadrature (Quaternary) Phase Shift Keying
QZSS Quasi-Zenith Satellite System
RA-RNTI Random Access RNTI
RAB Radio Access Bearer, Random Access Burst
RACH Random Access Channel
RADIUS Remote Authentication Dial In User Service
RAN Radio Access Network
RAND RANDom number (used for authentication)
RAR Random Access Response
RAT Radio Access Technology
RAU Routing Area Update
RB Resource block, Radio Bearer
RBG Resource block group
REG Resource Element Group
Rel Release
REQ REQuest
RF Radio Frequency
RI Rank Indicator
RIV Resource indicator value
RL Radio Link
RLC Radio Link Control, Radio Link Control layer
RLC AM RLC Acknowledged Mode
RLC UM RLC Unacknowledged Mode
RLF Radio Link Failure
RLM Radio Link Monitoring
RLM-RS Reference Signal for RLM
RM Registration Management
RMC Reference Measurement Channel
RMSI Remaining MSI, Remaining Minimum System Information
RN Relay Node
RNC Radio Network Controller
RNL Radio Network Layer
RNTI Radio Network Temporary Identifier
ROHC RObust Header Compression
RRC Radio Resource Control, Radio Resource Control layer
RRM Radio Resource Management
RS Reference Signal
RSRP Reference Signal Received Power
RSRQ Reference Signal Received Quality
RSSI Received Signal Strength Indicator
RSU Road Side Unit
RSTD Reference Signal Time difference
RTP Real Time Protocol
RTS Ready-To-Send
RTT Round Trip Time
Rx Reception, Receiving, Receiver
S1AP S1 Application Protocol
S1-MME S1 for the control plane
S1-U S1 for the user plane
S-GW Serving Gateway
S-RNTI SRNC Radio Network Temporary Identity
S-TMSI SAE Temporary Mobile Station Identifier
SA Standalone operation mode
SAE System Architecture Evolution
SAP Service Access Point
SAPD Service Access Point Descriptor
SAPI Service Access Point Identifier
SCC Secondary Component Carrier, Secondary CC
SCell Secondary Cell
SCEF Service Capability Exposure Function
SC-FDMA Single Carrier Frequency Division Multiple Access
SCG Secondary Cell Group
SCM Security Context Management
SCS Subcarrier Spacing
SCTP Stream Control Transmission Protocol
SDAP Service Data Adaptation Protocol, Service Data Adaptation Protocol layer
SDL Supplementary Downlink
SDNF Structured Data Storage Network Function
SDP Session Description Protocol
SDSF Structured Data Storage Function
SDU Service Data Unit
SEAF Security Anchor Function
SeNB secondary eNB
SEPP Security Edge Protection Proxy
SFI Slot format indication
SFTD Space-Frequency Time Diversity, SFN and frame timing difference
SFN System Frame Number
SgNB Secondary gNB
SGSN Serving GPRS Support Node
S-GW Serving Gateway
SI System Information
SI-RNTI System Information RNTI
SIB System Information Block
SIM Subscriber Identity Module
SIP Session Initiated Protocol
SiP System in Package
SL Sidelink
SLA Service Level Agreement
SM Session Management
SMF Session Management Function
SMS Short Message Service
SMSF SMS Function
SMTC SSB-based Measurement Timing Configuration
SN Secondary Node, Sequence Number
SoC System on Chip
SON Self-Organizing Network
SpCell Special Cell
SP-CSI-RNTI Semi-Persistent CSI RNTI
SPS Semi-Persistent Scheduling
SQN Sequence number
SR Scheduling Request
SRB Signalling Radio Bearer
SRS Sounding Reference Signal
SS Synchronization Signal
SSB Synchronization Signal Block
SSID Service Set Identifier
SS/PBCH Block
SSBRI SS/PBCH Block Resource Indicator, Synchronization Signal Block Resource Indicator
SSC Session and Service Continuity
SS-RSRP Synchronization Signal based Reference Signal Received Power
SS-RSRQ Synchronization Signal based Reference Signal Received Quality
SS-SINR Synchronization Signal based Signal to Noise and Interference Ratio
SSS Secondary Synchronization Signal
SSSG Search Space Set Group
SSSIF Search Space Set Indicator
SST Slice/Service Types
SU-MIMO Single User MIMO
SUL Supplementary Uplink
TA Timing Advance, Tracking Area
TAC Tracking Area Code
TAG Timing Advance Group
TAI Tracking Area Identity
TAU Tracking Area Update
TB Transport Block
TBS Transport Block Size
TBD To Be Defined
TCI Transmission Configuration Indicator
TCP Transmission Communication Protocol
TDD Time Division Duplex
TDM Time Division Multiplexing
TDMA Time Division Multiple Access
TE Terminal Equipment
TEID Tunnel End Point Identifier
TFT Traffic Flow Template
TMSI Temporary Mobile Subscriber Identity
TNL Transport Network Layer
TPC Transmit Power Control
TPMI Transmitted Precoding Matrix Indicator
TR Technical Report
TRP, TRxP Transmission Reception Point
TRS Tracking Reference Signal
TRx Transceiver
TS Technical Specifications, Technical Standard
TTI Transmission Time Interval
Tx Transmission, Transmitting, Transmitter
U-RNTI UTRAN Radio Network Temporary Identity
UART Universal Asynchronous Receiver and Transmitter
UCI Uplink Control Information
UE User Equipment
UDM Unified Data Management
UDP User Datagram Protocol
UDSF Unstructured Data Storage Network Function
UICC Universal Integrated Circuit Card
UL Uplink
UM Unacknowledged Mode
UML Unified Modelling Language
UMTS Universal Mobile Telecommunications System
UP User Plane
UPF User Plane Function
URI Uniform Resource Identifier
URL Uniform Resource Locator
URLLC Ultra-Reliable and Low Latency
USB Universal Serial Bus
USIM Universal Subscriber Identity Module
USS UE-specific search space
UTRA UMTS Terrestrial Radio Access
UTRAN Universal Terrestrial Radio Access Network
UwPTS Uplink Pilot Time Slot
V2I Vehicle-to-Infrastructure
V2P Vehicle-to-Pedestrian
V2V Vehicle-to-Vehicle
V2X Vehicle-to-everything
VIM Virtualized Infrastructure Manager
VL Virtual Link
VLAN Virtual LAN, Virtual Local Area Network
VM Virtual Machine
VNF Virtualized Network Function
VNFFG VNF Forwarding Graph
VNFFGD VNF Forwarding Graph Descriptor
VNFM VNF Manager
VoIP Voice-over-IP, Voice-over-Internet Protocol
VPLMN Visited Public Land Mobile Network
VPN Virtual Private Network
VRB Virtual Resource Block
WiMAX Worldwide Interoperability for Microwave Access
WLAN Wireless Local Area Network
WMAN Wireless Metropolitan Area Network
WPAN Wireless Personal Area Network
X2-C X2-Control plane
X2-U X2-User plane
XML eXtensible Markup Language
XRES EXpected user RESponse
XOR eXclusive OR
ZC Zadoff-Chu
ZP Zero Power

Terminology

For the purposes of the present document, the following terms and definitions are applicable to the examples and embodiments discussed herein.

The term “circuitry” as used herein refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable SoC), digital signal processors (DSPs), etc., that are configured to provide the described functionality. In some embodiments, the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. The term “circuitry” may also refer to a combination of one or more hardware elements (or a combination of circuits used in an electrical or electronic system) with the program code used to carry out the functionality of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry.

The term “processor circuitry” as used herein refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data. Processing circuitry may include one or more processing cores to execute instructions and one or more memory structures to store program and data information. The term “processor circuitry” may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes. Processing circuitry may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like. The one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators. The terms “application circuitry” and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.”

The term “interface circuitry” as used herein refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices. The term “interface circuitry” may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like.

The term “user equipment” or “UE” as used herein refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network. The term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc. Furthermore, the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface.

The term “network element” as used herein refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services. The term “network element” may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized VNF, NFVI, and/or the like.

The term “computer system” as used herein refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources.

The term “appliance,” “computer appliance,” or the like, as used herein refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource. A “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource.

The term “resource” as used herein refers to a physical or virtual device, a physical or virtual component within a computing environment, and/or a physical or virtual component within a particular device, such as computer devices, mechanical devices, memory space, processor/CPU time, processor/CPU usage, processor and accelerator loads, hardware time or usage, electrical power, input/output operations, ports or network sockets, channel/link allocation, throughput, memory usage, storage, network, database and applications, workload units, and/or the like. A “hardware resource” may refer to compute, storage, and/or network resources provided by physical hardware element(s). A “virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc. The term “network resource” or “communication resource” may refer to resources that are accessible by computer devices/systems via a communications network. The term “system resources” may refer to any kind of shared entities to provide services, and may include computing and/or network resources. System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable.

The term “channel” as used herein refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream. The term “channel” may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated. Additionally, the term “link” as used herein refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information.

The terms “instantiate,” “instantiation,” and the like as used herein refer to the creation of an instance. An “instance” also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code.

The terms “coupled,” “communicatively coupled,” along with derivatives thereof are used herein. The term “coupled” may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term “directly coupled” may mean that two or more elements are in direct contact with one another. The term “communicatively coupled” may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.

The term “information element” refers to a structural element containing one or more fields. The term “field” refers to individual contents of an information element, or a data element that contains content.

The term “SMTC” refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration.

The term “SSB” refers to an SS/PBCH block.

The term “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure.

The term “Primary SCG Cell” refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation.

The term “Secondary Cell” refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA.

The term “Secondary Cell Group” refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC.

The term “Serving Cell” refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; there is only one serving cell comprising the primary cell.

The term “serving cell” or “serving cells” refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA/DC.

The term “Special Cell” refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.