

Title:
NETWORK FUNCTION INSTANCE DISCOVERY
Document Type and Number:
WIPO Patent Application WO/2023/227934
Kind Code:
A1
Abstract:
Discovery service network equipment (14) in a communication network (10) receives a request (18) for discovery of one or more network function, NF, instances that satisfy one or more input filter criterions (20). Responsive to the request (18), the discovery service network equipment (14) transmits a response (22) that includes information (24) about one or more NF instances (26) satisfying the one or more input filter criterions (20). The one or more NF instances (26) satisfying the one or more input filter criterions (20) include one or more unavailable NF instances (26U) that are unavailable. In some embodiments, for each of the one or more unavailable NF instances (26U), the information (24) about the unavailable NF instance (26U) indicates when the unavailable NF instance can be or will be available.

Inventors:
KARAPANTELAKIS ATHANASIOS (SE)
WOERNDLE PETER (DE)
FIKOURAS IOANNIS (SE)
Application Number:
PCT/IB2022/060401
Publication Date:
November 30, 2023
Filing Date:
October 28, 2022
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
H04L67/51
Foreign References:
US20160381148A12016-12-29
Other References:
HUAWEI: "Solution for Conveying Overload Information via NRF", vol. CT WG4, no. Xi'an, P.R.China; 20190408 - 20190412, 12 April 2019 (2019-04-12), XP051706751, Retrieved from the Internet [retrieved on 20190412]
"3rd Generation Partnership Project; Technical Specification Group Core Network and Terminals; Study on Load and Overload Control of 5GC Service Based Interfaces; (Release 16)", 3GPP STANDARD; TECHNICAL REPORT; 3GPP TR 29.843, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, no. V1.0.0, 7 March 2019 (2019-03-07), pages 1 - 23, XP051722728
Attorney, Agent or Firm:
LEONARD, Justin (US)
Claims:
CLAIMS

What is claimed is:

1. A method performed by discovery service network equipment (14) in a communication network (10), the method comprising: receiving (600) a request (18) for discovery of one or more network function, NF, instances (26) that satisfy one or more input filter criterions (20); and responsive (610) to the request (18), transmitting a response (22) that includes information (24) about one or more NF instances (26) satisfying the one or more input filter criterions (20), wherein the one or more NF instances (26) satisfying the one or more input filter criterions (20) include one or more unavailable NF instances (26U) that are unavailable, wherein, for each of the one or more unavailable NF instances (26U), the information (24) about the unavailable NF instance (26U) indicates when the unavailable NF instance (26U) can be or will be available.

2. The method of claim 1, wherein the one or more unavailable NF instances (26U) include one or more uninitialized NF instances that are unavailable because the one or more uninitialized NF instances have not been initialized, wherein initialization of an NF instance (26) includes setting initial values of parameters of the NF instance (26), and wherein, for each of the one or more uninitialized NF instances, the information (24) about the uninitialized NF instance indicates when the uninitialized NF instance can be or will be available by indicating when the uninitialized NF instance can be or will be initialized.

3. The method of any of claims 1-2, wherein the one or more unavailable NF instances (26U) include one or more unorchestrated NF instances that are unavailable because the one or more unorchestrated NF instances have not been orchestrated, wherein orchestration of an NF instance (26) includes allocating resources for the NF instance (26), and wherein, for each of the one or more unorchestrated NF instances, the information (24) about the unorchestrated NF instance indicates when the unorchestrated NF instance can be or will be available by indicating when the unorchestrated NF instance can be or will be orchestrated.

4. The method of any of claims 1-3, wherein the one or more unavailable NF instances (26U) include one or more overloaded NF instances that are unavailable because the one or more overloaded NF instances lack capacity, and wherein, for each of the one or more overloaded NF instances, the information (24) about the overloaded NF instance indicates when the overloaded NF instance can be or will be available by indicating when the overloaded NF instance is expected to have capacity.

5. The method of any of claims 1-4, wherein, for each of the one or more unavailable NF instances (26U), the information (24) about the unavailable NF instance (26U) includes a promise parameter that indicates when the unavailable NF instance (26U) can be or will be available by indicating a duration of time after which the unavailable NF instance (26U) is promised to be available.

6. The method of any of claims 1-5, further comprising, for each of the one or more unavailable NF instances (26U), retrieving the information (24) that indicates when the unavailable NF instance (26U) can be or will be available from a distributed database (32).

7. The method of claim 6, wherein the distributed database (32) is a permissioned distributed ledger.

8. The method of any of claims 6-7, wherein the distributed ledger includes one or more records for each of one or more vendors (30) of the communication network (10), wherein a record for a vendor (30) indicates, for each of one or more types (36) of NFs, when the vendor (30) can or will make an instance of the type (36) of NF available.

9. The method of claim 8, wherein the record for a vendor (30) further indicates one or more of: one or more types (36) of NFs that the vendor (30) is able to orchestrate; an endpoint to send a request for the vendor (30) to orchestrate an NF instance (26); a geographical location at which the vendor (30) orchestrates an NF instance (26); and a cost to orchestrate an NF instance (26), wherein the cost is an amount of money or an amount of power.

10. The method of any of claims 1-9, further comprising: training a model at the discovery service network equipment (14) to predict when any given unavailable NF instance (26U) can be or will be available; and for each of the one or more unavailable NF instances (26U), using the trained model to determine when the unavailable NF instance (26U) can be or will be available.

11. The method of any of claims 1-10, wherein the one or more input filter criterions (20) include one or more of: an NF type criterion specifying a set of one or more NF types (36), wherein an NF instance (26) satisfies the NF type criterion if the NF instance (26) is an instance of an NF that has any of the one or more NF types (36); a location criterion specifying a set of one or more locations, wherein an NF instance (26) satisfies the location criterion if the NF instance (26) is deployed at any of the one or more locations; a cost criterion specifying a maximum cost, wherein an NF instance (26) satisfies the cost criterion if the cost of the NF instance (26) is less than or equal to the maximum cost, wherein the maximum cost is a maximum amount of money or a maximum amount of power; a vendor criterion specifying a set of one or more vendors (30), wherein an NF instance (26) satisfies the vendor criterion if the NF instance (26) is provided by any of the one or more vendors (30); and a capability criterion specifying a set of one or more capabilities, wherein an NF instance (26) satisfies the capability criterion if the NF instance (26) has any of the one or more capabilities.

12. The method of any of claims 1-11, further comprising: based on the request (18), discovering (605) a set of one or more candidate NF instances that satisfy the one or more input filter criterions (20); and selecting a candidate NF instance from the discovered set to provide a service to a service consumer, wherein the one or more unavailable NF instances (26U) comprise the selected candidate NF instance; wherein the response (22) includes information (24) about the selected candidate NF instance.

13. The method of claim 12, wherein selecting a candidate NF instance from the discovered set comprises selecting the candidate NF instance as a function of: how soon the candidate NF instance can be or will be available; and/or how soon the candidate NF instance is needed to be available; and/or a cost of the candidate NF instance.

14. The method of any of claims 12-13, further comprising training a reinforcement learning model to predict which candidate NF instance to select to yield a maximum reward specified in terms of NF instance availability, latency, and/or throughput, wherein selecting a candidate NF instance from the discovered set comprises selecting the candidate NF instance that yields a maximum reward according to the reinforcement learning model.

15. The method of any of claims 12-14, further comprising dynamically initiating orchestration of the selected candidate NF instance.

16. The method of any of claims 1-15, wherein the discovery service network equipment (14) implements an NF Repository Function, NRF, or a Service Communication Proxy, SCP.

17. A method performed by network equipment (16) configured to operate as a service consumer or as a proxy for a service consumer, the method comprising: receiving (700) a response (22) to a request (18) for discovery of one or more network function, NF, instances (26) that satisfy one or more input filter criterions (20), wherein the response (22) includes information (24) about one or more NF instances (26) satisfying the one or more input filter criterions (20), wherein the one or more NF instances (26) satisfying the one or more input filter criterions (20) include one or more unavailable NF instances (26U) that are unavailable, wherein, for each of the one or more unavailable NF instances (26U), the information (24) about the unavailable NF instance (26U) indicates when the unavailable NF instance (26U) can be or will be available.

18. The method of claim 17, wherein the one or more unavailable NF instances (26U) include one or more uninitialized NF instances that are unavailable because the one or more uninitialized NF instances have not been initialized, wherein initialization of an NF instance includes setting initial values of parameters of the NF instance, and wherein, for each of the one or more uninitialized NF instances, the information (24) about the uninitialized NF instance indicates when the uninitialized NF instance can be or will be available by indicating when the uninitialized NF instance can be or will be initialized.

19. The method of any of claims 17-18, wherein the one or more unavailable NF instances (26U) include one or more unorchestrated NF instances that are unavailable because the one or more unorchestrated NF instances have not been orchestrated, wherein orchestration of an NF instance includes allocating resources for the NF instance, and wherein, for each of the one or more unorchestrated NF instances, the information (24) about the unorchestrated NF instance indicates when the unorchestrated NF instance can be or will be available by indicating when the unorchestrated NF instance can be or will be orchestrated.

20. The method of any of claims 17-19, wherein the one or more unavailable NF instances (26U) include one or more overloaded NF instances that are unavailable because the one or more overloaded NF instances lack capacity, and wherein, for each of the one or more overloaded NF instances, the information (24) about the overloaded NF instance indicates when the overloaded NF instance can be or will be available by indicating when the overloaded NF instance is expected to have capacity.

21. The method of any of claims 17-20, wherein, for each of the one or more unavailable NF instances (26U), the information (24) about the unavailable NF instance (26U) includes a promise parameter that indicates when the unavailable NF instance (26U) can be or will be available by indicating a duration of time after which the unavailable NF instance (26U) is promised to be available.

22. The method of any of claims 17-21, wherein the one or more input filter criterions (20) include one or more of: an NF type criterion specifying a set of one or more NF types (36), wherein an NF instance (26) satisfies the NF type criterion if the NF instance (26) is an instance of an NF that has any of the one or more NF types (36); a location criterion specifying a set of one or more locations, wherein an NF instance (26) satisfies the location criterion if the NF instance (26) is deployed at any of the one or more locations; a cost criterion specifying a maximum cost, wherein an NF instance (26) satisfies the cost criterion if the cost of the NF instance (26) is less than or equal to the maximum cost, wherein the maximum cost is a maximum amount of money or a maximum amount of power; a vendor criterion specifying a set of one or more vendors (30), wherein an NF instance (26) satisfies the vendor criterion if the NF instance (26) is provided by any of the one or more vendors (30); and a capability criterion specifying a set of one or more capabilities, wherein an NF instance (26) satisfies the capability criterion if the NF instance (26) has any of the one or more capabilities.

23. The method of any of claims 17-22, further comprising selecting (710) an NF instance (26) from the one or more NF instances (26) to provide a service to the service consumer, wherein selecting the NF instance (26) comprises selecting the NF instance (26) as a function of: how soon the NF instance (26) can be or will be available; and/or how soon the NF instance (26) is needed to be available; and/or a cost of the NF instance (26).

24. The method of claim 23, further comprising dynamically initiating (720) orchestration of the selected NF instance.

25. The method of any of claims 23-24, further comprising training a reinforcement learning model to predict which NF instance (26) to select to yield a maximum reward specified in terms of NF instance availability, latency, and/or throughput, and wherein the method further comprises selecting, from the one or more NF instances (26), an NF instance (26) to provide a service to the service consumer, wherein selecting the NF instance (26) comprises selecting the NF instance (26) that yields a maximum reward according to the reinforcement learning model.

26. A method performed by vendor equipment (34) of a vendor (30) for a communication network (10), the method comprising: determining (800), for each of one or more types (36) of network functions, NFs, when the vendor (30) can or will make an instance of the type (36) of NF available; and adding (810), to a distributed database (32), a record that indicates, for each of the one or more types (36) of NFs, when the vendor (30) can or will make an instance of the type (36) of NF available.

27. The method of claim 26, wherein, for each of the one or more types (36) of NFs, the record indicates: how long orchestration of an instance of the type (36) of NF takes; and/or when an instance of the type (36) of NF is expected to have capacity for providing a service to a service consumer; and/or a promise parameter indicating a duration of time after which an instance of the type (36) of NF is promised to be available.

28. The method of any of claims 26-27, wherein the record further indicates, for each of the one or more types (36) of NFs, one or more of: an endpoint to send a request for the vendor (30) to orchestrate an instance of the type (36) of NF; a geographical location at which the vendor (30) orchestrates an instance of the type (36) of NF; and a cost to orchestrate an instance of the type (36) of NF, wherein the cost is an amount of money or an amount of power.

29. The method of any of claims 26-28, wherein the distributed database (32) is a permissioned distributed ledger.

30. Discovery service network equipment (14) in a communication network (10), the discovery service network equipment (14) configured to: receive a request (18) for discovery of one or more network function, NF, instances (26) that satisfy one or more input filter criterions (20); and responsive to the request (18), transmit a response (22) that includes information (24) about one or more NF instances (26) satisfying the one or more input filter criterions (20), wherein the one or more NF instances (26) satisfying the one or more input filter criterions (20) include one or more unavailable NF instances (26U) that are unavailable, wherein, for each of the one or more unavailable NF instances (26U), the information (24) about the unavailable NF instance (26U) indicates when the unavailable NF instance (26U) can be or will be available.

31. The discovery service network equipment (14) of claim 30, configured to perform the method of any of claims 2-16.

32. Network equipment (16) configured to operate as a service consumer or as a proxy for a service consumer, the network equipment (16) configured to: receive a response (22) to a request (18) for discovery of one or more network function, NF, instances (26) that satisfy one or more input filter criterions (20), wherein the response (22) includes information (24) about one or more NF instances (26) satisfying the one or more input filter criterions (20), wherein the one or more NF instances (26) satisfying the one or more input filter criterions (20) include one or more unavailable NF instances (26U) that are unavailable, wherein, for each of the one or more unavailable NF instances (26U), the information (24) about the unavailable NF instance (26U) indicates when the unavailable NF instance (26U) can be or will be available.

33. The network equipment (16) of claim 32, configured to perform the method of any of claims 18-25.

34. Vendor equipment (34) of a vendor (30) for a communication network (10), the vendor equipment (34) configured to: determine, for each of one or more types (36) of network functions, NFs, when the vendor (30) can or will make an instance of the type (36) of NF available; and add, to a distributed database (32), a record that indicates, for each of the one or more types (36) of NFs, when the vendor (30) can or will make an instance of the type (36) of NF available.

35. The vendor equipment (34) of claim 34, configured to perform the method of any of claims 27-29.

36. A computer program comprising instructions which, when executed by at least one processor of discovery service network equipment (14), cause the discovery service network equipment (14) to perform the method of any of claims 1-16.

37. A computer program comprising instructions which, when executed by at least one processor of network equipment (16) configured to operate as a service consumer or as a proxy for a service consumer, cause the network equipment (16) to perform the method of any of claims 17-25.

38. A computer program comprising instructions which, when executed by at least one processor of vendor equipment (34) of a vendor (30) for a communication network (10), cause the vendor equipment (34) to perform the method of any of claims 26-29.

39. A carrier containing the computer program of any of claims 36-38, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.

40. A system in a communication network (10), the system comprising discovery service network equipment (14) configured to: receive a request (18) for discovery of one or more network function, NF, instances (26) that satisfy one or more input filter criterions (20); and responsive to the request (18), transmit a response (22) that includes information (24) about one or more NF instances (26) satisfying the one or more input filter criterions (20), wherein the one or more NF instances (26) satisfying the one or more input filter criterions (20) include one or more unavailable NF instances (26U) that are unavailable, wherein, for each of the one or more unavailable NF instances (26U), the information (24) about the unavailable NF instance (26U) indicates when the unavailable NF instance (26U) can be or will be available; and network equipment (16) configured to operate as a service consumer or as a proxy for a service consumer, wherein the network equipment (16) is configured to receive the response (22).

41. The system of claim 40, wherein the discovery service network equipment (14) is configured to perform the method of any of claims 2-16 and/or the network equipment (16) is configured to perform the method of any of claims 18-25.

42. Discovery service network equipment (14) in a communication network (10), the discovery service network equipment (14) comprising: communication circuitry (920); and processing circuitry (910) configured to: receive, via the communication circuitry, a request (18) for discovery of one or more network function, NF, instances (26) that satisfy one or more input filter criterions (20); and responsive to the request (18), transmit, via the communication circuitry, a response (22) that includes information (24) about one or more NF instances (26) satisfying the one or more input filter criterions (20), wherein the one or more NF instances (26) satisfying the one or more input filter criterions (20) include one or more unavailable NF instances (26U) that are unavailable, wherein, for each of the one or more unavailable NF instances (26U), the information (24) about the unavailable NF instance (26U) indicates when the unavailable NF instance (26U) can be or will be available.

43. The discovery service network equipment (14) of claim 42, wherein the processing circuitry (910) is configured to perform the method of any of claims 2-16.

44. Network equipment (16) configured to operate as a service consumer or as a proxy for a service consumer, the network equipment (16) comprising communication circuitry (1020); and processing circuitry (1010) configured to receive, via the communication circuitry (1020), a response (22) to a request (18) for discovery of one or more network function, NF, instances (26) that satisfy one or more input filter criterions (20), wherein the response (22) includes information (24) about one or more NF instances (26) satisfying the one or more input filter criterions (20), wherein the one or more NF instances (26) satisfying the one or more input filter criterions (20) include one or more unavailable NF instances (26U) that are unavailable, wherein, for each of the one or more unavailable NF instances (26U), the information (24) about the unavailable NF instance (26U) indicates when the unavailable NF instance (26U) can be or will be available.

45. The network equipment (16) of claim 44, the processing circuitry (1010) configured to perform the method of any of claims 18-25.

46. Vendor equipment (34) of a vendor (30) for a communication network (10), the vendor equipment (34) comprising: communication circuitry (1120); and processing circuitry (1110) configured to: determine, for each of one or more types (36) of network functions, NFs, when the vendor (30) can or will make an instance of the type (36) of NF available; and add, to a distributed database (32), a record that indicates, for each of the one or more types (36) of NFs, when the vendor (30) can or will make an instance of the type (36) of NF available.

47. The vendor equipment (34) of claim 46, the processing circuitry (1110) configured to perform the method of any of claims 27-29.

Description:
NETWORK FUNCTION INSTANCE DISCOVERY

TECHNICAL FIELD

The present application relates generally to a communication network, and relates more particularly to network function instance discovery in such a network.

BACKGROUND

The next generation (5G) core network (CN) uses a service-based architecture that leverages service-based interactions between CN network functions (NFs). NFs in this regard enable other authorized NFs to access their services. Alternatively or in addition to predefined interfaces between network elements, an instance of an NF that needs to consume a service of a certain type queries a so-called network repository function (NRF) to discover and communicate with an instance of another NF that provides that type of service.

In particular, NFs can take on a provider role as a provider of a service (NFp) and/or a consumer role as a consumer of a service (NFc). An instance of an NFp starts and registers itself with the NRF. This registration makes the NRF aware that the instance of the NFp exists. At a later point, an instance of an NFc that needs to use a specific service runs a procedure called discovery towards the NRF. If the NRF has at least one registered instance of the NFp that matches this discovery request, the NRF provides the instance of the NFc with the information needed to set up communication with a discovered instance of the NFp. This information may be, for example, the IP address and port of the NFp instance. If more than one NFp instance matches the discovery request, such that there are multiple candidate NFp instances capable of providing the service to the NFc instance, the NRF may respond with information indicating those multiple candidate NFp instances, so that the NFc instance can choose from among them.
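
To make this registration and discovery exchange concrete, the following is a minimal sketch, in Python, of an NRF-like registry with in-memory profiles. The class and field names (Registry, nf_type, ip, port) are illustrative assumptions, not the 3GPP-defined service operations or schema.

```python
# Minimal sketch of NFp registration and NFc discovery at an NRF-like registry.
# Class and field names (nf_type, ip, port) are illustrative, not 3GPP-defined.

class Registry:
    def __init__(self):
        self.profiles = []  # profiles of registered NFp instances

    def register(self, profile: dict) -> None:
        # An NFp instance registers itself so the registry knows it exists
        self.profiles.append(profile)

    def discover(self, nf_type: str) -> list:
        # Return all registered NFp instances of the requested type
        return [p for p in self.profiles if p["nf_type"] == nf_type]

registry = Registry()
registry.register({"nf_type": "SMF", "ip": "10.0.0.5", "port": 8080})
registry.register({"nf_type": "SMF", "ip": "10.0.0.6", "port": 8080})

# An NFc instance runs discovery and may choose among multiple candidates
candidates = registry.discover("SMF")
print(candidates)
```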

The service-based architecture advantageously enables greater flexibility and speed in the development of new CN services, as it becomes possible to connect to other components without introducing new interfaces. The service-based architecture nonetheless introduces challenges in some contexts, such as where NFps for mission-critical communication applications are hosted in cloud datacenters, where NFps are hosted in public clouds with unmanaged transport connections towards the NFc, and/or where NFps are accessed through or hosted by non-terrestrial networks. In these and other contexts, ensuring NFp instance availability to fulfill discovery requests requires the network operator to pre-provision enough NFp instances to meet the highest demand and therefore requires considerable computational resources and expense. Failure to pre-provision enough NFp instances means that some discovery requests go unfulfilled and communication network performance suffers.

SUMMARY

According to some embodiments herein, a communication network accommodates the unavailability of network function (NF) instances to fulfill a discovery request. Rather than simply responding to a discovery request with a notification that no NF instances, or only a limited number of NF instances, are available to fulfill the request, some embodiments herein respond to the discovery request with information about when NF instance(s) can be or will be available. By enabling the communication network to fulfill discovery requests on the basis of the communication network’s ability to make NF instance(s) available in the future (rather than only on the basis of the current availability of NF instance(s)), some embodiments herein allow the communication network to dynamically orchestrate NF instance(s) on-the-fly as needed to meet actual demand. This in turn relieves the communication network from having to pre-provision NF instances on the basis of expected demand. Some embodiments herein therefore advantageously promote more sustainable use of computational resources and expense, without compromising discovery request fulfillment or communication network performance.

More particularly, embodiments herein include a method performed by discovery service network equipment in a communication network. The method comprises receiving a request for discovery of one or more network function, NF, instances that satisfy one or more input filter criterions. The method also comprises, responsive to the request, transmitting a response that includes information about one or more NF instances satisfying the one or more input filter criterions. The one or more NF instances satisfying the one or more input filter criterions include one or more unavailable NF instances that are unavailable. In some embodiments, for each of the one or more unavailable NF instances, the information about the unavailable NF instance indicates when the unavailable NF instance can be or will be available.

In some embodiments, the one or more unavailable NF instances include one or more uninitialized NF instances that are unavailable because the one or more uninitialized NF instances have not been initialized. In some embodiments, initialization of an NF instance includes setting initial values of parameters of the NF instance. In some embodiments, for each of the one or more uninitialized NF instances, the information about the uninitialized NF instance indicates when the uninitialized NF instance can be or will be available by indicating when the uninitialized NF instance can be or will be initialized.

In some embodiments, the one or more unavailable NF instances include one or more unorchestrated NF instances that are unavailable because the one or more unorchestrated NF instances have not been orchestrated. In some embodiments, orchestration of an NF instance includes allocating resources for the NF instance. In some embodiments, for each of the one or more unorchestrated NF instances, the information about the unorchestrated NF instance indicates when the unorchestrated NF instance can be or will be available by indicating when the unorchestrated NF instance can be or will be orchestrated.

In some embodiments, the one or more unavailable NF instances include one or more overloaded NF instances that are unavailable because the one or more overloaded NF instances lack capacity. In some embodiments, for each of the one or more overloaded NF instances, the information about the overloaded NF instance indicates when the overloaded NF instance can be or will be available by indicating when the overloaded NF instance is expected to have capacity.

In some embodiments, for each of the one or more unavailable NF instances, the information about the unavailable NF instance includes a promise parameter that indicates when the unavailable NF instance can be or will be available by indicating a duration of time after which the unavailable NF instance is promised to be available.
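
Purely as an illustration, such a promise parameter could be carried as a relative-time field in the per-instance information, as in the hypothetical JSON-style snippet below; the field names are assumptions, not standardized attributes.

```python
# Hypothetical per-instance information carrying a promise parameter:
# a duration (in seconds) after which the unavailable instance is
# promised to be available. Field names are illustrative assumptions.
unavailable_instance_info = {
    "nf_instance_id": "smf-instance-7",
    "nf_type": "SMF",
    "status": "UNAVAILABLE",
    "promise_seconds": 120,  # promised to be available 120 s after the response
}
```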

In some embodiments, the method further comprises, for each of the one or more unavailable NF instances, retrieving the information that indicates when the unavailable NF instance can be or will be available from a distributed database. In one or more of these embodiments, the distributed database is a permissioned distributed ledger. In one or more of these embodiments, the distributed ledger includes one or more records for each of one or more vendors of the communication network. In some embodiments, a record for a vendor indicates, for each of one or more types of NFs, when the vendor can or will make an instance of the type of NF available. In one or more of these embodiments, the record for a vendor further indicates at least one or more types of NFs that the vendor is able to orchestrate. Alternatively, in one or more of these embodiments, the record for a vendor further indicates at least an endpoint to send a request for the vendor to orchestrate an NF instance. Alternatively, in one or more of these embodiments, the record for a vendor further indicates at least a geographical location at which the vendor orchestrates an NF instance. Alternatively, in one or more of these embodiments, the record for a vendor further indicates at least a cost to orchestrate an NF instance. In some embodiments, the cost is an amount of money or an amount of power.
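
A sketch of what a per-vendor record in such a distributed database might contain is given below; the field names, endpoint URL, and values are assumptions made for illustration, not a defined ledger schema.

```python
# Hypothetical vendor record as it might be stored in a permissioned
# distributed ledger. All field names, URLs, and values are assumptions.
vendor_record = {
    "vendor_id": "vendor-A",
    "nf_types": {
        "SMF": {
            "availability_promise_seconds": 300,  # when an SMF instance can be made available
            "orchestration_endpoint": "https://vendor-a.example/orchestrate",  # where to request orchestration
            "orchestration_location": "datacenter-stockholm",  # where the vendor orchestrates instances
            "cost": {"amount": 0.05, "unit": "USD/hour"},      # monetary cost to orchestrate
        },
        "UPF": {
            "availability_promise_seconds": 600,
            "orchestration_endpoint": "https://vendor-a.example/orchestrate",
            "orchestration_location": "datacenter-stockholm",
            "cost": {"amount": 15, "unit": "W"},               # cost expressed as power
        },
    },
}
```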

In some embodiments, the method further comprises training a model at the discovery service network equipment to predict when any given unavailable NF instance can be or will be available. The method further comprises, for each of the one or more unavailable NF instances, using the trained model to determine when the unavailable NF instance can be or will be available.
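
The passage above does not prescribe a particular model; as one assumed realization, the sketch below estimates time-to-availability from historical orchestration durations per vendor and NF type, using plain Python averaging rather than any specific machine learning framework.

```python
# Toy availability predictor: estimate how long until an unavailable NF
# instance can be available, from historical orchestration durations keyed
# by (vendor, NF type). Illustrative only; not the method's actual model.
from collections import defaultdict
from statistics import mean

class AvailabilityPredictor:
    def __init__(self):
        self.history = defaultdict(list)  # (vendor, nf_type) -> durations in seconds

    def observe(self, vendor: str, nf_type: str, duration_s: float) -> None:
        # Record how long it actually took to make an instance available
        self.history[(vendor, nf_type)].append(duration_s)

    def predict(self, vendor: str, nf_type: str, default_s: float = 600.0) -> float:
        # Predict time-to-availability; fall back to a default with no history
        samples = self.history[(vendor, nf_type)]
        return mean(samples) if samples else default_s

predictor = AvailabilityPredictor()
predictor.observe("vendor-A", "SMF", 250.0)
predictor.observe("vendor-A", "SMF", 310.0)
print(predictor.predict("vendor-A", "SMF"))  # 280.0 seconds until availability
```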

In some embodiments, the one or more input filter criterions include at least an NF type criterion specifying a set of one or more NF types. In some embodiments, an NF instance satisfies the NF type criterion if the NF instance is an instance of an NF that has any of the one or more NF types. Alternatively, the one or more input filter criterions include at least a location criterion specifying a set of one or more locations. In some embodiments, an NF instance satisfies the location criterion if the NF instance is deployed at any of the one or more locations. Alternatively, the one or more input filter criterions include at least a cost criterion specifying a maximum cost. In some embodiments, an NF instance satisfies the cost criterion if the cost of the NF instance is less than or equal to the maximum cost, wherein the maximum cost is a maximum amount of money or a maximum amount of power. Alternatively, the one or more input filter criterions include at least a vendor criterion specifying a set of one or more vendors. In some embodiments, an NF instance satisfies the vendor criterion if the NF instance is provided by any of the one or more vendors. Alternatively, the one or more input filter criterions include at least a capability criterion specifying a set of one or more capabilities. In some embodiments, an NF instance satisfies the capability criterion if the NF instance has any of the one or more capabilities.
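
The sketch below shows, with assumed profile and criterion field names, how a candidate NF instance could be checked against such input filter criteria; the standardized discovery query parameters are not reproduced here.

```python
# Illustrative check of an NF instance profile against input filter criteria.
# Profile and criterion field names are assumptions for this sketch.
def satisfies(profile: dict, criteria: dict) -> bool:
    if "nf_types" in criteria and profile["nf_type"] not in criteria["nf_types"]:
        return False  # NF type criterion: instance must have one of the listed types
    if "locations" in criteria and profile["location"] not in criteria["locations"]:
        return False  # location criterion: instance must be deployed at a listed location
    if "max_cost" in criteria and profile["cost"] > criteria["max_cost"]:
        return False  # cost criterion: cost (money or power) must not exceed the maximum
    if "vendors" in criteria and profile["vendor"] not in criteria["vendors"]:
        return False  # vendor criterion: instance must be provided by a listed vendor
    if "capabilities" in criteria and not set(criteria["capabilities"]) & set(profile["capabilities"]):
        return False  # capability criterion: instance must have at least one listed capability
    return True

profile = {"nf_type": "SMF", "location": "stockholm", "cost": 3,
           "vendor": "vendor-A", "capabilities": ["slicing"]}
print(satisfies(profile, {"nf_types": ["SMF"], "max_cost": 5}))  # True
```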

In some embodiments, the method further comprises, based on the request, discovering a set of one or more candidate NF instances that satisfy the one or more input filter criterions. The method further comprises selecting a candidate NF instance from the discovered set to provide a service to a service consumer. In some embodiments, the one or more unavailable NF instances comprise the selected candidate NF instance. In some embodiments, the response includes information about the selected candidate NF instance. In one or more of these embodiments, selecting a candidate NF instance from the discovered set comprises selecting the candidate NF instance as a function of how soon the candidate NF instance can be or will be available. Additionally or alternatively, selecting a candidate NF instance from the discovered set comprises selecting the candidate NF instance as a function of how soon the candidate NF instance is needed to be available. Additionally or alternatively, selecting a candidate NF instance from the discovered set comprises selecting the candidate NF instance as a function of a cost of the candidate NF instance. In one or more of these embodiments, the method further comprises training a reinforcement learning model to predict which candidate NF instance to select to yield a maximum reward specified in terms of NF instance availability, latency, and/or throughput. In some embodiments, selecting a candidate NF instance from the discovered set comprises selecting the candidate NF instance that yields a maximum reward according to the reinforcement learning model. In one or more of these embodiments, the method further comprises dynamically initiating orchestration of the selected candidate NF instance.
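
As an assumed sketch of the selection step, the function below prefers candidates that can be available before they are needed and, among those, the cheapest; a trained reinforcement learning policy, as mentioned above, could replace this hand-written ranking.

```python
# Illustrative candidate selection as a function of how soon a candidate can be
# available, how soon it is needed, and its cost. Field names are assumptions.
def select_candidate(candidates: list, needed_in_s: float) -> dict:
    def rank(c: dict) -> tuple:
        available_in = c.get("promise_seconds", 0.0)     # 0 => already available
        lateness = max(0.0, available_in - needed_in_s)  # penalty if it misses the deadline
        return (lateness, c.get("cost", 0.0), available_in)
    return min(candidates, key=rank)

candidates = [
    {"id": "smf-1", "promise_seconds": 0.0,   "cost": 9.0},  # available now, expensive
    {"id": "smf-2", "promise_seconds": 120.0, "cost": 2.0},  # available in 2 minutes, cheap
]
print(select_candidate(candidates, needed_in_s=300.0)["id"])  # smf-2
```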

In some embodiments, the discovery service network equipment implements an NF Repository Function, NRF, or a Service Communication Proxy, SCP.

Other embodiments herein include a method performed by network equipment configured to operate as a service consumer or as a proxy for a service consumer. The method comprises receiving a response to a request for discovery of one or more network function, NF, instances that satisfy one or more input filter criterions. The response includes information about one or more NF instances satisfying the one or more input filter criterions. The one or more NF instances satisfying the one or more input filter criterions include one or more unavailable NF instances that are unavailable. In some embodiments, for each of the one or more unavailable NF instances, the information about the unavailable NF instance indicates when the unavailable NF instance can be or will be available.

In some embodiments, the one or more unavailable NF instances include one or more uninitialized NF instances that are unavailable because the one or more uninitialized NF instances have not been initialized. In some embodiments, initialization of an NF instance includes setting initial values of parameters of the NF instance. In some embodiments, for each of the one or more uninitialized NF instances, the information about the uninitialized NF instance indicates when the uninitialized NF instance can be or will be available by indicating when the uninitialized NF instance can be or will be initialized.

In some embodiments, the one or more unavailable NF instances include one or more unorchestrated NF instances that are unavailable because the one or more unorchestrated NF instances have not been orchestrated. In some embodiments, orchestration of an NF instance includes allocating resources for the NF instance. In some embodiments, for each of the one or more unorchestrated NF instances, the information about the unorchestrated NF instance indicates when the unorchestrated NF instance can be or will be available by indicating when the unorchestrated NF instance can be or will be orchestrated.

In some embodiments, the one or more unavailable NF instances include one or more overloaded NF instances that are unavailable because the one or more overloaded NF instances lack capacity. In some embodiments, for each of the one or more overloaded NF instances, the information about the overloaded NF instance indicates when the overloaded NF instance can be or will be available by indicating when the overloaded NF instance is expected to have capacity.

In some embodiments, for each of the one or more unavailable NF instances, the information about the unavailable NF instance includes a promise parameter that indicates when the unavailable NF instance can be or will be available by indicating a duration of time after which the unavailable NF instance is promised to be available.

In some embodiments, the one or more input filter criterions include at least an NF type criterion specifying a set of one or more NF types. In some embodiments, an NF instance satisfies the NF type criterion if the NF instance is an instance of an NF that has any of the one or more NF types. Alternatively, the one or more input filter criterions include at least a location criterion specifying a set of one or more locations. In some embodiments, an NF instance satisfies the location criterion if the NF instance is deployed at any of the one or more locations. Alternatively, the one or more input filter criterions include at least a cost criterion specifying a maximum cost. In some embodiments, an NF instance satisfies the cost criterion if the cost of the NF instance is less than or equal to the maximum cost. In some embodiments, the maximum cost is a maximum amount of money or a maximum amount of power. Alternatively, the one or more input filter criterions include at least a vendor criterion specifying a set of one or more vendors. In some embodiments, an NF instance satisfies the vendor criterion if the NF instance is provided by any of the one or more vendors. Alternatively, the one or more input filter criterions include at least a capability criterion specifying a set of one or more capabilities. In some embodiments, an NF instance satisfies the capability criterion if the NF instance has any of the one or more capabilities.

In some embodiments, the method further comprises selecting an NF instance from the one or more NF instances to provide a service to the service consumer. In some embodiments, selecting the NF instance comprises selecting the NF instance as a function of how soon the NF instance can be or will be available. Additionally or alternatively, selecting the NF instance comprises selecting the NF instance as a function of how soon the NF instance is needed to be available. Additionally or alternatively, selecting the NF instance comprises selecting the NF instance as a function of a cost of the NF instance. In one or more of these embodiments, the method further comprises dynamically initiating orchestration of the selected NF instance. In one or more of these embodiments, the method further comprises training a reinforcement learning model to predict which NF instance to select to yield a maximum reward specified in terms of NF instance availability, latency, and/or throughput. In some embodiments, the method further comprises selecting, from the one or more NF instances, an NF instance to provide a service to the service consumer. In some embodiments, selecting the NF instance comprises selecting the NF instance that yields a maximum reward according to the reinforcement learning model.

Other embodiments herein include a method performed by vendor equipment of a vendor for a communication network. The method comprises determining, for each of one or more types of network functions, NFs, when the vendor can or will make an instance of the type of NF available. The method also comprises adding, to a distributed database, a record that indicates, for each of the one or more types of NFs, when the vendor can or will make an instance of the type of NF available.

In some embodiments, for each of the one or more types of NFs, the record indicates how long orchestration of an instance of the type of NF takes. Additionally or alternatively, for each of the one or more types of NFs, the record indicates when an instance of the type of NF is expected to have capacity for providing a service to a service consumer. Additionally or alternatively, for each of the one or more types of NFs, the record indicates a promise parameter indicating a duration of time after which an instance of the type of NF is promised to be available.

In some embodiments, the record further indicates, for each of the one or more types of NFs, at least an endpoint to send a request for the vendor to orchestrate an instance of the type of NF. Alternatively, the record further indicates, for each of the one or more types of NFs, at least a geographical location at which the vendor orchestrates an instance of the type of NF. Alternatively, the record further indicates, for each of the one or more types of NFs, at least a cost to orchestrate an instance of the type of NF. In some embodiments, the cost is an amount of money or an amount of power.

In some embodiments, the distributed database is a permissioned distributed ledger.

Other embodiments herein include discovery service network equipment in a communication network. The discovery service network equipment is configured to receive a request for discovery of one or more network function, NF, instances that satisfy one or more input filter criterions. The discovery service network equipment is also configured to, responsive to the request, transmit a response that includes information about one or more NF instances satisfying the one or more input filter criterions. In some embodiments, the one or more NF instances satisfying the one or more input filter criterions include one or more unavailable NF instances that are unavailable. In some embodiments, for each of the one or more unavailable NF instances, the information about the unavailable NF instance indicates when the unavailable NF instance can be or will be available.

In some embodiments, the discovery service network equipment is configured to perform the steps described above for discovery service network equipment.

Other embodiments herein include network equipment configured to operate as a service consumer or as a proxy for a service consumer. The network equipment is configured to receive a response to a request for discovery of one or more network function, NF, instances that satisfy one or more input filter criterions. In some embodiments, the response includes information about one or more NF instances satisfying the one or more input filter criterions. In some embodiments, the one or more NF instances satisfying the one or more input filter criterions include one or more unavailable NF instances that are unavailable. In some embodiments, for each of the one or more unavailable NF instances, the information about the unavailable NF instance indicates when the unavailable NF instance can be or will be available.

In some embodiments, the network equipment is configured to perform the steps described above for network equipment.

Other embodiments herein include vendor equipment of a vendor for a communication network. The vendor equipment is configured to determine, for each of one or more types of network functions, NFs, when the vendor can or will make an instance of the type of NF available. The vendor equipment is also configured to add, to a distributed database, a record that indicates, for each of the one or more types of NFs, when the vendor can or will make an instance of the type of NF available.

In some embodiments, the vendor equipment is configured to perform the steps described above for vendor equipment.

Other embodiments herein include a computer program comprising instructions which, when executed by at least one processor of discovery service network equipment, cause the discovery service network equipment to perform the steps described above for discovery service network equipment. Other embodiments herein include a computer program comprising instructions which, when executed by at least one processor of network equipment configured to operate as a service consumer or as a proxy for a service consumer, cause the network equipment to perform the steps described above for network equipment. Other embodiments herein include a computer program comprising instructions which, when executed by at least one processor of vendor equipment of a vendor for a communication network, cause the vendor equipment to perform the steps described above for vendor equipment. In one or more of these embodiments, a carrier containing the computer program is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.

Other embodiments herein include a system in a communication network. The system comprises discovery service network equipment configured to receive a request for discovery of one or more network function, NF, instances that satisfy one or more input filter criterions. The discovery service network equipment is also configured to, responsive to the request, transmit a response that includes information about one or more NF instances satisfying the one or more input filter criterions. In some embodiments, the one or more NF instances satisfying the one or more input filter criterions include one or more unavailable NF instances that are unavailable. In some embodiments, for each of the one or more unavailable NF instances, the information about the unavailable NF instance indicates when the unavailable NF instance can be or will be available. The system also comprises network equipment configured to operate as a service consumer or as a proxy for a service consumer. In some embodiments, the network equipment is configured to receive the response.

In some embodiments, the discovery service network equipment is configured to perform the steps described above for discovery service network equipment. Additionally or alternatively, the network equipment is configured to perform the steps described above for network equipment.

Other embodiments herein include discovery service network equipment in a communication network. The discovery service network equipment comprises communication circuitry and processing circuitry. The processing circuitry is configured to receive, via the communication circuitry, a request for discovery of one or more network function, NF, instances that satisfy one or more input filter criterions. The processing circuitry is also configured to, responsive to the request, transmit, via the communication circuitry, a response that includes information about one or more NF instances satisfying the one or more input filter criterions. In some embodiments, the one or more NF instances satisfying the one or more input filter criterions include one or more unavailable NF instances that are unavailable. In some embodiments, for each of the one or more unavailable NF instances, the information about the unavailable NF instance indicates when the unavailable NF instance can be or will be available.

In some embodiments, the processing circuitry is configured to perform the steps described above for discovery service network equipment.

Other embodiments herein include network equipment. The network equipment comprises communication circuitry and processing circuitry. The processing circuitry is configured to receive, via the communication circuitry, a response to a request for discovery of one or more network function, NF, instances that satisfy one or more input filter criterions. In some embodiments, the response includes information about one or more NF instances satisfying the one or more input filter criterions. In some embodiments, the one or more NF instances satisfying the one or more input filter criterions include one or more unavailable NF instances that are unavailable. In some embodiments, for each of the one or more unavailable NF instances, the information about the unavailable NF instance indicates when the unavailable NF instance can be or will be available.

In some embodiments, the processing circuitry is configured to perform the steps described above for network equipment.

Other embodiments herein include vendor equipment. The vendor equipment comprises communication circuitry and processing circuitry. The processing circuitry is configured to determine, for each of one or more types of network functions, NFs, when the vendor can or will make an instance of the type of NF available. The processing circuitry is also configured to add, to a distributed database, a record that indicates, for each of the one or more types of NFs, when the vendor can or will make an instance of the type of NF available.

In some embodiments, the processing circuitry is configured to perform the steps described above for vendor equipment.

Of course, the present invention is not limited to the above features and advantages. Indeed, those skilled in the art will recognize additional features and advantages upon reading the following detailed description, and upon viewing the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a block diagram of a communication network according to some embodiments.

Figure 2 is a block diagram of network function (NF) discovery according to some embodiments.

Figure 3 is a block diagram of NF discovery that exploits a distributed ledger according to some embodiments.

Figure 4 is a block diagram of a distributed ledger according to some embodiments.

Figures 5A-5D are call flow diagrams for NF discovery and orchestration according to some embodiments.

Figure 6 is a logic flow diagram of a method performed by discovery service network equipment according to some embodiments.

Figure 7 is a logic flow diagram of a method performed by network equipment according to some embodiments.

Figure 8 is a logic flow diagram of a method performed by vendor equipment according to some embodiments.

Figure 9 is a block diagram of discovery service network equipment according to some embodiments.

Figure 10 is a block diagram of network equipment according to some embodiments.

Figure 11 is a block diagram of vendor equipment according to some embodiments.

Figure 12 is a block diagram of a communication system in accordance with some embodiments.

Figure 13 is a block diagram of a user equipment according to some embodiments.

Figure 14 is a block diagram of a network node according to some embodiments.

Figure 15 is a block diagram of a host according to some embodiments.

Figure 16 is a block diagram of a virtualization environment according to some embodiments.

DETAILED DESCRIPTION

Figure 1 shows a communication network 10 according to some embodiments. The communication network 10 provides communication service to one or more communication devices 12, e.g., in the form of user equipment (UE). In one embodiment, for example, the communication network 10 is a 5G network that provides wireless communication service to one or more wireless communication devices.

The communication network 10 has a service-based architecture that leverages service-based interactions between network functions (NFs). An NF provides a service to other authorized NFs that consume that service. An NF may thereby take on a producer role as a provider of a service (NF service producer) and/or a consumer role as a consumer of a service (NF service consumer). Different types of NFs provide and/or consume different types of services. In a 5G network, for instance, different types of NFs in the control plane include an access and mobility management function (AMF), a session management function (SMF), a policy control function (PCF), an authentication server function (AUSF), a unified data management (UDM) function, etc.

Each NF may be implemented by network equipment either as a network element on dedicated hardware, as a software instance running on dedicated hardware, or as a virtualized function instantiated on an appropriate platform, e.g., on a cloud infrastructure. The communication network 10 may implement one or more instances of each type of NF. Different NF instances may provide a service for different consumers of that service, e.g., in order to balance the load across the different NF instances.

In this context, Figure 1 shows network equipment 16 that is configured to operate as a service consumer or as a proxy for a service consumer (e.g., implementing a Service Communication Proxy, SCP). The network equipment 16 uses discovery service network equipment 14 to discover NF instance(s). The network equipment 16 as shown in this regard transmits a discovery request 18 to the discovery service network equipment 14. The discovery request 18 is a request for discovery of NF instance(s) that satisfy one or more input filter criterions 20, e.g., as included in the discovery request 18. The input filter criterion(s) 20 may for instance include an NF type criterion specifying a set of one or more NF types, in which case an NF instance satisfies the NF type criterion if the NF instance is an instance of an NF that has any of the NF type(s) in the set.

In response to the discovery request 18, the discovery service network equipment 14 attempts to discover NF instance(s) that satisfy the input filter criterion(s) 20. The discovery service network equipment 14 then transmits a response 22 to the discovery request 18, indicating the results (if any) of the discovery attempt. The response 22 in particular includes information 24 about discovered NF instance(s) 26 (if any) that satisfy the input filter criterion(s) 20. The information 24 about a discovered NF instance may for example include, or be embodied within, a profile for the discovered NF instance. In these and other embodiments, the information 24 about a discovered NF instance may indicate an identifier of the NF instance, a type of the NF instance, a status of the NF instance, network slice information for the NF instance, one or more Internet Protocol (IP) addresses of the NF instance, or other information describing the NF instance. Regardless, if more than one NF instance 26 is discovered, the network equipment 16 may use the information 24 in the response 22 to determine which of the discovered NF instances 26 to select. In this sense, then, the discovered NF instance(s) 26 may constitute candidate(s) for selection by the network equipment 16.
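
For illustration only, the information 24 about one discovered NF instance could resemble the profile-like structure sketched below; the field names are assumptions in the spirit of an NF profile, and the standardized profile carries many more attributes.

```python
# Illustrative, profile-like information about one discovered NF instance as it
# might appear in the discovery response 22. Field names and values are assumed.
discovered_instance = {
    "nf_instance_id": "smf-instance-1",   # identifier of the NF instance
    "nf_type": "SMF",                     # type of the NF instance
    "status": "REGISTERED",               # status of the NF instance
    "slice_info": [{"sst": 1}],           # network slice information
    "ipv4_addresses": ["10.0.0.5"],       # IP address(es) of the NF instance
}
```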

In some embodiments, the discovered NF instance(s) 26 include available NF instance(s) 26A that satisfy the input filter criterion(s) 20 and that are available. In one such embodiment, an available NF instance 26 is available in the sense that the NF instance is available to provide a service to an NF service consumer. This may mean, for example, that the NF instance has been instantiated and/or initialized. Instantiation here may refer to the creation of the NF instance, and initialization may include setting initial values of parameters of the NF instance. Alternatively or additionally, an NF instance is available to provide a service if the NF instance has the capacity to do so, e.g., in terms of enough processing and/or memory resources available to provide the service, given the current load on the NF instance.

Notably, the discovered NF instance(s) 26 herein may alternatively or additionally include unavailable NF instance(s) 26U that satisfy the input filter criterion(s) 20 but that are unavailable. The unavailable NF instance(s) 26U may for example be unavailable because the NF instance(s) 26U have not yet been instantiated and/or initialized, or because the NF instance(s) 26U are currently overloaded and therefore lack the capacity to provide a service. Accordingly, even though unavailable NF instance(s) 26U are unavailable to provide a service according to the discovery request 18, the discovery service network equipment 14 herein nonetheless includes information 24 about the unavailable NF instance(s) 26U in the response 22 to the discovery request 18. The discovery service network equipment 14 may do so in anticipation that the unavailable NF instance(s) 26U can be or will be available in the future, e.g., meaning that the unavailable NF instance(s) 26U may still represent viable candidate(s) for selection by the network equipment 16.

In fact, in some embodiments, the response 22 to the discovery request 18 includes availability timing information 28 for the unavailable NF instance(s) 26U. For each of the unavailable NF instance(s) 26U, the availability timing information 28 indicates when the unavailable NF instance 26U can be or will be available, e.g., in terms of when the unavailable NF instance 26 can be or will be instantiated or initialized or when the unavailable NF instance 26 is expected to have capacity. This way, the network equipment 16 can take the timing of an NF instance’s availability into account when deciding whether or not to select that NF instance. To bolster the network equipment’s confidence in this regard, the availability timing information 28 in some embodiments may rise to the level of a promise of when an unavailable NF instance 26U will be available. In one such embodiment, the availability timing information 28 for an unavailable NF instance 26U constitutes a promise parameter that indicates a promise of when the unavailable NF instance 26U will be available, e.g., by indicating a duration of time after which the unavailable NF instance 26U is promised to be available.

As shown in Figure 1 , for example, an unavailable NF instance 26U is unavailable as of a time T1 when the discovery service network equipment 14 receives the discovery request 18. The discovery service network equipment 14 nonetheless determines that the unavailable NF instance 26U can be or will be available at time T3. The discovery service network equipment 14 correspondingly transmits a response 22 to the discovery request 18 at time T2, and includes in the response 22 a promise parameter indicating a duration D of time after which the unavailable NF instance 26U is promised to be available. In the example of Figure 1 , this duration D of time is equal to T3 minus T2, such that the response 22 indicates the time T3 when the unavailable NF instance 26U is promised to be available, relative to the time T2 when the response 22 is transmitted.
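
Purely as an illustrative sketch (not part of any claimed embodiment), the relationship between the response time T2, the promised duration D, and the promised availability time T3 can be expressed in a few lines of Python; the field name promise_seconds is an assumption made here for illustration only:

```python
from datetime import datetime, timedelta, timezone

def build_promise(t3_available: datetime, t2_response: datetime) -> dict:
    """Compute the promised duration D = T3 - T2 carried in the response.

    t3_available: time at which the unavailable NF instance is promised to be available (T3)
    t2_response:  time at which the discovery response is transmitted (T2)
    """
    duration = t3_available - t2_response          # D = T3 - T2
    return {"promise_seconds": int(duration.total_seconds())}

def promised_available_at(t2_response: datetime, promise: dict) -> datetime:
    """Recover T3 at the consumer side: T3 = T2 + D."""
    return t2_response + timedelta(seconds=promise["promise_seconds"])

# Example: response transmitted at T2, instance promised 90 seconds later.
t2 = datetime(2022, 10, 28, 12, 0, 0, tzinfo=timezone.utc)
t3 = t2 + timedelta(seconds=90)
p = build_promise(t3, t2)
assert promised_available_at(t2, p) == t3
```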

The discovery service network equipment 14 thereby accommodates the unavailability of discovered NF instance(s) to fulfill the discovery request 18. Rather than simply responding to the discovery request 18 with few or no NF instances that are available to fulfill the discovery request 18, the discovery service network equipment 14 may respond to the discovery request 18 with information 28 about when unavailable NF instance(s) 26U can be or will be available. The discovery service network equipment 14 according to embodiments herein may therefore fulfill the discovery request 18 on the basis of the communication network’s ability to make NF instance(s) 26U available in the future, rather than only on the basis of the current availability of NF instance(s).

By fulfilling discovery requests in this way, some embodiments herein prove particularly effective for facilitating dynamic orchestration of NF instance(s) on-the-fly as needed to meet varying demand. This in turn relieves the communication network 10 from having to pre-provision NF instances on the basis of expected demand. Some embodiments herein therefore advantageously promote more sustainable use of computational/memory resources and expense, without compromising discovery request fulfillment and communication network performance.

Figure 1 shows one example in a context where NF instance(s) may be dynamically orchestrated, e.g., as virtualized function(s) instantiated on cloud infrastructure(s). As shown, the communication network 10 has one or more vendors 30-1...30-M, generally referred to as vendor(s) 30. The vendor(s) 30 are each vendors of NF instance(s) for the communication network 10. In one embodiment, for example, a vendor provides the communication network 10 with NF instance(s) in the form of virtualized functions instantiated on the vendor’s cloud infrastructure. In fact, the vendor(s) 30 in some embodiments may dynamically orchestrate such NF instance(s) for the communication network 10 on-the-fly, as the need for those NF instance(s) arises, rather than having to pre-provision as many NF instance(s) as are expected to be needed.

In these and other embodiments, then, an NF instance may or may not be available at any given time, e.g., at least from a certain vendor. An NF instance may be unavailable from a certain vendor, for example, if the NF instance has not yet been orchestrated by the vendor, e.g., where orchestration of the NF instance includes the vendor allocating resources for the NF instance. Indeed, in embodiments where a vendor 30 dynamically orchestrates an NF instance responsive to demand, the vendor 30 may wait to orchestrate the NF instance until an NF instance of that type is requested.

In this context, the discovery service network equipment 14 according to some embodiments may include in the response 22 to a discovery request 18 information 24 about NF instance(s) 26U that are unavailable because the NF instance(s) 26U have not been orchestrated. And this information 24 may include availability timing information 28 that indicates when the NF instance(s) 26U can be or will be orchestrated.

In some embodiments, the timing of when the NF instance(s) 26U can be or will be orchestrated is effectively reported by the vendor(s) 30-1 ...30-M to the discovery service network equipment 14. In one embodiment, for example, each vendor 30 reports, for each of one or more types of NFs, when the vendor 30 can or will orchestrate an instance of the type of NF, e.g., in terms of a duration of time needed by the vendor 30 to orchestrate an instance of that NF type. The discovery service network equipment 14 can then relay that timing to the network equipment 16 in the response 22 to a discovery request 18.

Some embodiments herein exploit a distributed database 32 for this purpose. As shown in Figure 1 , the distributed database 32 is distributed at least in part between the discovery service network equipment 14 and vendor(s) 30-1 ...30-M. In some embodiments, the distributed database 32 is a permissioned distributed ledger. A permissioned distributed ledger as used herein is a consensus of replicated, shared, and synchronized data geographically spread across multiple sites. That is, the data is shared, at least in part, across the multiple sites in order that the data be synchronized and replicated at each of the sites, e.g., according to a consensus protocol. A permissioned distributed ledger may for instance be implemented in the form of a blockchain, e.g., where records are appended to the blockchain in blocks, with each block containing a cryptographic hash of the previous block. In these and other embodiments, the permissioned distributed ledger (or simply, distributed ledger for short) may be immutable, e.g., in the practical sense that changing the consensus of data would require extreme computational effort and collaboration. Regardless of the particular form of the distributed database 32, the distributed database 32 may be permissioned in the sense that the protocol for forming the consensus is controlled by select participant(s) that have permission to do so, e.g., based on a proof-of-stake protocol.
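
The following toy sketch illustrates only the hash-chaining property mentioned above; it is a simplification made for illustration that omits consensus, permissioning, and replication entirely, and the record fields shown are hypothetical:

```python
import hashlib
import json

def append_block(chain: list, record: dict) -> list:
    """Append a record to a toy hash-chained ledger.

    Each block stores the record plus the SHA-256 hash of the previous block,
    so altering any earlier block changes every later hash.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

ledger: list = []
append_block(ledger, {"vendor_id": "vendor-1", "nf_type": "AMF", "promise_seconds": 120})
append_block(ledger, {"vendor_id": "vendor-2", "nf_type": "UPF", "promise_seconds": 300})
```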

In any event, in the example of Figure 1, the distributed database 32 in some embodiments is a consensus of replicated, shared, and synchronized data geographically spread across multiple sites that include the discovery service network equipment 14 and the vendor(s) 30. And the data that is replicated, shared, and synchronized includes availability timing information 28 for NF instance(s) from the vendor(s) 30.

In these and other embodiments, the vendor(s) 30 include respective vendor equipment 34-1 ...34-M, generally referred to as vendor equipment 34. Each vendor’s vendor equipment 34 adds record(s) to the distributed database 32 indicating when the vendor can or will make NF instance(s) of certain types available. The record(s) may for example indicate, for each of one or more types of NFs, how long orchestration of an instance of that type of NF takes, when an instance of that type of NF is expected to have capacity for providing a service, and/or a promise parameter indicating a duration of time after which an instance of that type of NF is promised to be available.

Generally, then, vendor equipment 34-1 for vendor 30-1 determines, for each of one or more types of NFs, when the vendor 30-1 can or will make an instance of that type of NF available, e.g., by orchestrating and initializing that type of NF or by freeing capacity for that type of NF. The vendor equipment 34-1 then adds, to the distributed database 32, a record that indicates, for each of the one or more types of NFs, when the vendor 30-1 can or will make an instance of that type of NF available. Figure 1 in this regard shows that such record(s) of the distributed database 32 may indicate availability timing information 28-1...28-X for respective NF instance types 36-1...36-X. Vendor equipment 34-M for vendor 30-M may likewise add record(s) to the distributed database 32 for indicating, for each of one or more types of NFs, when the vendor 30-M can or will make an instance of that type of NF available.
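
A minimal sketch of such a vendor record, assuming a simple JSON-like structure; the field names vendor_id, nf_type, and promise_seconds are illustrative assumptions rather than any standardized format:

```python
def make_vendor_record(vendor_id: str, timings: dict) -> dict:
    """Build a record announcing, per NF type, when this vendor can make an instance available.

    timings maps an NF type (e.g., "AMF") to a promised duration in seconds after
    which an instance of that type is promised to be available.
    """
    return {
        "vendor_id": vendor_id,
        "availability": [
            {"nf_type": nf_type, "promise_seconds": seconds}
            for nf_type, seconds in timings.items()
        ],
    }

# Vendor 30-1's equipment 34-1 could add a record like this to the distributed database.
record = make_vendor_record("vendor-1", {"AMF": 120, "SMF": 180, "UPF": 300})
```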

In these and other embodiments, the discovery service network equipment 14 consults the distributed database 32 upon receiving a discovery request 18. For each unavailable NF instance 26U to be indicated in the response 22 to the discovery request 18, the discovery service network equipment 14 retrieves availability timing information 28 for that type of NF from the distributed database 32, indicating when the unavailable NF instance 26 can be or will be available. If the discovery request 18 requests discovery of a certain type of NF, for example, the discovery service network equipment 14 retrieves availability timing information that each vendor 30-1 ...30-M has recorded in the distributed database 32 for that certain type of NF. The response 22 may thereby indicate availability timing information for respective vendors, so as to indicate when each vendor can or will make an instance of that certain type of NF available.
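
Continuing the same illustrative assumptions about the record format, a sketch of how the discovery service network equipment might collect per-vendor availability timing for one requested NF type could look as follows:

```python
def availability_for_type(records: list, nf_type: str) -> dict:
    """Collect, per vendor, the promised availability timing recorded for one NF type.

    records is the list of vendor records read from the distributed database; each
    record is assumed to have the shape {"vendor_id": ..., "availability": [...]}.
    """
    timing_by_vendor = {}
    for record in records:
        for entry in record.get("availability", []):
            if entry["nf_type"] == nf_type:
                timing_by_vendor[record["vendor_id"]] = entry["promise_seconds"]
    return timing_by_vendor

records = [
    {"vendor_id": "vendor-1", "availability": [{"nf_type": "AMF", "promise_seconds": 120}]},
    {"vendor_id": "vendor-2", "availability": [{"nf_type": "AMF", "promise_seconds": 300}]},
]
print(availability_for_type(records, "AMF"))  # {'vendor-1': 120, 'vendor-2': 300}
```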

Provided with such a response 22, the network equipment 16 can select which of the discovered NF instance(s) 26 is to provide the service to the service consumer, based on how soon each of the unavailable NF instance(s) 26U can be or will be available. The selection may accordingly also depend on how soon the service consumer needs to be provided with the service, i.e., how soon the NF instance is needed to be available. In one embodiment, upon selection of an unavailable NF instance 26U, the network equipment 16 dynamically initiates orchestration of that unavailable NF instance 26U. The network equipment 16 may do so for example by sending an orchestration request to the vendor 30 that is to provide the selected NF instance 26U, e.g., where the endpoint for sending the orchestration request is indicated in the distributed database 32.
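
As a hedged illustration of this timing-based selection, assuming each candidate carries the hypothetical promise_seconds field used above, the network equipment might first discard candidates whose promised availability misses the consumer's deadline:

```python
def viable_candidates(candidates: list, needed_in_seconds: int) -> list:
    """Keep only candidates whose promised availability meets the consumer's deadline.

    Each candidate is assumed to carry a "promise_seconds" field (absent or 0 for
    instances that are already available).
    """
    return [
        c for c in candidates
        if c.get("promise_seconds", 0) <= needed_in_seconds
    ]

candidates = [
    {"vendor_id": "vendor-1", "promise_seconds": 120},
    {"vendor_id": "vendor-2", "promise_seconds": 600},
]
print(viable_candidates(candidates, needed_in_seconds=300))  # only vendor-1 qualifies
```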

In other embodiments, though, the discovery service network equipment 14 itself selects which discovered NF instance(s) 26 is to provide the service to the service consumer. The discovery service network equipment 14 may perform this selection in a way similar to that described above for the network equipment 16, e.g., based on how soon each of the unavailable NF instance(s) 26U can be or will be available and/or how soon the service consumer needs to be provided with the service. In this case, then, the response 22 includes information about the selected NF instance 26. Moreover, the discovery service network equipment 14 may be the entity that dynamically initiates orchestration of the selected NF instance.

Some embodiments exploit machine learning to select which discovered NF instance is to provide the service to the service consumer. For example, one embodiment uses reinforcement learning for the selection, as performed by the network equipment 16 or the discovery service network equipment 14. In this case, the network equipment 16 or the discovery service network equipment 14 trains a reinforcement learning model to predict which candidate NF instance to select to yield a maximum reward, e.g., specified in terms of NF instance availability, latency, and/or throughput. Selection thereby involves selecting the candidate NF instance that yields a maximum reward according to the trained reinforcement learning model.

Note, too, that although some embodiments exploit a distributed database 32 for determining when any given unavailable NF instance can be or will be available, other embodiments herein exploit machine learning for this purpose. For example, in some embodiments, the discovery service network equipment 14 uses machine learning to train a model at the discovery service network equipment 14 to predict when any given unavailable NF instance 26U (of a certain type) can be or will be available. Then, for each of the unavailable NF instance(s) 26U discovered, the discovery service network equipment 14 uses the trained model to determine when the unavailable NF instance 26U can be or will be available. The response 22 to the discovery request 18 may then indicate this determination from the machine learning model.

Note further that, although embodiments above were described with a focus on availability timing information for unavailable NF instance(s) 26U, the discovery and/or selection of NF instance(s) may be based on one or more other factors or criterions as well. In some embodiments, for example, each vendor 30 may add record(s) to the distributed database 32 indicating, e.g., on an NF type by NF type basis, a geographical location at which the vendor 30 orchestrates an NF instance and/or a cost to orchestrate an NF instance. Here, the cost may be an amount of money or an amount of power/energy. In this case, then, the discovery and/or selection of NF instance(s) may be based not only on availability timing information but also on NF instance geographical location and/or cost.

For example, in one embodiment, the input filter criterion(s) 20 may include an NF type criterion, a location criterion, a cost criterion, a vendor criterion, or any combination thereof. Here, the NF type criterion specifies a set of one or more NF types, where an NF instance satisfies the NF type criterion if the NF instance is an instance of an NF that has any of the one or more NF types. The location criterion specifies a set of one or more locations, where an NF instance satisfies the location criterion if the NF instance is deployed at any of the one or more locations. The cost criterion specifies a maximum cost, where an NF instance satisfies the cost criterion if the cost of the NF instance is less than or equal to the maximum cost, e.g., in terms of a maximum amount of money or a maximum amount of power. And the vendor criterion specifies a set of one or more vendors, where an NF instance satisfies the vendor criterion if the NF instance is provided by any of the one or more vendors. In another embodiment, the input filter criterion(s) 20 may also include a capability criterion specifying a set of one or more capabilities, where an NF instance satisfies the capability criterion if the NF instance has any of the one or more capabilities. In any event, the discovery service network equipment 14 may discover NF instance(s) 26 that meet such input filter criterion(s) 20.
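
A minimal sketch of how such criteria might be evaluated against an NF instance description, assuming illustrative field names (nf_types, locations, max_cost, vendors, capabilities) that are not defined by the embodiments themselves:

```python
def satisfies(nf: dict, criteria: dict) -> bool:
    """Check an NF instance description against the input filter criteria.

    criteria may contain "nf_types", "locations", "max_cost", "vendors", and
    "capabilities"; any criterion that is absent is treated as satisfied.
    """
    if "nf_types" in criteria and nf["nf_type"] not in criteria["nf_types"]:
        return False
    if "locations" in criteria and nf["location"] not in criteria["locations"]:
        return False
    if "max_cost" in criteria and nf["cost"] > criteria["max_cost"]:
        return False
    if "vendors" in criteria and nf["vendor_id"] not in criteria["vendors"]:
        return False
    if "capabilities" in criteria and not set(criteria["capabilities"]) & set(nf["capabilities"]):
        return False
    return True

nf = {"nf_type": "UPF", "location": "Stockholm, Sweden", "cost": 5,
      "vendor_id": "vendor-1", "capabilities": ["v2-api"]}
print(satisfies(nf, {"nf_types": ["UPF"], "max_cost": 10}))  # True
```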

Alternatively or additionally, the discovery service network equipment 14 or network equipment 16 may perform NF instance selection based on NF instance geographical location and/or cost. The selection may for instance favor an NF instance that has the lowest cost and/or that is to be deployed at the location closest to the service consumer.

Consider now an example of some embodiments herein in the context of a 5G network. In 5G, the core network has a service-based architecture, which is broken down into communicating services known as Network Function (NF) services. These services can be hosted in any cloud infrastructure, either closer to the edge/radio access network or farther from the edge in a centralized cloud.

As shown in Figure 2, a Network Repository Function (NRF) 40 provides an NF discovery and selection service for NF services. In this way, any NF can discover and select services offered by other NFs. More particularly, NF producers (NFp) 44 perform NF registration (Step 1) with the NRF 40. Thereafter, a consumer NF (NFc) 42, such as a Policy and Charging Control node, performs NF discovery (Step 2) with the NRF 40 to discover desired NFp(s). As part of this NF discovery, the NFc 42 transmits a discovery request to the NRF 40. The discovery request may contain a list of parameters that include the type or instance name of the NF to discover, as well as network slice related identifiers (e.g., NSSI) and service parameters (e.g., a list of features to be supported). In response, the NRF 40 returns a list of candidate NF producers (NFp) 44. The NFc 42 next performs NF selection (Step 3) to select one of the candidate NF producers 44 to provide the service to the NFc 42.

In this 5G context shown in Figure 2, then, the NRF 40 may be implemented by the discovery service network equipment 14 in Figure 1 , the NFc 42 may be implemented by network equipment 16, the discovery request that the NFc 42 sends to the NRF 40 exemplifies the discovery request 18 in Figure 1 , and the response that the NRF 40 returns to the NFc 42 exemplifies the response 22 in Figure 1 .

Examples of NFs using the NRF 40 to discover and use services of other NFs are the following. The Network Exposure Function (NEF) makes use of services from the access and mobility management function (AMF) and the Unified Data Management (UDM) to expose mobility services to third parties (for example user equipment (UE) loss of connectivity, UE reachability, etc.). As another example, the User Plane Function (UPF) makes use of services from the Session Management Function (SMF), which provides rules for filtering traffic. These rules contain means for filtering specific applications, using service data flow (SDF) filters or 3-tuple <protocol, server IP address, port number> Packet Flow Description (PFD) filters. The rules may also contain quality of service (QoS) information such as guaranteed bitrate, priority, latency ceiling, acceptable packet drop rate, etc.

Note, too, that the NRF 40 is also responsible for identifying the health of registered NFps (see TS 129 510 V15.1.0). Each NFp 44 contacts the NRF 40 periodically to demonstrate it is still functioning properly. This is achieved by means of an “update” functionality called NFUpdate, which updates the parameters of the network function, also known as its profile (or NFProfile). If the NRF 40 does not receive an update for an amount of time longer than the heartbeat interval, then it marks the NF as suspended. In some embodiments, the NRF 40 nonetheless keeps the NF discoverable, despite the NF being suspended and therefore unavailable.

Communication between an NFc 42 and an NFp 44 can be direct (i.e., the former directly reaching the latter), or, as of 3GPP Release 16, indirect, meaning that a node can mediate the communication. This node is known as a Service Communication Proxy (SCP). The SCP can be used to forward the selection made by an NFc 42 to an NFp 44, but it can also choose an NFp on behalf of an NFc. The idea with delegating selection of the NFp 44 to the SCP is that the SCP can offer functionality such as load balancing and failover, but it can also function as an interoperability bridge between different vendors (e.g., in case NFps are hosted by another vendor). Enforcement of signaling policies and monitoring of NF use are other reasons for SCP involvement.

In this context, some embodiments herein are applicable in cases where NFps may not always be available for NFcs, or where maintaining NFc connections to, and uptime of, NFps is costly. Note here that availability does not only pertain to the presence of an NFp to serve an NFc, but also to whether the NFp can fulfill the quality of service (QoS) an NFc may be requesting (for example throughput, latency, location, etc.). One example case is where resource-constrained (e.g., edge/private 5G) cloud datacenters host NFps for mission-critical communications applications. Another example case is where there are connectivity issues in reaching NFps, such as where NFps are accessed through or hosted by non-terrestrial networks, where connection availability is intermittent or resources are constrained, or where NFps are hosted in public clouds with an unmanaged transport connection towards the NFc (e.g., over the internet). Yet another example is where NFps are hosted in multiple edge clouds, e.g., a multinational large manufacturing corporation maintaining many private 5G mobile networks. In such cases there may exist several geographically distributed UPFs routing traffic locally. Maintaining connectivity and uptime of all these nodes may be an expensive administrative task to do manually.

In this context and/or these use cases, according to some embodiments, the NRF 40 has at least some control, influence, or impact over NFp management and orchestration (MANO), e.g., for orchestrating NFps in real-time. In some embodiments, for example, the NFc 42 requests eligible NFps 44 from the NRF 40. The NRF 40 sends NFp descriptions to the NFc 42, together with a newly introduced promise parameter, which is embedded into the description of the NFp (i.e., it is part of the NFProfile). This parameter indicates a future point in time, and specifically when an NFp 44 can be made available by the NRF 40 (i.e., how much time it takes for the NFp to be orchestrated), so that the NFc 42 can also decide whether to select the NFp or not.

In some embodiments, the NRF’s setting of the value of the “promise” parameter is based on two constituents: first, on a multi-layered distributed ledger, as an example of the distributed database 32 in Figure 1, whose topmost layer registers agreements between mobile operators and cloud providers; and second, on a machine learning component that learns the time required to orchestrate an NFp 44 on a cloud provider. In one embodiment, another “enterprise” layer sits on top of the distributed ledger. This layer effectively captures private 5G deals between enterprise customers and mobile operators, allowing another level of integration and reduced time to bootstrap mobile networks. This embodiment may be focused on private 5G deployment.

Certain embodiments may provide one or more of the following technical advantage(s). Some embodiments provide more sustainable use of computational resources on a “per-need” basis rather than pre-provisioning of NFps. Alternatively or additionally, some embodiments create new revenue streams for connectivity and cloud providers. Moreover, some embodiments can function in tandem with existing implementations, since an NRF can, on NFc request, send NFps that are yet to be orchestrated together with NFps with already running NFInstances, and legacy NFcs and/or SCPs can ignore the former.

Figure 3 illustrates the block components of the system according to some embodiments. As shown, a distributed ledger stores data shared amongst mobile network operators (illustrated as operators in Figure 3), compute vendors (e.g., public cloud vendors such as Amazon, Microsoft, and Google), and optionally enterprise customers. The distributed ledger captures information about agreements between compute vendors and operators, as well as (optionally) between operators and enterprise customers. This distributed ledger exemplifies the distributed database 32 in Figure 1 and has properties that allow participating entities with different administrative domains to trust each other when sharing their data. The distributed ledger in particular has the properties of immutability, consensus, and replicability. Replicability means that each participating entity has the same copy of the ledger. Immutability means that no data can be removed from the ledger. Consensus means that for data to be inserted, each participating entity needs to agree. The ledger is therefore preferable for environments where entities sharing data do not necessarily trust each other.

Figure 3 also shows one or more NRFs 40, which can provide information about NFps to requesting NFcs. The NRF 40 logically can belong to different administrative entities. It is for example possible that multiple NRFs exist for the same operator, or that there is one NRF for one operator, or one NRF for multiple operators. The NRF 40 has access to the distributed ledger on behalf of the operator or operators it belongs to. Given a service discovery request by an NFc 42, the NRF 40 first consults the ledger to find out which cloud vendors the operator has an agreement with, and then it provides an estimate of when the NFp will be available, based on the resources available at the cloud vendor and the time required to deploy and potentially synchronize the NF instance. In one embodiment, the estimate is based on a machine learning algorithm, which, based on several parameters, learns to estimate the time required by each cloud vendor for a requested NF instance to become available. This time is communicated as a promise parameter back to the requesting NFc 42. The promise parameter is embedded with another set of NFp characteristics in the NFProfile.

Figure 3 furthermore shows one or more NFcs 42, which request services of NFps 44 in the form of NF service discovery requests to the NRF 40. The NRF 40 provides NFp information as stated above, and subsequently the NFc 42 selects which of the NFps to access. The selection process may be based on several parameters, such as location and cost, but also on an assessment by the NFc 42 of when it will require the services of the NF instance provided by the NFp 44, and a comparison of this assessment with the promise parameter sent by the NRF 40. The selection process can also be direct, or indirect with the help of the SCP. The latter SCP-based selection is not illustrated in Figure 3.

Figure 4 illustrates an exemplary structure of the distributed database, as a distributed ledger and specifically a blockchain. There exist three different layers. The topmost layer, the operator layer, lists the operators participating in the blockchain. Every operator is identified by its unique Home Network Identifier (HNI), consisting of a Mobile Country Code (MCC) and Mobile Network Code (MNC). A status field indicates whether the operator is active or inactive, as one operator may choose to enter or leave the system at any point (and previous blocks cannot be removed).

In the middle tier, cloud vendor participation is recorded. Parameters of all cloud vendors may include the identity of the compute vendor, e.g., a tax authority's unique identity (företagsnummer in Sweden). This may be referred to as the VendorId. Other parameters may include the cost of the compute service, shown here in monetary units (MUs) per NF orchestrated, but the cost can also be expressed, in combination or in isolation, in other terms, e.g., the cost of power consumption. The parameters may additionally or alternatively include the type of network function or network functions served by the partnership. This is a list of NFTypes. Alternatively or additionally, the parameters include the location of the compute service, e.g., expressed in terms of geographical coordinates, or in fuzzy terms (e.g., “Athens, Greece” or “Stockholm, Sweden”). Alternatively or additionally, the parameters may include the orchestration endpoint of the compute vendor, which is the Management and Orchestration (MANO) interface towards the cloud vendor. Through this endpoint, it is possible to initialize orchestration of one or more NFInstances of a specific NFType and the NFp 44. This may for example be an IP address and port, together in some embodiments with credentials to access the endpoint. The parameters in some embodiments also include an optional promise parameter indicating the time it takes for the cloud vendor to orchestrate a function of a specific NFType. The promise parameter is used in some embodiments for deciding which cloud vendor/NFp to choose out of a plurality of NFps.

Both the operators of the topmost tier and the cloud vendors of the middle tier have pointers to partnerships, the lower tier. The partnership tier contains the type of NFps that are orchestrated by cloud vendors. A timestamp and status block indicates when the partnership started. NFType indicates the type of NF orchestrated, and the endpoint indicates an address at which the NFp can be contacted. Although the operator blocks and cloud vendor blocks are shown in different layers, in other embodiments the operator blocks and cloud vendor blocks exist in the same layer.
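
For illustration only, the three tiers described above might be represented as follows; the field names and values are assumptions chosen to mirror the description, not the actual block layout of Figure 4:

```python
# A toy representation of the three tiers described above; field names are
# illustrative and not the patent's exact block layout.
operator_block = {
    "hni": {"mcc": "240", "mnc": "99"},      # Home Network Identifier (MCC + MNC)
    "status": "active",
    "partnerships": ["partnership-1"],
}

cloud_vendor_block = {
    "vendor_id": "SE-5566778899",            # e.g., a national company registration number
    "cost_per_nf": {"monetary_units": 10},
    "nf_types": ["AMF", "SMF", "UPF"],
    "location": "Stockholm, Sweden",
    "orchestration_endpoint": {"ip": "192.0.2.10", "port": 8443},
    "promise_seconds": {"AMF": 120, "UPF": 300},
    "partnerships": ["partnership-1"],
}

partnership_block = {
    "id": "partnership-1",
    "timestamp": "2022-10-28T12:00:00Z",
    "status": "active",
    "nf_type": "UPF",
    "endpoint": "203.0.113.5:8080",
}
```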

Consider now additional details of the discovery process. Figures 5A-5D illustrate the sequence diagram for the NF discovery and NFp orchestration process according to some embodiments. The discovery process involves the NFc 42 issuing a discovery request and the NRF 40 sending a discovery response.

In particular, the process begins by the NFc 42 sending a service discovery request to the NRF 40. The service discovery request may also be referred to simply as a discovery request, and exemplifies the discovery request 18 in Figure 1. As shown, the service discovery request is sent as an HTTP GET request. In any event, the service discovery request as shown includes a parameter NFType, indicating the type of NFp (e.g., AMF, UDM, etc.). In some embodiments, the service discovery request further includes either a limit or a tuple of page-number, page-size parameters indicating the number of NFps 44 to be returned by the NRF 40. In the context of this example, to reduce the list of available NFps 44 returned, other types of parameters can be added to the GET request in addition to the list-size-reducing parameters.

For example, the location of the NFp may be added as a parameter to the discovery request. The location can be expressed in precise geographical coordinates such as latitude and longitude, or contain one or more bounding boxes (i.e., an area defined by multiple <latitude, longitude> tuples). Another expression of location can be a fuzzy term, e.g., “Germany” or “Greece”, or even “Bremen”, “Berlin” or “Athens”. Another example could be, e.g., a commonly used keyword to denote the location of a physical data center facility (e.g., FRA-1 for Frankfurt).

Alternatively or additionally, a cost-related parameter can be added to the discovery request, indicating the preferred budget of the requesting NFc 42 in relation to the cost of the NFp 44. The cost may be specified in terms of money or, for example, the amount of power consumed. The latter can be used for example in cases where the NFc 42 wants to select an NFp 44 in a more sustainable/environmentally friendly manner.

Alternatively or additionally, a preference of a vendor can be added to the discovery request, e.g., as indicated by the VendorId.

Alternatively or additionally, a specific capability set can be part of the original request (for example, a specific version of the NFp API).
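
A sketch of how such a GET request might be assembled is shown below; the resource path follows the general pattern of the NRF discovery API, while the location, cost, vendor, and capability query parameter names are illustrative assumptions rather than normative attribute names:

```python
from typing import List, Optional
from urllib.parse import urlencode

def build_discovery_query(nf_type: str,
                          limit: Optional[int] = None,
                          location: Optional[str] = None,
                          max_cost: Optional[int] = None,
                          vendor_id: Optional[str] = None,
                          capabilities: Optional[List[str]] = None) -> str:
    """Assemble the query string of a service discovery GET request.

    The extra parameters beyond the NF type and the list-size limit are
    illustrative; their names are not defined by any specification here.
    """
    params = {"target-nf-type": nf_type}
    if limit is not None:
        params["limit"] = limit
    if location is not None:
        params["location"] = location
    if max_cost is not None:
        params["max-cost"] = max_cost
    if vendor_id is not None:
        params["vendor-id"] = vendor_id
    if capabilities:
        params["capabilities"] = ",".join(capabilities)
    return "/nnrf-disc/v1/nf-instances?" + urlencode(params)

print(build_discovery_query("UPF", limit=3, location="Bremen", max_cost=10))
```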

Regardless of the content of the discovery request, subsequently, the NRF 40 attempts to retrieve NFp(s) 44 based on the NFType specified by the NFc 42. In this example, though, either NF Instances of matching NFType do not exist (and therefore need to be orchestrated), or some NF Instances of matching NFType exist, but the NRF 40 has margin to provide more options (e.g., specified by a limit parameter or page-number/page-size parameters). To be able to retrieve NFProfiles for NFp(s) that can orchestrate NFInstances of the requested NFType, the NRF 40 consults a distributed database (DB), e.g., as exemplified in Figure 4. This DB exemplifies the distributed database 32 in Figure 1. Initially, the NRF 40 requests from the DB’s “compute” layer a list of <vendor_ID, orchestration_endpoint> tuples.

In some embodiments, the NRF 40 next chooses a value for the promise parameter for each vendor on the list. There are multiple ways to elicit a value for the promise parameter. In one embodiment, this value is communicated by the DB, in a 3-tuple list <vendor_ID, orchestration_endpoint, promise_value>. The promise value is provided in this case by the cloud vendor, based on the NFType requested (note that this is not illustrated in Figure 5). In this case, the promise parameter may be something that the vendor itself learned from multiple deployments of the requested NFType. Alternatively, the promise parameter can be used to indicate not only deployment time, but also a future point at which capacity may be available. For example, a cloud vendor may not have capacity available right now, based on the criteria specified by the NFc 42, but could have capacity two weeks in the future. This could effectively allow a cloud vendor to advertise future capacity. Such a “belated” promise may work for example in the bootstrapping phase, but not in cases demanding a tight time constraint.

In another embodiment, this value is learned by the NRF or SCP, depending on whether communication is direct or indirect. In one embodiment, the learning is based on real deployments, in which case the first few deployments do not have a value. In another embodiment, the learning is based on a trial period. In this case, before a cloud vendor is authorized to participate, trial deployments are done to establish the levels of performance. Regardless, in the case where parameters are learned, machine learning in some embodiments is used, e.g., supervised learning. In a simple case, regression may be used to simply learn to correlate NFType with deployment time. In more complicated cases, a neural network may be used, in which case input to the neural network may include the NFType, the current computational load of the vendor, the time of day and date, the parameters of the NF Instance, etc.
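
As a simple, hedged illustration of the regression case, the following sketch fits an ordinary least-squares model that correlates a one-hot encoded NFType and the vendor's current load with observed deployment time; the feature encoding and the sample observations are assumptions made purely for illustration:

```python
import numpy as np

# Toy supervised-learning sketch: predict deployment time (seconds) from a one-hot
# NF type and the vendor's current load. Real embodiments could use richer features
# (time of day, NF instance parameters) and a neural network instead.
NF_TYPES = ["AMF", "SMF", "UPF"]

def encode(nf_type: str, load: float) -> np.ndarray:
    one_hot = [1.0 if t == nf_type else 0.0 for t in NF_TYPES]
    return np.array(one_hot + [load, 1.0])        # trailing 1.0 is the bias term

# Observed deployments: (nf_type, load, measured deployment time in seconds)
observations = [("AMF", 0.2, 110.0), ("AMF", 0.8, 160.0),
                ("UPF", 0.3, 290.0), ("UPF", 0.9, 360.0),
                ("SMF", 0.5, 200.0)]

X = np.stack([encode(t, l) for t, l, _ in observations])
y = np.array([d for _, _, d in observations])
weights, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares fit

def predicted_promise(nf_type: str, load: float) -> float:
    """Predicted promise value (seconds) for the given NF type and current load."""
    return float(encode(nf_type, load) @ weights)

print(round(predicted_promise("UPF", 0.5)))
```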

No matter how the promise parameter is determined, the NRF 40 (directly or indirectly via the SCP) returns the NFp information together with the promise parameter back to the NFc 42 in a response, e.g., in a 200 OK response. This response exemplifies the response 22 in Figure 1. Parameters of the response may be set in a SearchResult message sent back as payload of the 200 OK response (see also Table 6.2.6.2.2-1 of TS 129 510 V15.1.0). For example, validityPeriod may be hardcoded to the max-age parameter of the Cache-Control header field sent in the 200 OK response. Alternatively or additionally, the NFProfile list as part of NFInstances may include a modified NFProfile.

With regard to the NFProfile, a promise parameter is introduced into the NFProfile in some embodiments.

In some embodiments, the promise parameter is designated as optional. The parameter may be optional, for example, so that NFps with already available NF Instances can co-exist with NFps that carry the promise parameter.

In some embodiments, one or more of the following parameters are set in the NFProfile returned in the response: (i) nfType set to the type of the original request; (ii) nfStatus set to UNINITIALIZED; (iii) plmnList has a size of 1 and is connected to the home network identifier (HNI) of the distributed DB the NFp and/or SCP belongs to; (iv) ipv4Addresses (or ipv6Addresses) has a size of 1 and is the orchestration_endpoint of the DB; (v) VendorId assigned to the cloud vendor ID field from the DB; (vi) several parameters such as nfInstanceID, nfStatus, capacity, load, loadTimeStamp, priority, but also NF-specific parameters are not set, as the NFInstance is not yet created nor the NFp registered to the NRF 40; and (vii) NF-Instance related parameters nfInstanceList, numNfInstComplete may be reported as normal, as they relate to other NFInstances of the same NFType already running in the NRF 40.
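
For illustration only, an NFProfile returned for a not-yet-orchestrated NFp might then look roughly as follows; the attribute names echo conventional NFProfile attributes, but the promise attribute, the UNINITIALIZED status value, and all concrete values are illustrative assumptions:

```python
# Illustrative sketch of an NFProfile for a not-yet-orchestrated NFp, following the
# parameter settings listed above; not a normative definition of the profile.
nf_profile = {
    "nfType": "UPF",                              # (i) type from the original request
    "nfStatus": "UNINITIALIZED",                  # (ii) instance not yet created
    "plmnList": [{"mcc": "240", "mnc": "99"}],    # (iii) HNI tied to the distributed DB entry
    "ipv4Addresses": ["192.0.2.10"],              # (iv) orchestration endpoint from the DB
    "vendorId": "SE-5566778899",                  # (v) cloud vendor ID from the DB
    # (vi) nfInstanceID, capacity, load, loadTimeStamp, priority, etc. are omitted,
    #      since the NF instance is not yet created nor registered with the NRF.
    "promise": 300,                               # promised seconds until availability (optional)
}
```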

Note that, in case any parameters of location, VendorId, and/or cost were used in the original GET discovery request, then the NRF 40 in some embodiments filters the results returned from the distributed database to those that match or most closely approximate the values of the aforementioned parameters. For example, in case VendorId was used as a parameter, the NRF 40 may only return NFps that belong to the VendorIds specified in the request, serving the requested NFType. If for example location is used, then the NRF 40 would return only those NFps that are located closest to the location specified in the request. Here, closest may be defined for example by the haversine distance, e.g., in case <latitude, longitude> tuples are used, or by whether NFps are in the same country or city in case fuzzy terms are used, etc. Finally, cost may be approximated using similar criteria.
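
A brief sketch of the haversine-based proximity filtering mentioned above, assuming each NFp record carries illustrative lat/lon fields:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometres between two <latitude, longitude> points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))             # mean Earth radius ~6371 km

def closest_nfp(nfps: list, lat: float, lon: float) -> dict:
    """Pick the NFp whose recorded coordinates are nearest the requested location."""
    return min(nfps, key=lambda n: haversine_km(lat, lon, n["lat"], n["lon"]))

nfps = [{"vendor_id": "vendor-1", "lat": 59.33, "lon": 18.07},   # Stockholm
        {"vendor_id": "vendor-2", "lat": 37.98, "lon": 23.73}]   # Athens
print(closest_nfp(nfps, 52.52, 13.40)["vendor_id"])              # request near Berlin
```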

Consider now additional details of NF selection. The NF selection process may happen in the NRF 40 itself, or in the SCP or the NFc 42. In indirect communication embodiments, for example, the SCP may select the NFp. By contrast, in direct communication embodiments, the NFc 42 may perform the NF selection. Alternatively, the NRF 40 itself can do the selection and orchestration.

Regardless, the NF selection process results in selection of one of the cloud vendors to deploy the NFp 44 with the requested NFType. The selection process itself can be based on one or more different strategies. For example, in some embodiments, the process selects the NFp 44 with the shortest promise, the lowest cost, or according to a weighted approach that combines the two. In other embodiments, the process selects a random NFp 44 from the list. In yet other embodiments, the process uses a round-robin approach. In this approach, a different NFp 44 is selected from the list of available NFps every time an NFc 42 places a request.
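
These strategies can be sketched, under the same illustrative assumptions as above (hypothetical promise_seconds and cost fields), as follows:

```python
import itertools
import random

def select_weighted(candidates: list, w_promise: float = 0.7, w_cost: float = 0.3) -> dict:
    """Weighted strategy: lower score is better, combining promised delay and cost."""
    return min(candidates, key=lambda c: w_promise * c["promise_seconds"] + w_cost * c["cost"])

def select_random(candidates: list) -> dict:
    """Random strategy: pick any candidate NFp from the list."""
    return random.choice(candidates)

def round_robin(candidates: list):
    """Round-robin strategy: each successive request gets the next NFp in the list."""
    return itertools.cycle(candidates)

candidates = [{"vendor_id": "vendor-1", "promise_seconds": 120, "cost": 10},
              {"vendor_id": "vendor-2", "promise_seconds": 300, "cost": 2}]
print(select_weighted(candidates)["vendor_id"])
rr = round_robin(candidates)
print(next(rr)["vendor_id"], next(rr)["vendor_id"], next(rr)["vendor_id"])
```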

Alternatively or additionally, NF selection may exploit a machine learning approach, e.g., reinforcement learning. In one embodiment, for example, the selecting entity (NRF, SCP, or NFc) incorporates an intelligent agent that interacts with an environment. Given a state of the environment, the agent takes an action in the environment. The agent is then rewarded for its action and the environment transitions to a new state. This process is repeated. Machine-learning-based algorithms such as actor-critic approaches or deep Q-learning may be used for the agent to learn to approximate the optimal policy, i.e., learn to choose the action that yields the highest (or close to highest) reward for any environment state.

One or more of these embodiments use the following formalizations for training of the algorithm. In one embodiment, the action is defined as the selection of an NFp 44 for a given NFType, e.g., selection of a cloud vendor ID. In some embodiments, the state is defined to be the available <NFType, CloudVendorID> tuples at a given point in time. In one embodiment, these tuples are arranged in a 2-dimensional array. Alternatively or additionally, some embodiments define the reward in terms of the performance of the requested NFType deployed as an NFInstance in an NFp 44 of the cloud vendor chosen by the action. One or more network-level metrics as experienced by the NFc 42 communicating with the NFp 44 may be used for the performance, such as availability, latency, throughput, or any combination thereof. The action in some embodiments is monitored for a certain period, and the average value(s) of the metric(s) are used for reward calculation.
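
As a hedged simplification of these formalizations, the sketch below uses tabular Q-learning in place of the actor-critic or deep Q-learning approaches mentioned above; the reward function, the metric values, and the epsilon-greedy policy are all illustrative assumptions:

```python
import random
from collections import defaultdict

# Simplified tabular Q-learning stand-in. State: the requested NFType; action: choice
# of cloud vendor ID; reward: an averaged performance metric observed after deployment.
ALPHA, EPSILON = 0.1, 0.2                          # learning rate and exploration rate
q_table = defaultdict(float)                       # (nf_type, vendor_id) -> learned value

def choose_vendor(nf_type: str, vendors: list) -> str:
    """Epsilon-greedy selection of a cloud vendor for the requested NF type."""
    if random.random() < EPSILON:
        return random.choice(vendors)
    return max(vendors, key=lambda v: q_table[(nf_type, v)])

def update(nf_type: str, vendor: str, reward: float) -> None:
    """Move the estimate for (state, action) toward the observed reward."""
    key = (nf_type, vendor)
    q_table[key] += ALPHA * (reward - q_table[key])

def observed_reward(availability: float, latency_ms: float, throughput_mbps: float) -> float:
    """Toy reward: favour availability and throughput, penalise latency."""
    return availability + throughput_mbps / 100.0 - latency_ms / 1000.0

vendors = ["vendor-1", "vendor-2"]
for _ in range(200):                               # simulated deployments
    v = choose_vendor("UPF", vendors)
    metrics = (0.99, 20.0, 900.0) if v == "vendor-1" else (0.95, 80.0, 400.0)
    update("UPF", v, observed_reward(*metrics))
print(max(vendors, key=lambda v: q_table[("UPF", v)]))   # likely "vendor-1"
```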

Some embodiments herein also exploit one or more techniques to accelerate learning. For example, some embodiments use federated learning to combine what several NFcs learn in parallel into one globally learned model. Alternatively or additionally, some embodiments use transfer learning to copy the experience learned by one NFc to another that is bootstrapped later. Alternatively or additionally, some embodiments use agent communication with deep reinforcement learning to accelerate learning, e.g., using approaches such as Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL).

Consider now some example use cases to which some embodiments herein are applicable.

A first use case is for a private 5G network, with network functions hosted on the edge cloud, e.g., by third-party cloud vendors. In this case, parameters such as location may be taken into account, and agreements between cloud (compute) vendors and network operators are represented in the distributed ledger. In one scenario, a “Baseband” (BB) NF hosted in a local network, e.g., within the customer premises, requests services of various core NFs, such as SMF/UPF, AMF, etc. Those NFs may in turn request the services of other NFs, such as the UDM (used by the AMF for user authentication), etc. In some embodiments, some of these NFs are not deployed a priori, but are instead orchestrated during private 5G network bootstrapping (e.g., during the commissioning phase at the customer premises). In this use case, embodiments herein not only lessen the complexity of the on-premises solution (as practically only the radio base stations/gNBs would need to be deployed), but also open a marketplace where local cloud vendors can compete for hosting NFs.

A second use case concerns a public 5G network with network functions hosted on a public cloud. In this use case, network functions are already deployed. The network may experience a temporary or permanent surge in utilization, and/or resource constraints in a specific location. For example, such a surge could be attributed to the attachment of many devices due to, e.g., an event or the onboarding of an enterprise customer. In another scenario, a critical event such as war or a natural disaster may lead to resource shortages in terms of NF availability. In both of these cases, additional capacity may be needed. However, the lead-time to plan and deploy the infrastructure to hold this additional capacity could be critical to connectivity service availability. Embodiments herein leverage existing infrastructure that may be available as-a-service, thus addressing the need at hand by significantly reducing the lead-time.

Note that the one or more input filter criterions 20 herein may also be referred to as input filter criteria, where the input filter criteria may comprise a single input filter criterion or may comprise multiple input filter criterions.

In view of the modifications and variations herein, Figure 6 depicts a method in accordance with particular embodiments. The method is performed by discovery service network equipment 14 in a communication network 10. The method includes receiving a request 18 for discovery of one or more network function (NF) instances that satisfy one or more input filter criterions 20 (Block 600). The method also includes, responsive to the request 18, transmitting a response 22 that includes information 24 about one or more NF instances 26 satisfying the one or more input filter criterions 20 (Block 610). Notably, the one or more NF instances 26 satisfying the one or more input filter criterions 20 include one or more unavailable NF instances 26U that are unavailable. Accordingly, for each of the one or more unavailable NF instances 26U, the information 24 about the unavailable NF instance 26U indicates when the unavailable NF instance 26U can be or will be available.

In some embodiments, the method also includes discovering the one or more NF instances 26 satisfying the one or more input filter criterions 20 (Block 605).

In some embodiments, the one or more unavailable NF instances 26U include one or more uninitialized NF instances that are unavailable because the one or more uninitialized NF instances have not been initialized. In some embodiments, initialization of an NF instance includes setting initial values of parameters of the NF instance. In some embodiments, for each of the one or more uninitialized NF instances, the information 24 about the uninitialized NF instance indicates when the uninitialized NF instance can be or will be available by indicating when the uninitialized NF instance can be or will be initialized.

In some embodiments, the one or more unavailable NF instances 26U include one or more unorchestrated NF instances that are unavailable because the one or more unorchestrated NF instances have not been orchestrated. In some embodiments, orchestration of an NF instance includes allocating resources for the NF instance. In some embodiments, for each of the one or more unorchestrated NF instances, the information 24 about the unorchestrated NF instance indicates when the unorchestrated NF instance can be or will be available by indicating when the unorchestrated NF instance can be or will be orchestrated.

In some embodiments, the one or more unavailable NF instances 26U include one or more overloaded NF instances that are unavailable because the one or more overloaded NF instances lack capacity. In some embodiments, for each of the one or more overloaded NF instances, the information 24 about the overloaded NF instance indicates when the overloaded NF instance can be or will be available by indicating when the overloaded NF instance is expected to have capacity.

In some embodiments, for each of the one or more unavailable NF instances 26U, the information 24 about the unavailable NF instance includes a promise parameter that indicates when the unavailable NF instance can be or will be available by indicating a duration of time after which the unavailable NF instance is promised to be available.

In some embodiments, the method further comprises, for each of the one or more unavailable NF instances 26U, retrieving the information 24 that indicates when the unavailable NF instance can be or will be available from a distributed database 32. In one or more of these embodiments, the distributed database 32 is a permissioned distributed ledger. In one or more of these embodiments, the distributed ledger includes one or more records for each of one or more vendors 30 of the communication network 10. In some embodiments, a record for a vendor indicates, for each of one or more types of NFs, when the vendor can or will make an instance of the type of NF available. In one or more of these embodiments, the record for a vendor further indicates at least one or more types of NFs that the vendor is able to orchestrate. Alternatively, in one or more of these embodiments, the record for a vendor further indicates at least an endpoint to send a request for the vendor to orchestrate an NF instance. Alternatively, in one or more of these embodiments, the record for a vendor further indicates at least a geographical location at which the vendor orchestrates an NF instance. Alternatively, in one or more of these embodiments, the record for a vendor further indicates at least a cost to orchestrate an NF instance. In some embodiments, the cost is an amount of money or an amount of power

In some embodiments, the method further comprises training a model at the discovery service network equipment 14 to predict when any given unavailable NF instance can be or will be available. The method further comprises, for each of the one or more unavailable NF instances 26U, using the trained model to determine when the unavailable NF instance can be or will be available.

In some embodiments, the one or more input filter criterions include at least an NF type criterion specifying a set of one or more NF types. In some embodiments, an NF instance satisfies the NF type criterion if the NF instance is an instance of an NF that has any of the one or more NF types. Alternatively, the one or more input filter criterions include at least a location criterion specifying a set of one or more locations. In some embodiments, an NF instance satisfies the location criterion if the NF instance is deployed at any of the one or more locations. Alternatively, the one or more input filter criterions include at least a cost criterion specifying a maximum cost. In some embodiments, an NF instance satisfies the cost criterion if the cost of the NF instance is less than or equal to the maximum cost, wherein the maximum cost is a maximum amount of money or a maximum amount of power. Alternatively, the one or more input filter criterions include at least a vendor criterion specifying a set of one or more vendors 30. In some embodiments, an NF instance satisfies the vendor criterion if the NF instance is provided by any of the one or more vendors 30. Alternatively, the one or more input filter criterions include at least a capability criterion specifying a set of one or more capabilities. In some embodiments, an NF instance satisfies the capability criterion if the NF instance has any of the one or more capabilities.

In some embodiments, the method further comprises, based on the request, discovering a set of one or more candidate NF instances that satisfy the one or more input filter criterions. The method further comprises selecting a candidate NF instance from the discovered set to provide a service to a service consumer. In some embodiments, the one or more unavailable NF instances 26U comprise the selected candidate NF instance. In some embodiments, the response includes information about the selected candidate NF instance. In one or more of these embodiments, selecting a candidate NF instance from the discovered set comprises selecting the candidate NF instance as a function of how soon the candidate NF instance can be or will be available. Additionally or alternatively, selecting a candidate NF instance from the discovered set comprises selecting the candidate NF instance as a function of how soon the candidate NF instance is needed to be available. Additionally or alternatively, selecting a candidate NF instance from the discovered set comprises selecting the candidate NF instance as a function of a cost of the candidate NF instance. In one or more of these embodiments, the method further comprises training a reinforcement learning model to predict which candidate NF instance to select to yield a maximum reward specified in terms of NF instance availability, latency, and/or throughput. In some embodiments, selecting a candidate NF instance from the discovered set comprises selecting the candidate NF instance that yields a maximum reward according to the reinforcement learning model. In one or more of these embodiments, the method further comprises dynamically initiating orchestration of the selected candidate NF instance. In some embodiments, the discovery service network equipment 14 implements an NF Repository Function, NRF, or a Service Communication Proxy, SCP.

Figure 7 depicts a method in accordance with other particular embodiments. The method is performed by network equipment 16 configured to operate as a service consumer or as a proxy for a service consumer. The method includes receiving a response 22 to a request 18 for discovery of one or more network function, NF, instances that satisfy one or more input filter criterions 20 (Block 700). The response includes information about one or more NF instances satisfying the one or more input filter criterions. The one or more NF instances satisfying the one or more input filter criterions include one or more unavailable NF instances 26U that are unavailable. In some embodiments, for each of the one or more unavailable NF instances 26U, the information about the unavailable NF instance indicates when the unavailable NF instance can be or will be available.

In some embodiments, the one or more unavailable NF instances 26U include one or more uninitialized NF instances that are unavailable because the one or more uninitialized NF instances have not been initialized. In some embodiments, initialization of an NF instance includes setting initial values of parameters of the NF instance. In some embodiments, for each of the one or more uninitialized NF instances, the information about the uninitialized NF instance indicates when the uninitialized NF instance can be or will be available by indicating when the uninitialized NF instance can be or will be initialized.

In some embodiments, the one or more unavailable NF instances 26U include one or more unorchestrated NF instances that are unavailable because the one or more unorchestrated NF instances have not been orchestrated. In some embodiments, orchestration of an NF instance includes allocating resources for the NF instance. In some embodiments, for each of the one or more unorchestrated NF instances, the information about the unorchestrated NF instance indicates when the unorchestrated NF instance can be or will be available by indicating when the unorchestrated NF instance can be or will be orchestrated.

In some embodiments, the one or more unavailable NF instances 26U include one or more overloaded NF instances that are unavailable because the one or more overloaded NF instances lack capacity. In some embodiments, for each of the one or more overloaded NF instances, the information about the overloaded NF instance indicates when the overloaded NF instance can be or will be available by indicating when the overloaded NF instance is expected to have capacity.

In some embodiments, for each of the one or more unavailable NF instances 26U, the information about the unavailable NF instance includes a promise parameter that indicates when the unavailable NF instance can be or will be available by indicating a duration of time after which the unavailable NF instance is promised to be available.

In some embodiments, the one or more input filter criterions include at least an NF type criterion specifying a set of one or more NF types. In some embodiments, an NF instance satisfies the NF type criterion if the NF instance is an instance of an NF that has any of the one or more NF types. Alternatively, the one or more input filter criterions include at least a location criterion specifying a set of one or more locations. In some embodiments, an NF instance satisfies the location criterion if the NF instance is deployed at any of the one or more locations. Alternatively, the one or more input filter criterions include at least a cost criterion specifying a maximum cost. In some embodiments, an NF instance satisfies the cost criterion if the cost of the NF instance is less than or equal to the maximum cost. In some embodiments, the maximum cost is a maximum amount of money or a maximum amount of power. Alternatively, the one or more input filter criterions include at least a vendor criterion specifying a set of one or more vendors 30. In some embodiments, an NF instance satisfies the vendor criterion if the NF instance is provided by any of the one or more vendors 30. Alternatively, the one or more input filter criterions include at least a capability criterion specifying a set of one or more capabilities. In some embodiments, an NF instance satisfies the capability criterion if the NF instance has any of the one or more capabilities.

In some embodiments, the method further comprises selecting an NF instance from the one or more NF instances to provide a service to the service consumer (Block 710). In some embodiments, selecting the NF instance comprises selecting the NF instance as a function of how soon the NF instance can be or will be available. Additionally or alternatively, selecting the NF instance comprises selecting the NF instance as a function of how soon the NF instance is needed to be available. Additionally or alternatively, selecting the NF instance comprises selecting the NF instance as a function of a cost of the NF instance. In one or more of these embodiments, the method further comprises dynamically initiating orchestration of the selected NF instance (Block 720).

In some embodiments, the method further comprises training a reinforcement learning model to predict which NF instance to select to yield a maximum reward specified in terms of NF instance availability, latency, and/or throughput. In some embodiments, the method further comprises selecting, from the one or more NF instances, an NF instance to provide a service to the service consumer. In some embodiments, selecting the NF instance comprises selecting the NF instance that yields a maximum reward according to the reinforcement learning model.
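
One of many possible realizations of such a model is sketched below as a simple epsilon-greedy bandit in Python; the particular reward combining availability, latency, and throughput, and the class and method names, are assumptions for illustration rather than the specified model.

import random
from collections import defaultdict

class NfSelector:
    def __init__(self, epsilon=0.1, alpha=0.2):
        self.epsilon = epsilon          # exploration rate
        self.alpha = alpha              # learning rate
        self.q = defaultdict(float)     # estimated reward per NF instance id

    def select(self, instance_ids):
        # Explore occasionally; otherwise pick the instance with the best estimate.
        if random.random() < self.epsilon:
            return random.choice(instance_ids)
        return max(instance_ids, key=lambda i: self.q[i])

    def update(self, instance_id, availability, latency_ms, throughput_mbps):
        # Higher availability/throughput and lower latency yield a higher reward.
        reward = availability + throughput_mbps / 100.0 - latency_ms / 100.0
        self.q[instance_id] += self.alpha * (reward - self.q[instance_id])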

Figure 8 depicts a method in accordance with other particular embodiments. The method is performed by vendor equipment 34 of a vendor 30 for a communication network 10. The method includes determining, for each of one or more types of network functions, NFs, when the vendor 30 can or will make an instance of the type of NF available (Block 800). The method also includes adding, to a distributed database 32, a record that indicates, for each of the one or more types of NFs, when the vendor 30 can or will make an instance of the type of NF available (Block 810).

In some embodiments, for each of the one or more types of NFs, the record indicates how long orchestration of an instance of the type of NF takes. Additionally or alternatively, for each of the one or more types of NFs, the record indicates when an instance of the type of NF is expected to have capacity for providing a service to a service consumer. Additionally or alternatively, for each of the one or more types of NFs, the record indicates a promise parameter indicating a duration of time after which an instance of the type of NF is promised to be available.

In some embodiments, the record further indicates, for each of the one or more types of NFs, at least an endpoint to send a request for the vendor to orchestrate an instance of the type of NF. Alternatively, the record further indicates, for each of the one or more types of NFs, at least a geographical location at which the vendor orchestrates an instance of the type of NF. Alternatively, the record further indicates, for each of the one or more types of NFs, at least a cost to orchestrate an instance of the type of NF. In some embodiments, the cost is an amount of money or an amount of power.
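
As a non-limiting sketch, a record of the kind described above might be represented as follows; every field name, the example endpoint URL, and the ledger abstraction are hypothetical and introduced only for illustration.

# Hypothetical vendor record added to the distributed database 32.
vendor_record = {
    "vendor_id": "vendor-a",
    "nf_offers": [
        {
            "nf_type": "UPF",
            "orchestration_seconds": 300,   # how long orchestration takes
            "promise_seconds": 300,         # promised availability delay
            "orchestration_endpoint": "https://vendor-a.example/orchestrate",  # hypothetical URL
            "location": "stockholm-dc-1",
            "cost": {"amount": 10.0, "unit": "EUR"},  # could equally be power, e.g., watts
        }
    ],
}

def add_record(ledger, record):
    """Append the record to a distributed database or ledger abstraction."""
    ledger.append(record)   # a real permissioned ledger would validate, sign, and replicate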

In some embodiments, the distributed database 32 is a permissioned distributed ledger.

Embodiments herein also include corresponding apparatuses. Embodiments herein for instance include discovery service network equipment 14 configured to perform any of the steps of any of the embodiments described above for the discovery service network equipment 14.

Embodiments also include discovery service network equipment 14 comprising processing circuitry and power supply circuitry. The processing circuitry is configured to perform any of the steps of any of the embodiments described above for the discovery service network equipment 14. The power supply circuitry is configured to supply power to the discovery service network equipment 14.

Embodiments further include discovery service network equipment 14 comprising processing circuitry. The processing circuitry is configured to perform any of the steps of any of the embodiments described above for the discovery service network equipment 14. In some embodiments, the discovery service network equipment 14 further comprises communication circuitry.

Embodiments further include discovery service network equipment 14 comprising processing circuitry and memory. The memory contains instructions executable by the processing circuitry whereby the discovery service network equipment 14 is configured to perform any of the steps of any of the embodiments described above for the discovery service network equipment 14.

Embodiments herein also include network equipment 16 configured to perform any of the steps of any of the embodiments described above for the network equipment 16.

Embodiments also include network equipment 16 comprising processing circuitry and power supply circuitry. The processing circuitry is configured to perform any of the steps of any of the embodiments described above for the network equipment 16. The power supply circuitry is configured to supply power to the network equipment 16.

Embodiments further include network equipment 16 comprising processing circuitry. The processing circuitry is configured to perform any of the steps of any of the embodiments described above for the network equipment 16. In some embodiments, the network equipment 16 further comprises communication circuitry.

Embodiments further include network equipment 16 comprising processing circuitry and memory. The memory contains instructions executable by the processing circuitry whereby the network equipment 16 is configured to perform any of the steps of any of the embodiments described above for the network equipment 16.

Embodiments herein also include vendor equipment 34 configured to perform any of the steps of any of the embodiments described above for the vendor equipment 34.

Embodiments also include vendor equipment 34 comprising processing circuitry and power supply circuitry. The processing circuitry is configured to perform any of the steps of any of the embodiments described above for the vendor equipment 34. The power supply circuitry is configured to supply power to the vendor equipment 34.

Embodiments further include vendor equipment 34 comprising processing circuitry. The processing circuitry is configured to perform any of the steps of any of the embodiments described above for the vendor equipment 34. In some embodiments, the vendor equipment 34 further comprises communication circuitry.

Embodiments further include vendor equipment 34 comprising processing circuitry and memory. The memory contains instructions executable by the processing circuitry whereby the vendor equipment 34 is configured to perform any of the steps of any of the embodiments described above for the vendor equipment 34.

More particularly, the apparatuses described above may perform the methods herein and any other processing by implementing any functional means, modules, units, or circuitry. In one embodiment, for example, the apparatuses comprise respective circuits or circuitry configured to perform the steps shown in the method figures. The circuits or circuitry in this regard may comprise circuits dedicated to performing certain functional processing and/or one or more microprocessors in conjunction with memory. For instance, the circuitry may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory may include program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments. In embodiments that employ memory, the memory stores program code that, when executed by the one or more processors, carries out the techniques described herein.

Figure 9 for example illustrates discovery service network equipment 14 as implemented in accordance with one or more embodiments. As shown, the discovery service network equipment 14 includes processing circuitry 910 and communication circuitry 920. The communication circuitry 920 is configured to transmit and/or receive information to and/or from one or more other nodes, e.g., via any communication technology. The processing circuitry 910 is configured to perform processing described above, e.g., in Figure 6, such as by executing instructions stored in memory 930. The processing circuitry 910 in this regard may implement certain functional means, units, or modules.

Figure 10 illustrates network equipment 16 as implemented in accordance with one or more embodiments. As shown, the network equipment 16 includes processing circuitry 1010 and communication circuitry 1020. The communication circuitry 1020 is configured to transmit and/or receive information to and/or from one or more other nodes, e.g., via any communication technology. The processing circuitry 1010 is configured to perform processing described above, e.g., in Figure 7, such as by executing instructions stored in memory 1030. The processing circuitry 1010 in this regard may implement certain functional means, units, or modules.

Figure 11 illustrates vendor equipment 34 as implemented in accordance with one or more embodiments. As shown, the vendor equipment 34 includes processing circuitry 1110 and communication circuitry 1120. The communication circuitry 1120 is configured to transmit and/or receive information to and/or from one or more other nodes, e.g., via any communication technology. The processing circuitry 1110 is configured to perform processing described above, e.g., in Figure 8, such as by executing instructions stored in memory 1130. The processing circuitry 1110 in this regard may implement certain functional means, units, or modules.

Those skilled in the art will also appreciate that embodiments herein further include corresponding computer programs.

A computer program comprises instructions which, when executed on at least one processor of an apparatus, cause the apparatus to carry out any of the respective processing described above. A computer program in this regard may comprise one or more code modules corresponding to the means or units described above.

Embodiments further include a carrier containing such a computer program. This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.

In this regard, embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform as described above.

Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device. This computer program product may be stored on a computer readable recording medium.

Figure 12 shows an example of a communication system 1200 in accordance with some embodiments.

In the example, the communication system 1200 includes a telecommunication network 1202 that includes an access network 1204, such as a radio access network (RAN), and a core network 1206, which includes one or more core network nodes 1208. The access network 1204 includes one or more access network nodes, such as network nodes 1210a and 1210b (one or more of which may be generally referred to as network nodes 1210), or any other similar 3rd Generation Partnership Project (3GPP) access node or non-3GPP access point. The network nodes 1210 facilitate direct or indirect connection of user equipment (UE), such as by connecting UEs 1212a, 1212b, 1212c, and 1212d (one or more of which may be generally referred to as UEs 1212) to the core network 1206 over one or more wireless connections.

Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors. Moreover, in different embodiments, the communication system 1200 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections. The communication system 1200 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.

The UEs 1212 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 1210 and other communication devices. Similarly, the network nodes 1210 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 1212 and/or with other network nodes or equipment in the telecommunication network 1202 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 1202.

In the depicted example, the core network 1206 connects the network nodes 1210 to one or more hosts, such as host 1216. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts. The core network 1206 includes one or more core network nodes (e.g., core network node 1208) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 1208. Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).

The host 1216 may be under the ownership or control of a service provider other than an operator or provider of the access network 1204 and/or the telecommunication network 1202, and may be operated by the service provider or on behalf of the service provider. The host 1216 may host a variety of applications to provide one or more services. Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server.

As a whole, the communication system 1200 of Figure 12 enables connectivity between the UEs, network nodes, and hosts. In that sense, the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox.

In some examples, the telecommunication network 1202 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunications network 1202 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 1202. For example, the telecommunications network 1202 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.

In some examples, the UEs 1212 are configured to transmit and/or receive information without direct human interaction. For instance, a UE may be designed to transmit information to the access network 1204 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 1204. Additionally, a UE may be configured for operating in single- or multi-RAT or multi-standard mode. For example, a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e. being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).

In the example, the hub 1214 communicates with the access network 1204 to facilitate indirect communication between one or more UEs (e.g., UE 1212c and/or 1212d) and network nodes (e.g., network node 1210b). In some examples, the hub 1214 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs. For example, the hub 1214 may be a broadband router enabling access to the core network 1206 for the UEs. As another example, the hub 1214 may be a controller that sends commands or instructions to one or more actuators in the UEs. Commands or instructions may be received from the UEs, network nodes 1210, or by executable code, script, process, or other instructions in the hub 1214. As another example, the hub 1214 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data. As another example, the hub 1214 may be a content source. For example, for a UE that is a VR headset, display, loudspeaker or other media delivery device, the hub 1214 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 1214 then provides to the UE either directly, after performing local processing, and/or after adding additional local content. In still another example, the hub 1214 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low energy IoT devices.

The hub 1214 may have a constant/persistent or intermittent connection to the network node 1210b. The hub 1214 may also allow for a different communication scheme and/or schedule between the hub 1214 and UEs (e.g., UE 1212c and/or 1212d), and between the hub 1214 and the core network 1206. In other examples, the hub 1214 is connected to the core network 1206 and/or one or more UEs via a wired connection. Moreover, the hub 1214 may be configured to connect to an M2M service provider over the access network 1204 and/or to another UE over a direct connection. In some scenarios, UEs may establish a wireless connection with the network nodes 1210 while still connected via the hub 1214 via a wired or wireless connection. In some embodiments, the hub 1214 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 1210b. In other embodiments, the hub 1214 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and network node 1210b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.

Figure 13 shows a UE 1300 in accordance with some embodiments. As used herein, a UE refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other UEs. Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless cameras, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart device, wireless customer-premise equipment (CPE), vehicle-mounted or vehicle embedded/integrated wireless device, etc. Other examples include any UE identified by the 3rd Generation Partnership Project (3GPP), including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.

A UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X). In other examples, a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller). Alternatively, a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter).

The UE 1300 includes processing circuitry 1302 that is operatively coupled via a bus 1304 to an input/output interface 1306, a power source 1308, a memory 1310, a communication interface 1312, and/or any other component, or any combination thereof. Certain UEs may utilize all or a subset of the components shown in Figure 13. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.

The processing circuitry 1302 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 1310. The processing circuitry 1302 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above. For example, the processing circuitry 1302 may include multiple central processing units (CPUs).

In the example, the input/output interface 1306 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices. Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof. An input device may allow a user to capture information into the UE 1300. Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like. The presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user. A sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof. An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.

In some embodiments, the power source 1308 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used. The power source 1308 may further include power circuitry for delivering power from the power source 1308 itself, and/or an external power source, to the various parts of the UE 1300 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of the power source 1308. Power circuitry may perform any formatting, converting, or other modification to the power from the power source 1308 to make the power suitable for the respective components of the UE 1300 to which power is supplied.

The memory 1310 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth. In one example, the memory 1310 includes one or more application programs 1314, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 1316. The memory 1310 may store, for use by the UE 1300, any of a variety of various operating systems or combinations of operating systems.

The memory 1310 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof. The UICC may for example be an embedded UICC (eUICC), integrated UICC (iUICC) or a removable UICC commonly known as ‘SIM card.’ The memory 1310 may allow the UE 1300 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data. An article of manufacture, such as one utilizing a communication system may be tangibly embodied as or in the memory 1310, which may be or comprise a device-readable storage medium.

The processing circuitry 1302 may be configured to communicate with an access network or other network using the communication interface 1312. The communication interface 1312 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 1322. The communication interface 1312 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network). Each transceiver may include a transmitter 1318 and/or a receiver 1320 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth). Moreover, the transmitter 1318 and receiver 1320 may be coupled to one or more antennas (e.g., antenna 1322) and may share circuit components, software or firmware, or alternatively be implemented separately.

In the illustrated embodiment, communication functions of the communication interface 1312 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof. Communications may be implemented according to one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiplexing Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.

Regardless of the type of sensor, a UE may provide an output of data captured by its sensors, through its communication interface 1312, via a wireless connection to a network node. Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE. The output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., when moisture is detected an alert is sent), in response to a request (e.g., a user initiated request), or a continuous stream (e.g., a live video feed of a patient).

As another example, a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection. In response to the received wireless input the states of the actuator, the motor, or the switch may change. For example, the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input or to a robotic arm performing a medical procedure according to the received input. A UE, when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare. Non-limiting examples of such an IoT device are a device which is or which is embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a head-mounted display for Augmented Reality (AR) or Virtual Reality (VR), a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal- or item-tracking device, a sensor for monitoring a plant or animal, an industrial robot, an Unmanned Aerial Vehicle (UAV), and any kind of medical device, like a heart rate monitor or a remote controlled surgical robot. A UE in the form of an IoT device comprises circuitry and/or software in dependence of the intended application of the IoT device in addition to other components as described in relation to the UE 1300 shown in Figure 13.

As yet another specific example, in an IoT scenario, a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node. The UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device. As one particular example, the UE may implement the 3GPP NB-IoT standard. In other scenarios, a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.

In practice, any number of UEs may be used together with respect to a single use case. For example, a first UE might be or be integrated in a drone and provide the drone’s speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone. When the user makes changes from the remote controller, the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone’s speed. The first and/or the second UE can also include more than one of the functionalities described above. For example, a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuators.

Figure 14 shows a network node 1400 in accordance with some embodiments. As used herein, network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network. Examples of network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)). Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).

Other examples of network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).

The network node 1400 includes a processing circuitry 1402, a memory 1404, a communication interface 1406, and a power source 1408. The network node 1400 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which the network node 1400 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair, may in some instances be considered a single separate network node. In some embodiments, the network node 1400 may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g., separate memory 1404 for different RATs) and some components may be reused (e.g., a same antenna 1410 may be shared by different RATs). The network node 1400 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 1400, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 1400.

The processing circuitry 1402 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 1400 components, such as the memory 1404, network node 1400 functionality.

In some embodiments, the processing circuitry 1402 includes a system on a chip (SOC). In some embodiments, the processing circuitry 1402 includes one or more of radio frequency (RF) transceiver circuitry 1412 and baseband processing circuitry 1414. In some embodiments, the radio frequency (RF) transceiver circuitry 1412 and the baseband processing circuitry 1414 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 1412 and baseband processing circuitry 1414 may be on the same chip or set of chips, boards, or units.

The memory 1404 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 1402. The memory 1404 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 1402 and utilized by the network node 1400. The memory 1404 may be used to store any calculations made by the processing circuitry 1402 and/or any data received via the communication interface 1406. In some embodiments, the processing circuitry 1402 and memory 1404 are integrated.

The communication interface 1406 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, the communication interface 1406 comprises port(s)/terminal(s) 1416 to send and receive data, for example to and from a network over a wired connection. The communication interface 1406 also includes radio front-end circuitry 1418 that may be coupled to, or in certain embodiments a part of, the antenna 1410. Radio front-end circuitry 1418 comprises filters 1420 and amplifiers 1422. The radio front-end circuitry 1418 may be connected to an antenna 1410 and processing circuitry 1402. The radio front-end circuitry may be configured to condition signals communicated between antenna 1410 and processing circuitry 1402. The radio front-end circuitry 1418 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection. The radio front-end circuitry 1418 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 1420 and/or amplifiers 1422. The radio signal may then be transmitted via the antenna 1410. Similarly, when receiving data, the antenna 1410 may collect radio signals which are then converted into digital data by the radio front-end circuitry 1418. The digital data may be passed to the processing circuitry 1402. In other embodiments, the communication interface may comprise different components and/or different combinations of components.

In certain alternative embodiments, the network node 1400 does not include separate radio front-end circuitry 1418; instead, the processing circuitry 1402 includes radio front-end circuitry and is connected to the antenna 1410. Similarly, in some embodiments, all or some of the RF transceiver circuitry 1412 is part of the communication interface 1406. In still other embodiments, the communication interface 1406 includes one or more ports or terminals 1416, the radio front-end circuitry 1418, and the RF transceiver circuitry 1412, as part of a radio unit (not shown), and the communication interface 1406 communicates with the baseband processing circuitry 1414, which is part of a digital unit (not shown).

The antenna 1410 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. The antenna 1410 may be coupled to the radio front-end circuitry 1418 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In certain embodiments, the antenna 1410 is separate from the network node 1400 and connectable to the network node 1400 through an interface or port.

The antenna 1410, communication interface 1406, and/or the processing circuitry 1402 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, the antenna 1410, the communication interface 1406, and/or the processing circuitry 1402 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.

The power source 1408 provides power to the various components of network node 1400 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). The power source 1408 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 1400 with power for performing the functionality described herein. For example, the network node 1400 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 1408. As a further example, the power source 1408 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.

Embodiments of the network node 1400 may include additional components beyond those shown in Figure 14 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, the network node 1400 may include user interface equipment to allow input of information into the network node 1400 and to allow output of information from the network node 1400. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 1400.

Figure 15 is a block diagram of a host 1500, which may be an embodiment of the host 1216 of Figure 12, in accordance with various aspects described herein. As used herein, the host 1500 may be or comprise various combinations of hardware and/or software, including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, container, or processing resources in a server farm. The host 1500 may provide one or more services to one or more UEs.

The host 1500 includes processing circuitry 1502 that is operatively coupled via a bus 1504 to an input/output interface 1506, a network interface 1508, a power source 1510, and a memory 1512. Other components may be included in other embodiments. Features of these components may be substantially similar to those described with respect to the devices of previous figures, such as Figures 13 and 14, such that the descriptions thereof are generally applicable to the corresponding components of host 1500.

The memory 1512 may include one or more computer programs including one or more host application programs 1514 and data 1516, which may include user data, e.g., data generated by a UE for the host 1500 or data generated by the host 1500 for a UE. Embodiments of the host 1500 may utilize only a subset or all of the components shown. The host application programs 1514 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), MPEG, VP9) and audio codecs (e.g., FLAC, Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for multiple different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, heads-up display systems). The host application programs 1514 may also provide for user authentication and licensing checks and may periodically report health, routes, and content availability to a central node, such as a device in or on the edge of a core network. Accordingly, the host 1500 may select and/or indicate a different host for over-the-top services for a UE. The host application programs 1514 may support various protocols, such as the HTTP Live Streaming (HLS) protocol, Real-Time Messaging Protocol (RTMP), Real-Time Streaming Protocol (RTSP), Dynamic Adaptive Streaming over HTTP (MPEG-DASH), etc.

Figure 16 is a block diagram illustrating a virtualization environment 1600 in which functions implemented by some embodiments may be virtualized. In the present context, virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources. As used herein, virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components. Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 1600 hosted by one or more hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host. Further, in embodiments in which the virtual node does not require radio connectivity (e.g., a core network node or host), then the node may be entirely virtualized.

Applications 1602 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 1600 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.

Hardware 1604 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth. Software may be executed by the processing circuitry to instantiate one or more virtualization layers 1606 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 1608a and 1608b (one or more of which may be generally referred to as VMs 1608), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein. The virtualization layer 1606 may present a virtual operating platform that appears like networking hardware to the VMs 1608.

The VMs 1608 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 1606. Different embodiments of the instance of a virtual appliance 1602 may be implemented on one or more of VMs 1608, and the implementations may be made in different ways. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.

In the context of NFV, a VM 1608 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of the VMs 1608, and that part of hardware 1604 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms separate virtual network elements. Still in the context of NFV, a virtual network function is responsible for handling specific network functions that run in one or more VMs 1608 on top of the hardware 1604 and corresponds to the application 1602.

Hardware 1604 may be implemented in a standalone network node with generic or specific components. Hardware 1604 may implement some functions via virtualization. Alternatively, hardware 1604 may be part of a larger cluster of hardware (e.g. such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 1610, which, among others, oversees lifecycle management of applications 1602. In some embodiments, hardware 1604 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signaling can be provided with the use of a control system 1612 which may alternatively be used for communication between hardware nodes and radio units.

Although the computing devices described herein (e.g., UEs, network nodes, hosts) may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Moreover, while components are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components. For example, a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface. In another example, non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.

In certain embodiments, some or all of the functionality described herein may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium. In alternative embodiments, some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a non-transitory computer-readable storage medium or not, the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.

Notably, modifications and other embodiments of the disclosed invention(s) will come to mind to one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the invention(s) is/are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of this disclosure. Although specific terms may be employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.