

Title:
METHOD AND SYSTEM FOR LATENCY-AWARE EMBEDDING OF A VIRTUAL NETWORK ONTO A SUBSTRATE OPTICAL NETWORK
Document Type and Number:
WIPO Patent Application WO/2021/068057
Kind Code:
A1
Abstract:
The disclosed systems, structures, and methods are directed to a method and a system for embedding a virtual network onto a substrate optical network, comprising embedding the plurality of virtual nodes onto the plurality of substrate nodes in accordance with the plurality of location constraints, computing end-to-end latency associated with a plurality of substrate paths connecting a source substrate node and a destination substrate node, wherein the plurality of substrate paths contain the plurality of substrate links and the plurality of substrate nodes, and embedding a virtual link connecting a source virtual node and a destination virtual node onto the one of the plurality of substrate paths connecting the source substrate node and the destination substrate node, wherein the end-to-end latency associated with the one of the plurality of substrate paths is less than or equal to a maximum allowable latency for the virtual link.

Inventors:
SHAHRIAR NASHID (CA)
CHOWDHURY SHIHABUR RAHMAN (CA)
BOUTABA RAOUF (CA)
MITRA JEEBAK (CA)
HEMMATI MAHDI (CA)
Application Number:
PCT/CA2020/051180
Publication Date:
April 15, 2021
Filing Date:
August 28, 2020
Assignee:
HUAWEI TECH CANADA CO LTD (CA)
UNIV WATERLOO (CA)
International Classes:
H04L29/02; H04B10/27; H04L12/24; H04L12/911
Foreign References:
US8639794B22014-01-28
US8635616B22014-01-21
Attorney, Agent or Firm:
SAWYER, Francois (CA)
Claims:
WHAT IS CLAIMED IS:

1. A method for latency-aware embedding of a virtual network onto a substrate optical network comprising a plurality of substrate nodes and a plurality of substrate links connecting the substrate nodes, the method comprising: receiving a request for a virtual network, the request comprising a virtual network topology and virtual network constraints associated with the virtual network, the virtual network to be embedded onto the substrate optical network, the virtual network topology comprising: a plurality of virtual nodes; a plurality of virtual links connecting the virtual nodes; and a demand bandwidth associated with each of the plurality of virtual links; the virtual network constraints comprising: a location constraint associated with at least one of the plurality of virtual nodes, providing an indication of a substrate node onto which the at least one of the plurality of virtual nodes can be embedded; a latency constraint set associated with at least one of the plurality of virtual links; generating a plan for embedding the virtual network onto the substrate optical network comprising: a mapping of the plurality of virtual nodes onto the plurality of substrate nodes in accordance with the plurality of location constraints; a mapping of virtual links between virtual nodes to a plurality of substrate paths connecting substrate nodes in accordance with latency constraints such that a virtual link spanning a plurality of substrate paths has a latency across the corresponding substrate paths less than or equal to a maximum allowable latency defined by the latency constraint for the virtual link.

2. The method of claim 1, wherein the latency associated with a virtual link spanning a plurality of substrate paths is defined as:

Where:

Ln is a delay introduced at the source substrate node and the destination substrate node; len(p) is a physical length of the one of the plurality of substrate paths p;

Lprop is a propagation delay; namp is a total number of amplifiers associated with the one of the plurality of substrate paths p;

Lamp is a delay introduced by one of the amplifiers associated with the one of the plurality of substrate paths p;

|p| is a total number of the plurality of substrate links associated with the one of the plurality of substrate paths p; and

Lroadm is a delay introduced by a reconfigurable optical add-drop multiplexer (ROADM) present along the one of the plurality of substrate paths p.

3. The method of claim 2, wherein the latency constraint associated with the virtual link further comprises a differential delay requirement given by:

Where: is a set of individual virtual links; is a set of substrate paths used for embedding splits of the plurality of virtual links; and DDmax is a maximum allowable differential delay.

4. The method of any one of the claims 2 or 3, wherein generating the plan further comprises using an integer linear programming (ILP) formulation.

5. The method of claim 4, wherein an objective function associated with the ILP formulation is given by:

Where: is a set of k shortest substrate paths connecting the source substrate node and the destination substrate node for each virtual link; is a set of admissible transmission configurations t; q is a maximum number of allowable splits in the one of the k shortest substrate paths;

S is a set of equal-width frequency slots s;

ε1 is selected in a manner that the second term in the objective function is effective only when multiple solutions have the same value for the first part;

ε2 is selected in a manner that the third term in the objective function is effective only when multiple solutions have the same value for the first part and the second part; is a relationship between the one of the k shortest substrate paths, frequency slots s associated with the plurality of substrate links between the one of the k shortest substrate paths, one of the virtual links e, one of the admissible transmission configurations t, and one of the allowable splits; is a relationship between the one of the k shortest substrate paths, one of the virtual links e, one of the admissible transmission configurations t, and one of the allowable splits.

6. The method of claim 5, wherein is given by:

7. The method of any one of the claims 5 to 6, wherein is given by:

8. The method of any one of the claims 5 to 7, wherein each of the admissible transmission configurations t comprises a specific baud-rate b, a modulation format m, and a forward error correction code (FEC) overhead f.

9. The method of any one of the claims 2 to 8, wherein the substrate optical network is an Elastic Optical Network.

10. The method of any one of the claims 2 to 9, wherein generating the plan further comprises using a heuristic approach.

11. The method of claim 10, wherein the heuristic approach further comprises: estimating an upper bound of an end-to-end latency budget in accordance with the set of latency constraints associated with the plurality of virtual links; based on the upper bound of the latency budget, generating an allowed set of substrate paths for embedding the plurality of virtual links; selecting a most constrained virtual link from a set of unmapped virtual links and the plurality of virtual links associated with the allowed set of substrate paths, wherein the most constrained virtual link is the most constrained in terms of end-to-end path latency as well as in terms of availability of frequency slots; computing an optimal solution for the one of the plurality of substrate paths connecting the source substrate node and the destination substrate node based on the selected most constrained virtual link, the allowed set of substrate paths, and a set of admissible transmission configurations t; and allocating frequency slots to the plurality of substrate links associated with the one of the plurality of substrate paths.

12. The method of claim 11, wherein allocating frequency slots comprises allocating the same contiguous set of frequency slots to all of the plurality of substrate links connecting the source substrate node and the destination substrate node.

13. The method of any one of the claims 11 or 12, wherein selecting the most constrained virtual link further comprises ordering the set of unmapped virtual links and virtual links associated with the allowed set of substrate paths in accordance with an associated number of usable frequency slots and end-to-end latency constraints less than or equal to a maximum allowable latency for the virtual link.

14. The method of any one of the claims 11 to 13, wherein computing the optimal solution is based on maximum allowable end-to-end latency for the virtual link and at least one of a minimum frequency slot requirement, load balancing, and minimum blocking probability.

15. A system for latency-aware embedding of a virtual network onto a substrate optical network comprising a plurality of substrate nodes and a plurality of substrate links connecting the substrate nodes, the system comprising: a processing unit for executing instructions; and a memory unit for storing instructions, which when executed by the processing unit configure the system to perform steps for: receiving a virtual network topology and virtual network constraints, associated with the virtual network, for embedding onto the substrate optical network, the virtual network topology comprising: a plurality of virtual nodes; a plurality of virtual links connecting the virtual nodes; and a demand bandwidth associated with each of the plurality of virtual links; a latency constraint set associated with the plurality of virtual links connecting the virtual nodes; the virtual network constraints comprising: a plurality of location constraints associated with the plurality of virtual nodes, each location constraint providing an indication of at least one of the plurality of substrate nodes onto which one of the plurality of virtual nodes can be embedded; embedding the virtual network onto the substrate optical network comprising: embedding the plurality of virtual nodes onto the plurality of substrate nodes in accordance with the plurality of location constraints; computing end-to-end latency associated with a plurality of substrate paths connecting a source substrate node and a destination substrate node, wherein the plurality of substrate paths contain the plurality of substrate links and the plurality of substrate nodes; and embedding a virtual link connecting a source virtual node and a destination virtual node onto the one of the plurality of substrate paths connecting the source substrate node and the destination substrate node, wherein the end-to-end latency associated with the one of the plurality of substrate paths is less than or equal to a maximum allowable latency for the virtual link.

16. The system of claim 15, wherein the end-to-end latency associated with the one of the plurality of substrate paths is defined as:

Where: Ln is a delay introduced at the source substrate node and the destination substrate node; len(p) is a physical length of the one of the plurality of substrate paths p;

Lprop is a propagation delay; namp is a total number of amplifiers associated with the one of the plurality of substrate paths p;

Lamp is a delay introduced by one of the amplifiers associated with the one of the plurality of substrate paths p;

|p| is a total number of the plurality of substrate links associated with the one of the plurality of substrate paths p; and

Lroadm is a delay introduced by a reconfigurable optical add-drop multiplexer (ROADM) present along the one of the plurality of substrate paths p.

17. The system of claim 16, further comprising determining the embedding of the virtual network using an integer linear programming (ILP) formulation.

18. The system of claim 17, wherein an objective function associated with the ILP formulation is given by:

Where: is a set of k shortest substrate paths connecting the source substrate node and the destination substrate node for each virtual link; is a set of admissible transmission configurations t; q is a maximum number of allowable splits in the one of the k shortest substrate paths;

S is a set of equal-width frequency slots s;

ε1 is selected in a manner that the second term in the objective function is effective only when multiple solutions have the same value for the first part; ε2 is selected in a manner that the third term in the objective function is effective only when multiple solutions have the same value for the first part and the second part; is a relationship between the one of the k shortest substrate paths, frequency slots s associated with the plurality of substrate links between the one of the k shortest substrate paths, one of the virtual links, one of the admissible transmission configurations t, and one of the allowable splits; is a relationship between the one of the k shortest substrate paths, one of the virtual links, one of the admissible transmission configurations t, and one of the allowable splits.

19. The system of any one of the claims 16 to 18, wherein the substrate optical network is an Elastic Optical Network.

20. The system of any one of the claims 16 to 19, further comprising determining the embedding of the virtual network using a heuristic approach.

21. The system of claim 20, wherein the heuristic approach further comprises: estimating an upper bound of an end-to-end latency budget in accordance with the set of latency constraints associated with the plurality of virtual links; based on the upper bound of the latency budget, generating an allowed set of substrate paths for embedding the plurality of virtual links; selecting a most constrained virtual link from a set of unmapped virtual links and the plurality of virtual links associated with the allowed set of substrate paths, wherein the most constrained virtual link is the most constrained in terms of end-to-end path latency as well as in terms of availability of frequency slots; computing an optimal solution for the one of the plurality of substrate paths connecting the source substrate node and the destination substrate node based on the selected most constrained virtual link, the allowed set of substrate paths, and a set of admissible transmission configurations t; and allocating frequency slots to the plurality of substrate links associated with the one of the plurality of substrate paths.

Description:
METHOD AND SYSTEM FOR LATENCY-AWARE EMBEDDING OF A VIRTUAL NETWORK ONTO A SUBSTRATE OPTICAL NETWORK

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to previously-filed U.S. Non-Provisional Application No. 16/600,103 entitled “METHOD AND SYSTEM FOR LATENCY-AWARE EMBEDDING OF A VIRTUAL NETWORK ONTO A SUBSTRATE OPTICAL NETWORK,” filed on Oct. 11, 2019, the contents of which are incorporated herein by reference in jurisdictions allowing such incorporation.

FIELD OF THE INVENTION

[0002] The present disclosure generally relates to the field of optical communication networks and, in particular, to methods and systems for latency-aware embedding of a virtual network onto a substrate optical network.

BACKGROUND

[0003] Typical optical networks based on Dense Wavelength Division Multiplexing (DWDM) are capable of providing bandwidth up to 4.4 THz. Such networks generally employ fixed-grid spectrums and fixed-rate transceivers and hence are less capable of handling heterogeneous and variable traffic demands. In a conventional fixed-grid spectrum system, each block of spectrum is either 50 GHz or 100 GHz. If channels larger than this are required, the need can be served by bringing up multiple 50 or 100 GHz channels (also referred to as lighting up the channels) and allowing a higher network layer to manage them as a logical link. To address demands for higher bandwidth connections without inefficiently allocating spectrum, a more flexible spectrum allocation is provided by so-called flex-grid spectrums for optical networks. Flex-grid based optical networks provide an effective way of increasing network-level operational efficiency by allowing the possibility of per-link bandwidth allocation in granular frequency slots as small as 12.5 GHz rather than the more monolithic 50 or 100 GHz per link as in the case of the previous International Telecommunication Union (ITU) grid. The overall capacity of these allocated links is a function of the transceivers employed at the terminal ends of the link, allowing for a single channel to be allocated sufficient spectrum, while minimizing the unused allocated resources.
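As a rough illustration of the spectrum-efficiency difference described above, the following minimal sketch compares rounding a channel demand up to fixed 50 GHz channels versus 12.5 GHz flex-grid slots; the 62.5 GHz demand value and the helper names are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch (illustrative only): spectrum allocated on a fixed 50 GHz
# grid versus a flex-grid with 12.5 GHz frequency slots.
import math

FIXED_CHANNEL_GHZ = 50.0   # conventional ITU fixed-grid channel width
FLEX_SLOT_GHZ = 12.5       # flex-grid frequency-slot width

def fixed_grid_allocation(required_ghz: float) -> float:
    """Spectrum allocated when the demand is rounded up to 50 GHz channels."""
    return math.ceil(required_ghz / FIXED_CHANNEL_GHZ) * FIXED_CHANNEL_GHZ

def flex_grid_allocation(required_ghz: float) -> float:
    """Spectrum allocated when the demand is rounded up to 12.5 GHz slots."""
    return math.ceil(required_ghz / FLEX_SLOT_GHZ) * FLEX_SLOT_GHZ

demand_ghz = 62.5  # assumed channel demand for illustration
print(fixed_grid_allocation(demand_ghz))  # 100.0 (two fixed-grid channels)
print(flex_grid_allocation(demand_ghz))   # 62.5 (five 12.5 GHz slots)
```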

[0004] Elastic Optical Networks (EONs) are a result of flex-grid optical networks being deployed with adaptive modulation and coding capabilities of flex-rate optical transceivers. EONs can be used to provide dynamic resource allocation in mesh-like optical networks to address dynamic, diverse and bursty traffic demands. These traffic demands can be better addressed using EONs because of the ability to increase or decrease the spectrum allocation in accordance with customer needs. Along with advances in coherent-transmission technology, EONs allow the tuning of transmission parameters such as baud rate, modulation format and forward error correction (FEC) overhead.

[0005] Low-latency communication is becoming a fundamental quality-of-service (QoS) requirement for many current and emerging applications such as intelligent transport, industry automation, immersive media experience through virtual and augmented reality, online multiplayer gaming, high-definition streaming and video conferencing, and high-frequency trading. While much focus has been directed to the radio connection to the terminal devices, the use of a virtualized core network meeting the stringent latency requirements is also required. Those skilled in the art will appreciate that a virtualized network makes use of virtualized network nodes / functions instantiated upon physical computing and storage resources, and connected together using virtualized links. The virtualized links between two functions will need to be properly created to serve the required QoS.

[0006] Different QoS requirements for different types of traffic can be satisfied by creating different virtual connections (or different virtual network slices) with different latency parameters. As fifth generation (5G) networks are more widely deployed, they are expected to support a wide range of quality-of-service (QoS) requirements, including ultra-low latency communications. Each different QoS can be served by the creation of a different link in the underlying substrate network (SN). Each of these links may be served using different frequency slots associated with the optical spectrum, each tailored to a different QoS (e.g., low latency, high bandwidth, high reliability), on top of a substrate optical network (SN).

[0007] Network slices can be provided in the form of virtual networks (VNs), each composed of its own set of virtual nodes and virtual links embedded in (or instantiated upon the resources of) the underlying optical SNs. Each SN can comprise optical nodes and substrate optical links, and may be owned by a different network operator than the next SN. Each network slice can be managed and operated by service providers, forming a two-tier architecture. According to the ACTN framework (IETF RFC 8453), these VNs can be used to host virtual network services (VNSs), requested by enterprise customers.

[0008] A challenge in instantiating VNs in support of applications requiring ultra-low latency communications is to provide a mechanism for efficient mapping of VN nodes and links on the optical SN, also known as Virtual Network Embedding (VNE), which minimizes SN resource consumption while satisfying the requested end-to-end latency constraints. The problem of VNE accounting for latency constraints, although crucial to the efficient functioning of next-generation networks, has received very little attention heretofore.

[0009] In some previous approaches to account for latency, the requirements have been modeled as a constraint applied on the virtual links of the VN (i.e., each virtual link is mapped while satisfying a given latency target for that particular link only). However, applications typically require that a given latency target be met along an entire path between application end nodes. Moreover, per-virtual-link latency constraints also restrict the substrate paths that can be used for mapping individual virtual links.

[0010] Consequently, given the strict latency requirements of various emerging user applications, including those enabled by 5G, there is a pressing need for solutions and algorithms for resource allocation and virtualization of optical transport networks that are capable of ensuring low-latency operation based on the Service Level Agreement (SLA).

SUMMARY

[0011] An object of the present disclosure is to provide a method for latency-aware embedding of a virtual network onto a substrate optical network comprising a plurality of substrate nodes and a plurality of substrate links connecting the substrate nodes, the method comprising receiving a virtual network topology and virtual network constraints associated with the virtual network, for embedding onto the substrate optical network, the virtual network topology comprising a plurality of virtual nodes, a plurality of virtual links connecting the virtual nodes, a demand bandwidth associated with each of the plurality of virtual links, the virtual network constraints comprising a plurality of location constraints associated with the plurality of virtual nodes, each location constraint providing an indication of at least one of the plurality of substrate nodes onto which one of the plurality of virtual nodes can be embedded, a latency constraint set associated with the plurality of virtual links connecting the virtual nodes, embedding the virtual network onto the substrate optical network comprising embedding the plurality of virtual nodes onto the plurality of substrate nodes in accordance with the plurality of location constraints, computing end-to-end latency associated with a plurality of substrate paths connecting a source substrate node and a destination substrate node, wherein the plurality of substrate paths contain the plurality of substrate links and the plurality of substrate nodes, embedding a virtual link connecting a source virtual node and a destination virtual node onto the one of the plurality of substrate paths connecting the source substrate node and the destination substrate node, wherein the end-to-end latency associated with the one of the plurality of substrate paths is less than or equal to a maximum allowable latency for the virtual link.

[0012] In accordance with other aspects of the present disclosure, the method, wherein the end-to-end latency associated with the one of the plurality of substrate paths is defined as:

Where Ln is a delay introduced at the source substrate node and the destination substrate node, len(p) is a physical length of the one of the plurality of substrate paths p, Lprop is a propagation delay, namp is a total number of amplifiers associated with the one of the plurality of substrate paths p, Lamp is a delay introduced by one of the amplifiers associated with the one of the plurality of substrate paths p, |p| is a total number of the plurality of substrate links associated with the one of the plurality of substrate paths p, and Lroadm is a delay introduced by a reconfigurable optical add-drop multiplexer (ROADM) present along the one of the plurality of substrate paths p.

[0013] In accordance with other aspects of the present disclosure, the method, wherein the end-to-end latency further comprises a differential delay requirement given by: where is a set of individual virtual links, is a set of substrate paths used for embedding splits of the plurality of virtual links, and DDmax is a maximum allowable differential delay.

[0014] In accordance with other aspects of the present disclosure, the method, further comprising determining the embedding of the virtual network using an integer linear programming (ILP) formulation.

[0015] In accordance with other aspects of the present disclosure, the method, wherein an objective function associated with the ILP formulation is given by:

Where is a set of k shortest substrate paths connecting the source substrate node and the destination substrate node for each virtual link, is a set of admissible transmission configurations t, q is a maximum number of allowable splits in the one of the k shortest substrate paths, S is a set of equal-width frequency slots s, ε1 is selected in a manner that the second term in the objective function is effective only when multiple solutions have the same value for the first part, ε2 is selected in a manner that the third term in the objective function is effective only when multiple solutions have the same value for the first part and the second part, is a relationship between the one of the k shortest substrate paths, frequency slots s associated with the plurality of substrate links between the one of the k shortest substrate paths, one of the virtual links, one of the admissible transmission configurations t, and one of the allowable splits, and is a relationship between the one of the k shortest substrate paths, one of the virtual links, one of the admissible transmission configurations t, and one of the allowable splits.

[0016] In accordance with other aspects of the present disclosure, the method, wherein is given by:

[0017] In accordance with other aspects of the present disclosure, the method, wherein is given by:

[0018] In accordance with other aspects of the present disclosure, the method, wherein each of the admissible transmission configurations t comprises a specific baud-rate b, a modulation format m, and a forward error correction code (FEC) overhead f.

[0019] In accordance with other aspects of the present disclosure, the method, wherein the substrate optical network is an Elastic Optical Network.

[0020] In accordance with other aspects of the present disclosure, the method, further comprising determining the embedding of the virtual network using a heuristic approach, wherein the heuristic approach further comprises estimating an upper bound of an end-to-end latency budget in accordance with the set of latency constraints associated with the plurality of virtual links, and, based on the upper bound of the latency budget, generating an allowed set of substrate paths for embedding the plurality of virtual links,

[0021] In accordance with other aspects of the present disclosure, the method, selecting a most constrained virtual link from a set of unmapped virtual links and the plurality of virtual links associated with the allowed set of substrate paths, wherein the most constrained virtual link is the most constrained in terms of end-to-end path latency as well as in terms of availability of frequency slots, computing an optimal solution for the one of the plurality of substrate paths connecting the source substrate node and the destination substrate node based on the selected most constrained virtual link, the allowed set of substrate paths, and a set of admissible transmission configurations t, and allocating frequency slots to the plurality of substrate links associated with the one of the plurality of substrate paths.

[0022] In accordance with other aspects of the present disclosure, the method, wherein allocating frequency slots comprises allocating the same contiguous set of frequency slots to all of the plurality of substrate links connecting the source substrate node and the destination substrate node.

[0023] In accordance with other aspects of the present disclosure, the method, wherein selecting the most constrained virtual link further comprises ordering the set of unmapped virtual links and virtual links associated with the allowed set of substrate paths in accordance with an associated number of usable frequency slots and end-to-end latency constraints less than or equal to a maximum allowable latency for the virtual link.

[0024] In accordance with other aspects of the present disclosure, the method, wherein computing the optimal solution is based on maximum allowable end-to-end latency for the virtual link and at least one of a minimum frequency slot requirement, load balancing, and minimum blocking probability.

[0025] In accordance with other aspects of the present disclosure, there is provided a system for latency-aware embedding of a virtual network onto a substrate optical network comprising a plurality of substrate nodes and a plurality of substrate links connecting the substrate nodes, the system comprising a processing unit for executing instructions, and a memory unit for storing instructions, which when executed by the processing unit configure the system to perform steps for receiving a virtual network topology and virtual network constraints associated with the virtual network, for embedding onto the substrate optical network, the virtual network topology comprising a plurality of virtual nodes, a plurality of virtual links connecting the virtual nodes, a demand bandwidth associated with each of the plurality of virtual links, the virtual network constraints comprising a plurality of location constraints associated with the plurality of virtual nodes, each location constraint providing an indication of at least one of the plurality of substrate nodes onto which one of the plurality of virtual nodes can be embedded, a latency constraint set associated with the plurality of virtual links connecting the virtual nodes, embedding the virtual network onto the substrate optical network comprising embedding the plurality of virtual nodes onto the plurality of substrate nodes in accordance with the plurality of location constraints, computing end-to-end latency associated with a plurality of substrate paths connecting a source substrate node and a destination substrate node, wherein the plurality of substrate paths contain the plurality of substrate links and the plurality of substrate nodes, embedding a virtual link connecting a source virtual node and a destination virtual node onto the one of the plurality of substrate paths connecting the source substrate node and the destination substrate node, wherein the end-to-end latency associated with the one of the plurality of substrate paths is less than or equal to a maximum allowable latency for the virtual link.

[0026] In accordance with other aspects of the present disclosure, the system, wherein the end-to-end latency associated with the one of the plurality of substrate paths is defined as:

[0027] Where Ln is a delay introduced at the source substrate node and the destination substrate node, len(p) is a physical length of the one of the plurality of substrate paths p, Lprop is a propagation delay, namp is a total number of amplifiers associated with the one of the plurality of substrate paths p, Lamp is a delay introduced by one of the amplifiers associated with the one of the plurality of substrate paths p, |p| is a total number of the plurality of substrate links associated with the one of the plurality of substrate paths p, and Lroadm is a delay introduced by a reconfigurable optical add-drop multiplexer (ROADM) present along the one of the plurality of substrate paths p.

[0028] In accordance with other aspects of the present disclosure, the system, further comprising determining the embedding of the virtual network using an integer linear programming (ILP) formulation.

[0029] In accordance with other aspects of the present disclosure, the system, wherein an objective function associated with the ILP formulation is given by:

[0030] Where is a set of k shortest substrate paths connecting the source substrate node and the destination substrate node for each virtual link, is a set of admissible transmission configurations t, q is a maximum number of allowable splits in the one of the k shortest substrate paths, S is a set of equal-width frequency slots s, ε1 is selected in a manner that the second term in the objective function is effective only when multiple solutions have the same value for the first part, ε2 is selected in a manner that the third term in the objective function is effective only when multiple solutions have the same value for the first part and the second part, is a relationship between the one of the k shortest substrate paths, frequency slots s associated with the plurality of substrate links between the one of the k shortest substrate paths, one of the virtual links, one of the admissible transmission configurations t, and one of the allowable splits, and is a relationship between the one of the k shortest substrate paths, one of the virtual links, one of the admissible transmission configurations t, and one of the allowable splits.

[0031] In accordance with other aspects of the present disclosure, the system, wherein the substrate optical network is an Elastic Optical Network.

[0032] In accordance with other aspects of the present disclosure, the system, further comprising determining the embedding of the virtual network using a heuristic approach.

[0033] In accordance with other aspects of the present disclosure, the system, selecting a most constrained virtual link from a set of unmapped virtual links and the plurality of virtual links associated with the allowed set of substrate paths, wherein the most constrained virtual link is the most constrained in terms of end-to-end path latency as well as in terms of availability of frequency slots, computing an optimal solution for the one of the plurality of substrate paths connecting the source substrate node and the destination substrate node based on the selected most constrained virtual link, the allowed set of substrate paths, and a set of admissible transmission configurations t, and allocating frequency slots to the plurality of substrate links associated with the one of the plurality of substrate paths.

BRIEF DESCRIPTION OF THE FIGURES

[0034] The features and advantages of the present disclosure will become apparent from the following detailed description, taken in combination with the appended drawings, in which:

[0035] FIG. 1 depicts a high-level functional block diagram of a system configured to implement latency-aware virtual network embedding over elastic optical networks, in accordance with various embodiments of present disclosure;

[0036] FIG. 2 depicts a high-level block diagram of representative components for the system configured to implement latency-aware virtual network embedding, in accordance with various embodiments of the present disclosure;

[0037] FIG. 3 illustrates a representative example of a physical/substrate elastic optical network (SN), in accordance with various aspects of present disclosure;

[0038] FIG. 4 illustrates a representative example of a virtual network, in accordance with various aspects of present disclosure;

[0039] FIG. 5 illustrates a high-level functional block diagram of a system, configured to compute path based latency, in accordance with various aspects of present disclosure;

[0040] FIG. 6 illustrates a high-level functional block diagram of a latency-aware virtual network embedding system, in accordance with various aspects of present disclosure;

[0041] FIG. 7 illustrates a high-level functional block diagram of a heuristic solution module, in accordance with various aspects of present disclosure;

[0042] FIG. 8 illustrates a representative example of the SN embedded with the virtual network, in accordance with various aspects of present disclosure;

[0043] FIGs. 9, 10, and 11 illustrate representative examples of functional flow diagrams of a heuristic process directed to a method for virtual network embedding implemented in the latency-aware virtual network embedding system, in accordance with various embodiments of present disclosure;

[0044] FIGs. 12, 13, and 14 illustrate various representative scenarios comparing the performance of a fixed-grid network and an elastic optical network implementing the same virtual network, in accordance with various embodiments of present disclosure; and

[0045] FIG. 15 depicts a functional flow diagram of a process directed to a method implemented in a system, in accordance with various embodiments of the present disclosure.

[0046] It is to be understood that throughout the appended drawings and corresponding descriptions, like features are identified by like reference characters. Furthermore, it is also to be understood that the drawings and ensuing descriptions are intended for illustrative purposes only and that such disclosures are not intended to limit the scope of the claims.

DETAILED DESCRIPTION

[0047] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the described embodiments appertain.

[0048] FIG. 1 is a high-level functional block diagram of a system 100 that can be configured to implement various embodiments of latency-aware virtual network embedding. As shown, a latency-aware virtual network embedding system 500 may be configured to receive a virtual network request associated with a virtual network 300 to be embedded on a physical/substrate elastic optical network 200. In accordance with a network topology for the virtual network 300 defined in the received request, the latency requirements for the links within the VN 300, and the resources available in the physical/substrate elastic optical network 200, the latency-aware virtual network embedding system 500 controls or instructs the virtual network embedding (VNE). Latency-aware virtual network embedding system 500 can direct instantiation of virtual nodes and links within physical/substrate elastic optical network 200 to create VNE 600. It is to be noted that the physical/substrate elastic optical network 200 and the physical/substrate elastic optical network with VNE 600 may be the same network; however, the latter is merely illustrated with the VNE. It will be appreciated that in certain embodiments, the latency-aware virtual network embedding system 500 may be implemented on the system 100 as software constructs with programming instructions stored in associated memories and implemented by the associated processors.

[0049] The system 100 may comprise one or more computing devices, represented as a single server 100. Although represented as a single server, the system 100 may be implemented as one or more real or virtual servers. As shown the system 100 may be configured to receive a virtual network 300 to be implemented on a physical/substrate elastic optical network 200.

[0050] FIG. 2 is a high-level block diagram of representative components for the system 100 configured to implement latency-aware virtual network embedding, in accordance with various embodiments of the present disclosure. It should be appreciated that FIG. 2 provides only an illustration of one implementation of the system 100 and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environment can be done to implement the system 100 without departing from the principles presented herein.

[0051] As shown, the illustrative system 100 employs one or more processors 102, one or more computer-readable random access memories (RAMs) 104, one or more computer-readable read only memories (ROMs) 106, one or more computer readable storage media 108, device drivers 114, a read/write (R/W) interface 116, and a network interface 118, all interconnected over a communications fabric 120. The communications fabric 120 may be implemented with any architecture designed for passing data and control information between processors (such as microprocessors, communications and network processors, etc.), memory, peripheral devices, and any other hardware components within a system.

[0052] The one or more operating system(s) 110 and the one or more application program(s) 112 are stored on one or more of the computer readable storage media 108 for execution by the one or more of the processors 102 via the one or more of the respective RAMs 104 (which typically include a cache memory). In the illustrated embodiment, each of the computer readable storage media 108 may be one or more of a magnetic disk storage device of an internal hard drive, CD-ROM, DVD, memory stick, magnetic tape, magnetic disk, optical disk, a semiconductor storage device such as RAM, ROM, EPROM, flash memory or any other computer-readable tangible storage device that can store a computer program and digital information.

[0053] The system 100 may also include a read/write (R/W) interface 116 to read from and write to one or more portable computer readable storage media 126. Application programs 112 on said devices may be stored on one or more of the portable computer readable storage media 126, read via the respective R/W interface 116 and loaded into the respective computer readable storage media 108.

[0054] It will be appreciated that in certain embodiments the application programs 112 stored on one or more of the portable computer readable storage media 126 may configure the system 100 to provide latency-aware VNE functionality. Broadly, the VNE functionality may receive a virtual network 300 (discussed below) and determine various possibilities to map the virtual network 300 to a physical/substrate elastic optical network (SN) 200 (discussed below). Once an optimal solution to map the virtual network 300 is determined, the mapping may be embedded on the SN 200 to implement the virtual network 300.

[0055] The application programs 112 on the system 100 may be downloaded to the system 100 from an external computer or external storage device via a communication network (for example, the Internet, a local area network or other wide area network or wireless network) and network interface 118. From the network interface 118, the programs may be loaded onto computer readable storage media 108.

[0056] The system 100 may also include any one or more of an optional display screen 122, a keyboard or keypad 124, and a computer mouse or touchpad 128. The device drivers 114 may interface to the display screen 122 for imaging, to the keyboard or keypad 124, to the computer mouse or touchpad 128, and to the display screen 122 (in the case of a touch screen display) for pressure sensing of alphanumeric character entry and user selections. The device drivers 114, R/W interface 116 and network interface 118 may comprise hardware and software (stored on computer readable storage media 108 and ROM 106).

[0057] The programs described herein are identified based upon the application for which they are implemented in a particular embodiment of the present disclosure. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience, and thus the disclosure should not be limited to use solely in any specific application identified and implied by such nomenclature.

[0058] It will be appreciated that the system 100 may be a server, a desktop computer, a laptop computer, a tablet, a smartphone, a personal digital assistant or any device that may be configured to implement the present technology, as should be understood by a person skilled in the art.

[0059]

[0060] FIG. 3 illustrates a representative example of the SN 200, in accordance with various aspects of present disclosure. It is to be noted that in certain embodiments, the SN 200 may be an undirected graph G = (V, E), where V is the set of substrate optical nodes (SNodes) 202-1, 202-2, ..., 202-N and E is the set of substrate optical links (SLinks) 204-1, 204-2, ..., 204-N. Without loss of generality, it has been assumed that the set of SNodes 202-1, 202-2, ..., 202-N are colorless, directionless, and contentionless. Further, the set of SLinks 204-1, 204-2, ..., 204-N may be bidirectional, i.e., adjacent SNodes (e.g. SNodes 202-1 and 202-2) may be connected by one optical fiber in each direction (not shown).

[0061] SN 200 is an elastic optical network (EON) and, as such, the usable optical spectrum on each SLink e = (u, v) ∈ E may be divided into equal-width frequency slots represented by the set S and enumerated as 1, 2, ..., |S|, where u and v are two SNodes (e.g. SNodes 202-1 and 202-2) connected by an SLink (e.g. SLink 204-1). Each of the SLinks has an associated bandwidth capacity. The bandwidth capacity of a given SLink may differ from the bandwidth capacity of another SLink. In addition to the bandwidth capacity, each link also has an associated cost for providing a unit of bandwidth.

[0062] Further, P represents a set of all possible paths in the undirected graph G, and a subset of P represents the set of k-shortest paths between SNodes u, v ∈ V. The two endpoints of a lightpath p ∈ P are typically referred to as the source and the destination. The source is denoted as src(p) and the destination as dst(p). The number of SLinks in the lightpath is represented as |p|. The physical length of the lightpath p in kilometers is denoted as len(p). For example, in FIG. 3, SNodes 202-1 and 202-3 may be treated as endpoints of the lightpath p ∈ P with two SLinks 204-4 and 204-7 in the lightpath p. Also, a binary variable δpe may denote the association between a lightpath p ∈ P and any link e ∈ E.
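As a minimal sketch of how such a k-shortest-path set might be pre-computed (the toy graph, edge lengths, and function names below are assumptions for illustration, not part of the disclosure), one could use the simple-path generator of networkx:

```python
# Minimal sketch (illustrative only): pre-computing up to k shortest loop-free
# substrate paths between a pair of SNodes, ordered by physical length.
from itertools import islice
import networkx as nx

def k_shortest_paths(graph: nx.Graph, source, target, k: int):
    """Return up to k loop-free paths from source to target, shortest first."""
    generator = nx.shortest_simple_paths(graph, source, target, weight="length")
    return list(islice(generator, k))

# Toy substrate graph; node names and link lengths are assumed values.
sn = nx.Graph()
sn.add_edge("202-1", "202-2", length=80)
sn.add_edge("202-2", "202-3", length=120)
sn.add_edge("202-1", "202-3", length=300)

print(k_shortest_paths(sn, "202-1", "202-3", k=2))
# [['202-1', '202-2', '202-3'], ['202-1', '202-3']]
```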

[0063] In certain embodiments, in order to enable data transmissions with different data rates d ∈ D, the system 100 can configure the transmission parameters on the path p with physical length len(p). Such transmission parameters may include any one or more of a baud-rate or symbol rate b, a modulation format m (e.g. QPSK, 16 QAM, 32 QAM etc.), and a forward error correction code (FEC) overhead f, selected from a set of possible values of baud-rates or symbol rates B, modulation formats M, and FEC overheads F. Those skilled in the art will appreciate that as there are a finite number of parameters, and each parameter has a finite number of possible values in a given system, the overall number of possible configurations can be considered to define a space T represented in the above example by a tuple (D, B, M, F). The selected tuple may be represented by t = (d, b, m, f) ∈ T = (D, B, M, F), where each of d, b, m, and f is selected from the possible configuration values of D, B, M, and F respectively. Further, it should be understood that in some embodiments, the data rate d may be a function of the other parameters. For the sake of representation, t(d), t(b), t(m), and t(f) may be used to denote the data-rate, baud-rate, modulation format, and FEC overhead of a configuration t ∈ T.

[0064] In certain embodiments, based on physical layer characteristics, the system 100 may also compute, or otherwise obtain, a reach table R. The reach table can be used to determine, through a lookup operation, the maximum length of the lightpath p (i.e., the reach rt) capable of retaining a satisfactory optical signal to noise ratio (SNR) when configured according to a transmission configuration t ∈ T. Finally, nt denotes the number of frequency slots required to accommodate a transmission configuration t ∈ T, which may depend on the parameters of t. Table 1 depicts a representative example of the reach table R, in accordance with various aspects of present disclosure.
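A minimal sketch of how such a reach-table lookup might be used to select an admissible transmission configuration t = (d, b, m, f) for a lightpath is shown below; the table entries and field names are illustrative assumptions and are not taken from Table 1.

```python
# Minimal sketch (illustrative values only): choosing an admissible
# transmission configuration whose reach covers a lightpath of length len_p.
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    data_rate_gbps: float   # t(d)
    baud_rate_gbaud: float  # t(b)
    modulation: str         # t(m)
    fec_overhead: float     # t(f)
    reach_km: float         # reach rt from the reach table R
    slots: int              # nt, frequency slots required

# Illustrative reach table R; real entries come from physical-layer models.
REACH_TABLE = [
    Config(200, 43, "16QAM", 0.15, 800, 4),
    Config(200, 64, "QPSK", 0.27, 3000, 6),
    Config(100, 32, "QPSK", 0.15, 4000, 3),
]

def admissible_configs(len_p_km: float, demand_gbps: float):
    """Configurations whose reach covers the path and whose data-rate meets
    the demand, ordered by the number of frequency slots they consume."""
    ok = [t for t in REACH_TABLE
          if t.reach_km >= len_p_km and t.data_rate_gbps >= demand_gbps]
    return sorted(ok, key=lambda t: t.slots)

print(admissible_configs(len_p_km=1200, demand_gbps=150))
```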

Table 1

[0065] FIG. 4 illustrates a representative example of a virtual network 300, in accordance with various aspects of present disclosure. It is to be noted that in certain embodiments, the virtual network request may include an undirected graph whose vertices and edges are a set of virtual nodes (VNodes) 302-1, 302-2, and 302-3 and a set of virtual links (VLinks) 304-1, 304-2, and 304-3, respectively.

[0066] As an example, a request for the creation of the virtual network 300 may describe the topology and constraints of the virtual network to be embedded onto the SN 200. As depicted, the virtual network 300 comprises three VNodes 302-1, 302-2, and 302-3 connected by VLinks 304-1, 304-2, and 304-3, each having a different respective bandwidth requirement. For example, the bandwidth requirements associated with the VLinks 304-1, 304-2, and 304-3 are 700 Gbps, 1200 Gbps, and 800 Gbps, respectively. The request for the virtual network 300 may include additional information associated with the location constraint set for each of the VNodes 302-1, 302-2, and 302-3. For example, the location constraint set for VNode 302-1 may be {202-1, 202-2, 202-6}, indicating that VNode 302-1 must be provisioned on one of the SNodes {202-1, 202-2, 202-6}. Similarly, the location constraint set for VNode 302-2 may be {202-2, 202-3, 202-4}, indicating that VNode 302-2 must be provisioned on one of the SNodes {202-2, 202-3, 202-4}, and the location constraint set for VNode 302-3 may be {202-4, 202-5, 202-6}, indicating that VNode 302-3 must be provisioned on one of the SNodes {202-4, 202-5, 202-6}.
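The request of FIG. 4 can be pictured as a small data structure, as in the sketch below; the field names and the VLink endpoint pairing are assumptions for illustration, while the bandwidth values and location constraint sets are the ones given above.

```python
# Minimal sketch of the virtual network request of FIG. 4 as a data structure.
# Field names and VLink endpoints are assumed; bandwidths and location
# constraint sets are taken from the example above.
vn_request = {
    "vnodes": {
        "302-1": {"location_constraints": ["202-1", "202-2", "202-6"]},
        "302-2": {"location_constraints": ["202-2", "202-3", "202-4"]},
        "302-3": {"location_constraints": ["202-4", "202-5", "202-6"]},
    },
    "vlinks": {
        "304-1": {"endpoints": ("302-1", "302-2"), "bandwidth_gbps": 700},
        "304-2": {"endpoints": ("302-2", "302-3"), "bandwidth_gbps": 1200},
        "304-3": {"endpoints": ("302-1", "302-3"), "bandwidth_gbps": 800},
    },
    # A per-VPath latency budget (microseconds); the value is an assumption.
    "latency_constraints": [
        {"vpath": ["304-1", "304-2"], "max_latency_us": 500},
    ],
}
```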

[0067] In certain embodiments, each VNode may be associated with a location constraint set indicating the set of SNodes onto which it can be embedded. The system 100 may set a binary input variable to 1 if u ∈ V is in the location constraint set, or to 0 otherwise. The endpoints of a VLink may be two VNodes (e.g. VNodes 302-1 and 302-3) connected by VLinks (e.g. VLinks 304-1 and 304-2). Each VLink has a respective bandwidth requirement. In certain embodiments, the system 100 may allow the VLinks (e.g. VLinks 304-1 and 304-2) to be mapped to multiple substrate optical paths (SPaths) consisting of a plurality of SLinks (e.g. SLinks 204-1, 204-2, etc.), each with a lower data-rate than the demanded bandwidth. Mapping VLinks (e.g. VLinks 304-1 and 304-2) to multiple SPaths may allow the system 100 to overcome the problem of the reach rt becoming smaller for higher data-rates (as depicted by Table 1), which can act to limit the number of usable SPaths.

[0068] The system 100 may put a limit on the number of VLink splits to a value q ≥ 1. It is understood by a person skilled in the art that such multi-path provisioning may be achieved by already known techniques such as virtual concatenation (VCAT) in Optical Transport Network (OTN) or the bonding capabilities of FlexE.

[0069] FIG. 5 illustrates a high-level functional block diagram of a system 400, configured to compute path based latency, in accordance with various aspects of present disclosure. As shown, the system 400 comprises a virtual network request module 402, a physical/substrate elastic optical network information module 404 and a latency computation module 406. It is to be noted that other elements may be present, but are not illustrated for the purpose of tractability and simplicity.

[0070] The virtual network request module 402 may receive a request for a virtual network (e.g. virtual network 300). As noted above, such a request may contain a virtual network topology and virtual network constraints associated with the virtual network. The virtual network topology may comprise VNodes (e.g. VNodes 302-1, 302-2, and 302-3), VLinks (e.g. VLinks 304-1 and 304-2), a demand bandwidth associated with each of the VLinks, and a latency constraint set. The latency constraints define an acceptable latency for networking paths and not just specific individual links. The virtual network request module 402 may extract information associated with the virtual network (e.g. virtual network 300) and forward the information to the latency computation module 406. Such information includes various location constraints (e.g. the location constraint set) and the demand bandwidth associated with each VLink. The physical/substrate elastic optical network information module 404 provides the latency computation module 406 with information associated with a physical/substrate elastic optical network (e.g. SN 200). It will be understood that the latency computation module 406 may receive the information from module 404 in response to an explicit request for such information, or it may be provided along with the request information from module 402. Such information includes bandwidth capacity per link, length of a link, etc.

[0071] Based on the received information, the latency computation module 406 computes end-to-end latencies associated with various possible SPaths connecting a source virtual node (e.g. VNode 302-1) and a destination virtual node (e.g. VNode 302-3). As an example, consider a loop-free path (e.g. a VPath between VNodes 302-1 and 302-3 including VLinks 304-1 and 304-2) in the virtual network (e.g. virtual network 300). As such, a VPath can have more than one VLink. The end-to-end latency budget for the VPath may be represented using a tuple, where L is the latency budget constraint of the VPath, drawn from the set of all positive real numbers, and the VPath belongs to the set of all paths that require a bounded latency as mandated by a service provider. The tuple indicates a requirement that, after embedding the virtual network (e.g. virtual network 300) onto the physical/substrate elastic optical network (e.g. SN 200), the end-to-end latency of the VPath should not exceed the maximum allowable end-to-end latency L. In this case, the latency of the VPath is the sum of the latencies of its constituent VLinks. All the end-to-end latency constraints of the virtual network (e.g. virtual network 300) request may be represented by a latency constraint set. The latency computation module 406 may supply the latency constraint set received as a part of the virtual network request to a latency-aware virtual network embedding system 500 (discussed below).
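A minimal sketch of the budget check described in paragraph [0071] is given below; the function and variable names, and the numeric values, are illustrative assumptions. The latency of a VPath is taken as the sum of the latencies of its VLinks and compared against its budget L.

```python
# Minimal sketch (illustrative only): checking an end-to-end latency budget
# for a VPath, given per-VLink latencies obtained after embedding.
def vpath_latency_us(vpath, vlink_latency_us):
    """Sum the latencies (in microseconds) of the VLinks making up a VPath."""
    return sum(vlink_latency_us[vlink] for vlink in vpath)

def meets_budget(vpath, vlink_latency_us, budget_us):
    """True if the VPath's end-to-end latency does not exceed its budget L."""
    return vpath_latency_us(vpath, vlink_latency_us) <= budget_us

# Illustrative values: the VPath between VNodes 302-1 and 302-3 of FIG. 4.
latencies = {"304-1": 180.0, "304-2": 240.0}
print(meets_budget(["304-1", "304-2"], latencies, budget_us=500.0))  # True
```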

[0072] In order to compute the end-to-end latency for a path, the latency computation module 406 considers various factors contributing to latency on that path. Such factors include latency contributors at an SNode (e.g. SNode 202-1), latency contributors on the lightpath p, and differential delay requirements when the traffic is split between two substrate paths between the source and destination node.

[0073] In certain embodiments, a SNode (e.g. SNode 202-1) may be a multi-layer flexible node architecture that consists of various components, such as a set of Optical Transport Network/Flexible Ethernet (OTN/FlexE) line-cards (with or without a Forward Error Correction (FEC) encoder/decoder module) (not shown), a set of bandwidth-variable transponders (BVTs) (not shown), and a reconfigurable optical add-drop multiplexer (ROADM) (not shown).

[0074] OTN is a deterministic transport technology that eliminates queuing delay due to congestion at the SNode. A FlexE based SNode can be scheduled with static resource allocation, thereby avoiding any congestion altogether. As such, latency incurred by an OTN/FlexE SNode can be ignored for the purpose of simplicity. However, in certain embodiments, latency incurred by an OTN/FlexE SNode may be considered in the overall latency computation.

[0075] The latency incurred on the set of BVTs (not shown) may vary, depending on their design and supported functionalities. The set of BVTs may be associated with a source SNode (e.g. SNode 202-1) and a destination SNode (e.g. SNode 202-3). More complex BVTs (not shown) include functionality such as in-band management and can have latencies in the range of 5-10 μs. However, certain simple and low-cost BVTs (not shown) without features such as in-band management may result in a BVT processing time Ltxp as low as 30 ns.

[0076] In certain embodiments, a pair of FEC encoder/decoder modules (not shown) may be used within the BVTs for detecting and correcting transmission errors due to noise and other impairments that may be present in high-capacity transmissions. A stronger FEC may increase a system margin for a given Bit Error Rate (BER) and optical signal power, thereby increasing the signal-to-noise ratio (SNR) and enabling longer optical reaches. The pair of FEC encoder/decoder modules (not shown) in the SNode (e.g. SNode 202-1) may introduce a processing delay of ≈10 μs each for standard FECs. However, in certain embodiments the processing delay for super FECs with increased correction ability can go up to 150 μs.

[0077] Given that the set of BVTs (not shown) and the pair of FEC encoder/decoder modules (not shown) may be involved in a transmission, the overall delay introduced at the terminal SNode (e.g. SNode 202-1) of the lightpath p, irrespective of the type of BVTs and FECs, may be represented as Ln.

[0078] It is to be noted that the lightpath p goes through all these components only at the source and destination SNodes (e.g. SNodes 202-1 and 202-3 respectively). At intermediate SNodes (e.g. SNode 202-5), also referred to as bypass nodes, the lightpath p only passes through ROADMs, bypassing OTN/FlexE elements, BVTs and FEC encoder/decoder modules. Therefore, for such bypass nodes, the transmission lightpath p may not encounter latencies due to OTN/FlexE elements, BVTs and FEC encoder/decoder modules of bypass nodes.

[0079] With this said, the lightpath p can span several optical links through several bypass SNodes (e.g. SNode 202-N). Depending on the path's length and the network topology, the lightpath p may contain amplifiers, ROADMs and bandwidth-variable optical cross-connects (BV-OXCs) placed along the lightpath p, each contributing towards latency. Other components contributing to latency in the lightpath p may include one or both of re-generators and dispersion compensating fibers (DCF). However, for the sake of simplicity, contributions by re-generators and DCFs may be ignored.

[0080] The major contributor to the overall latency on the lightpath p may be the propagation delay, L_prop, within an optical link. This can be generalized to a delay of ≈4.9 μs per kilometer of fiber. Further, amplifiers on the lightpath p may be required to boost the signal strength on long transmission lines. Erbium doped fiber amplifiers (EDFAs) are widely used in long-haul networks, and each EDFA may introduce a delay, L_amp, of ≈150 ns. The number of amplifiers on the lightpath p may be represented as n_amp(p), which may be approximately len(p)/l_span, where len(p) is the physical length of p and l_span is the typical distance between two amplifiers, called the fiber span (on the order of 80 km).

[0081] Another component present along a transparent lightpath is the ROADM, installed at a bypass SNode (e.g. SNode 202-5). ROADMs may also include bandwidth-variable optical cross-connects (BV-OXCs) that can switch the necessary amount of spectrum at a bypass SNode (e.g. SNode 202-5). Because optical flows can be independently switched by these intermediate devices for different lightpaths, the latency introduced by ROADMs and BV-OXCs, L_roadm, may be on the order of tens of ns. The total number of ROADMs (or BV-OXCs) on the lightpath p is (|p| + 1), where |p| is the number of SLinks on the lightpath p.

[0082] Based on the above discussion, the latency incurred on the end-to-end lightpath p may be expressed as follows:

L_p = L_n + L_prop · len(p) + L_amp · n_amp(p) + L_roadm · (|p| + 1)
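
For illustration, the latency model above can be sketched in a few lines of Python. The constants, function names, and the specific per-component delay values below are assumptions drawn from the ranges discussed in this disclosure; the sketch is not a definitive implementation.

FIBER_DELAY_US_PER_KM = 4.9   # propagation delay L_prop, roughly 4.9 microseconds per km of fiber
AMP_DELAY_US = 0.150          # EDFA delay L_amp, roughly 150 ns
ROADM_DELAY_US = 0.050        # ROADM/BV-OXC delay L_roadm, on the order of tens of ns
FIBER_SPAN_KM = 80.0          # typical distance between in-line amplifiers (fiber span)
BVT_DELAY_US = 5.0            # BVT processing delay L_txp (30 ns to about 10 us depending on design)
FEC_DELAY_US = 10.0           # FEC encoder/decoder delay, about 10 us for a standard FEC


def terminal_delay_us() -> float:
    """Delay L_n added at the source and destination SNodes (BVT and FEC at each end)."""
    return 2.0 * (BVT_DELAY_US + FEC_DELAY_US)


def lightpath_latency_us(length_km: float, num_slinks: int) -> float:
    """End-to-end latency L_p of a transparent lightpath p of the given physical length
    that traverses num_slinks SLinks (and hence num_slinks + 1 ROADMs/BV-OXCs)."""
    n_amp = length_km / FIBER_SPAN_KM          # approximate number of in-line amplifiers
    return (terminal_delay_us()
            + FIBER_DELAY_US_PER_KM * length_km
            + AMP_DELAY_US * n_amp
            + ROADM_DELAY_US * (num_slinks + 1))

Under these assumed values, a 400 km lightpath spanning three SLinks yields roughly 2 ms, dominated by the propagation term.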

[0083] In certain embodiments, VLink embedding may be performed by splitting the demand associated with a single logical channel over multiple SPaths. This results in a VLink's traffic being carried by a set of lightpaths. Each lightpath in the set of lightpaths supporting the VLink may have its own set of physical parameters (e.g. a given lightpath may differ from another lightpath associated with the VLink in terms of physical length and number of bypass SNodes). These differences in the lightpaths associated with a VLink can result in different latencies for these lightpaths. Hence, the latency of the VLink may be determined by the lightpath p with the maximum L_p, as the destination SNode (e.g. SNode 202-3) has to wait for the slowest split before merging traffic from each of the splits of the VLink. The set of lightpaths used for embedding the splits of a VLink e may be represented as P(e), and the corresponding latency perceived for the VLink may be represented as follows:

L_e = max{ L_p : p ∈ P(e) }        (3)

[0084] In certain embodiments, VLink embedding by splitting may impose additional buffering overhead at the destination SNode (e.g. SNode 202-3) to offset the different delays experienced by different splits of the VLink transmitted on different lightpaths. This is also known as the differential delay problem. To account for the differential delay, the destination SNode (e.g. SNode 202-3) may store all but the most delayed flow in a buffer. When the lightpath with the greatest latency delivers its packets, the previously delivered packets can be released from the buffer. The amount of buffering required to compensate for the differential delay may depend upon both the data-rates of the individual SPaths and the maximum allowed differential delay (DD_max). The differential delay requirement for a VLink e may be expressed in terms of the maximum and the minimum delay of its embedding SPath set as follows:

max{ L_p : p ∈ P(e) } − min{ L_p : p ∈ P(e) } ≤ DD_max        (4)
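
Relations (3) and (4) above translate directly into a short check. This is a minimal sketch assuming the latency of each split's lightpath has already been computed (e.g. with a function such as lightpath_latency_us above); the function names are illustrative.

def vlink_latency_us(split_latencies_us: list[float]) -> float:
    """A VLink's perceived latency is set by its slowest split, since the destination
    SNode must wait for the slowest lightpath before merging the traffic."""
    return max(split_latencies_us)


def differential_delay_ok(split_latencies_us: list[float], dd_max_us: float) -> bool:
    """Differential delay requirement: the spread between the slowest and fastest split
    must not exceed DD_max."""
    return (max(split_latencies_us) - min(split_latencies_us)) <= dd_max_us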

[0085] FIG. 6 illustrates a high-level functional block diagram of a latency-aware virtual network embedding system 500, configured to compute an SPath with an assured end-to-end latency as well as the minimum number of frequency slots required to embed all VLinks of a virtual network (e.g. virtual network 300), in accordance with various aspects of the present disclosure. Such a system may be designed to compute optimal SPaths, but optimality should be understood as a preference rather than a requirement.

[0086] In certain embodiments, the latency-aware virtual network embedding system 500 may compute the end-to-end latency associated with SPaths connecting a source SNode (e.g. SNode 202-1) and a destination SNode (e.g. SNode 202-3), wherein the SPaths may contain SLinks (e.g. SLinks 204-1, 204-2, 204-4, 204-7) and SNodes (e.g. SNodes 202-1, 202-2, 202-3, 202-9).

[0087] In certain embodiments, the latency-aware virtual network embedding system 500 may embed a VLink (e.g. VLink 304-1) connecting a source VNode (e.g. VNode 302-1) and a destination VNode onto one of the SPaths connecting the source SNode (e.g. SNode 202-1) and the destination SNode (e.g. SNode 202-3), wherein the end-to-end latency associated with that SPath is less than or equal to a maximum allowable end-to-end latency L for the VLink (e.g. VLink 304-1).

[0088] As shown, the latency-aware virtual network embedding system 500 employs a pre-computation module 502, a constraint module 504, a solution selection module 506, an integer linear program (ILP) solution module 508, and a heuristic solution module 510. It is to be noted that other elements may be present, but are not illustrated for the purpose of tractability and simplicity.

[0089] In certain embodiments, the system 400 may provide the latency-aware virtual network embedding system 500 with various information associated with the undirected graph G = (V, E) (e.g. SN 200), the reach table R, and the undirected graph representing the virtual network (e.g. virtual network 300) with its location constraint set and latency constraint set. For each VLink, the pre-computation module 502 may compute a set of k paths between each pair of SNodes in the VLink's endpoints' location constraint sets. For each SPath, the pre-computation module 502 may compute a set of admissible transmission configurations, such that each configuration results in a reach that covers the length of the SPath and may have a data-rate t(d). As such, each one of the VNodes may be mapped to exactly one SNode from its location constraint set, denoted by the following decision variable: x_{u,v} ∈ {0, 1}, which equals 1 if VNode u is mapped to SNode v in its location constraint set C(u), and 0 otherwise.
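
As a hedged sketch of this pre-computation step, the reach table R can be modelled as a list of transmission configurations, each with an optical reach and a data-rate, and the admissible configurations for an SPath are those whose reach covers the path's physical length. The data structure and names below are assumptions for illustration only.

from dataclasses import dataclass


@dataclass(frozen=True)
class TransmissionConfig:
    name: str
    reach_km: float        # maximum optical reach of this configuration
    data_rate_gbps: float  # data-rate delivered by this configuration


def admissible_configs(path_length_km: float, reach_table: list[TransmissionConfig]):
    """Transmission configurations from the reach table that can be applied on an SPath
    of the given physical length (i.e., whose reach covers the path)."""
    return [t for t in reach_table if t.reach_km >= path_length_km]

The k candidate SPaths between each pair of location-constrained SNodes could be obtained with any k-shortest-path routine; that step is omitted here.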

[0090] Further, the bandwidth demand associated with the VLink may be satisfied by frequency slots provisioned over multiple SPaths (up to a maximum of q). Frequency slot provisioning over multiple SPaths may be performed using a combination of the following two cases:
i. allocating contiguous sets of frequency slots on distinct SPaths;
ii. allocating different contiguous sets of frequency slots on the same SPath (with the same or different transmission configuration(s)).

[0091] The latter represents a scenario in which there are sufficient frequency slots available over an SPath, but its length does not allow provisioning the required data-rate using a single contiguous set of frequency slots. The pre-computation module 502 may consider that each transmission configuration for the SPath can be instantiated multiple times (up to a maximum of q times). The pre-computation module 502 considers the following variable to represent the VLink mapping:

[0092] The pre-computation module 502 also considers the following variable to represent the relationship between a mapped SPath and the frequency slots on its SLinks:

[0093] The constraint module 504 ensures that various constraints have been considered appropriately while mapping the virtual network (e.g. virtual network 300) to the substrate optical network (e.g. SN 200). Such constraints include VNode mapping constraints, latency constraints, differential delay constraints, VLink demand constraints, frequency slot assignment constraints, and coordination between node and link mapping.

[0094] As such, using equations (5) and (6), the constraint module 504 ensures that each VNode is mapped to an SNode while satisfying the location constraint. Using equation (7), the constraint module 504 ensures that an SNode hosts at most one VNode from the same virtual network.
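
As an illustration only (the disclosure's equations (5)-(7) are not reproduced here), node-mapping constraints of the kind just described can be written with an off-the-shelf modeller such as the open-source PuLP package; the variable names and the tiny example data are assumptions.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

vnodes = ["u1", "u2"]
location_constraints = {"u1": ["v1", "v2"], "u2": ["v2", "v3"]}   # candidate SNodes per VNode
snodes = ["v1", "v2", "v3"]

prob = LpProblem("node_mapping", LpMinimize)
x = {(u, v): LpVariable(f"x_{u}_{v}", cat=LpBinary)
     for u in vnodes for v in location_constraints[u]}

# Each VNode is mapped to exactly one SNode from its location constraint set.
for u in vnodes:
    prob += lpSum(x[(u, v)] for v in location_constraints[u]) == 1

# An SNode hosts at most one VNode of the same virtual network.
for v in snodes:
    prob += lpSum(x[(u, v)] for u in vnodes if (u, v) in x) <= 1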

[0095] In order to satisfy the latency constraint set, the constraint module 504 converts equation (3) into a linear constraint to find the latency of each VLink from its set of embedding paths, as follows:

[0096] The following constraint then uses equation (8) to ensure that each latency constraint is satisfied:

[0097] As previously discussed, VLink embedding must satisfy the differential delay requirement in equation (4). To this end, the constraint module 504 uses equations (8) and (10) to find the maximum and minimum delay of each VLink, respectively. The value of l in equation (10) may be chosen sufficiently larger than the maximum value of L_p to avoid obtaining zero as the minimum latency of a VLink induced by a non-selected path, tuple, and instance combination.
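
For illustration only (equation (10) itself is not reproduced here), constraints of the kind described in this and the following paragraph are commonly written with a large constant l and a binary selection variable x for each path, tuple, and instance combination, for example:

(minimum VLink latency) ≤ L_p · x + l · (1 − x)

so that a non-selected combination (x = 0) contributes only the non-binding bound l, while a selected combination (x = 1) contributes the bound L_p.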

[0098] For a non-selected path, tuple, and instance combination, the second term on the R.H.S. of equation (10) becomes dominant, generating a constraint that is not binding. For a selected path, tuple, and instance combination, the second term becomes zero and the first term generates the binding constraint. As l is larger than any L_p, the constraints from the selected combinations dominate, so that the minimum delay of a VLink is obtained as the minimum delay over all the selected path, tuple, and instance combinations. Finally, the constraint module 504 uses equations (8) and (10) to enforce the differential delay requirement for each VLink as follows:

[0099] The constraint module 504 uses the constraint in equation (12) to ensure that, for each VLink, the sum of the data-rates resulting from applying the selected transmission configurations on the selected paths is equal to the VLink's demand. The constraint module 504 can use equation (13) to enforce an upper limit on the number of splits.

[0100] Regarding the frequency slot assignment and spectral contiguity constraints, the constraint module 504, using equation (14), ensures that if a path p is selected with a specific transmission configuration t, then the required number of frequency slots n_t to support the data-rate t(d) is allocated on the path p. Using equation (15), the constraint module 504 ensures that each frequency slot on an SLink is allocated to at most one substrate path. Finally, the constraint module 504 uses equation (16) to ensure that the frequency slots allocated on each SLink of an SPath form a contiguous frequency spectrum.

[0101] In order to coordinate between node and link mapping, the constraint module 504, using equation (17), ensures that the two endpoints of a VLink are mapped to the source and destination SNodes of an SPath p only when p is used for embedding that VLink.

[0102] Based on inputs from the constraint module 504, in certain embodiments, the solution selection module 506 may provide an objective function, as represented by equation (18), either to the ILP solution module 508 or to the heuristic solution module 510. In certain embodiments, the selection criteria followed by the solution selection module 506 may be based on various factors, such as the size of the virtual network to be provisioned, the size of the physical/substrate elastic optical network, and the time required to find a solution to embed the virtual network over the physical/substrate elastic optical network.

[0103] In certain embodiments, if the ILP solution module 508 is selected to compute an optimal SPath with an assured end-to-end latency as well as the minimum number of frequency slots required to embed all VLinks of a virtual network (e.g. virtual network 300), the solution selection module 506 may provide the objective function represented by equation (18) and all other constraints discussed from equation (5) to equation (17) to the ILP solution module 508. The ILP solution module 508 may perform a search over the space of possible solutions associated with equation (18). In so doing, the ILP solution module 508 may find a solution that minimizes the total number of frequency slots required to embed all the VLinks of a VN, as shown in the first part of equation (18). If there are multiple possible solutions, in order to break ties among solutions with the same total number of slots, the solution selection module 506 may use a second term in equation (18) that minimizes the total length of the paths being used and a third term in equation (18) that minimizes the number of splits over all the VLinks.

[0104] Those skilled in the art will appreciate that the ILP solution module 508 may use any number of different techniques to perform its search of the solution space. In some embodiments, an exhaustive search may be employed, while in others, possible solutions may be evaluated to find one that satisfies the latency and available-resource constraints even if not optimal.

[0105] The weight ε1 in equation (18) may be chosen in such a way that the second term is effective only when multiple optimal solutions have the same value for the first term. The weight ε2 in equation (18) may be chosen in such a way that the third term is effective only when multiple optimal solutions have the same value when both the first and second terms in equation (18) have been used.

[0106] Even though the above-discussed functionality of the ILP solution module 508 may provide an optimal embedding of the virtual network satisfying the end-to-end latency constraint, the real-time computational complexity and the time required to embed such a virtual network may be prohibitively high. To address this, a more practical heuristic approach may be used to provide a reasonable solution in a reasonable amount of time. As an example, the heuristic approach described below may provide a sub-optimal embedding that has an associated cost that may be up to, for example, 10%-15% more than the optimal embedding solution obtained through an exhaustive search approach. However, the heuristic approach may arrive at the non-optimal solution in near real time. For example, for a large network with around 100 substrate nodes or more, the ILP solution may take up to several hours to compute an optimal embedding, while the heuristic based approach may find an acceptable solution in less than 1 second. The ability to determine near-optimal solutions in near real-time allows for greater flexibility in provisioning networks. For example, virtual networks can be provisioned, tested, changes made to the virtual network, re-provisioned and tested again, all without requiring long wait times to determine possible embeddings for the virtual networks.

[0107] With this said, FIG. 7 illustrates a high-level functional block diagram of the heuristic solution module 510, configured to compute a node embedding function, a link embedding function, and the total number of frequency slots required to provision a virtual network (e.g. virtual network 300), in accordance with various aspects of the present disclosure. As shown, the heuristic solution module 510 employs a node embedding module 512, a link embedding module 514, and a frequency slot computation module 516. It is to be noted that other elements may be present, but are not illustrated for the purpose of tractability and simplicity.

[0108] In certain embodiments, if the heuristic solution module 510 is selected to compute an optimal SPath with an assured end-to-end latency as well as the minimum number of frequency slots required to embed all VLinks of a virtual network (e.g. virtual network 300), the solution selection module 506 may provide the objective function represented by equation (18) and all other constraints discussed from equation (5) to equation (17) to the heuristic solution module 510. The node embedding module 512 may compute a node embedding function such that, based on the node embedding function, the node embedding module 512 may embed each VNode onto an SNode in accordance with its location constraints.

[0109] Further, the link embedding module 514 may compute a link embedding function. In so doing, the link embedding module 514 computes the link embedding function for up to a maximum of q splits of the demand bandwidth of each VLink, selects an SPath and an appropriate transmission configuration from the reach table R for each split, and may allocate a contiguous segment of frequency slots, represented by the starting and ending frequency slot indices, on each SLink along the SPath. For the i-th split, the link embedding function records the selected SPath and transmission configuration, and the allocation of spectrum frequency slots for the i-th split begins at the starting index and ends at the ending index along each SLink of the selected SPath. The link embedding function may assist in embedding a VLink in such a manner that the end-to-end latency constraint on the virtual paths is satisfied.
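
A hedged sketch of such a node embedding function is shown below: each VNode is assigned an SNode drawn from its location constraint set, with no SNode hosting two VNodes of the same virtual network. The greedy most-constrained-first ordering is an assumption for illustration; the disclosure does not prescribe a specific node-selection rule.

def embed_nodes(vnodes, location_constraints):
    """Map each VNode to an SNode from its location constraint set, or return None
    when no feasible assignment is found by this greedy pass."""
    used_snodes = set()
    mapping = {}
    # Place the most constrained VNodes (fewest candidate SNodes) first.
    for u in sorted(vnodes, key=lambda n: len(location_constraints[n])):
        candidates = [v for v in location_constraints[u] if v not in used_snodes]
        if not candidates:
            return None
        mapping[u] = candidates[0]
        used_snodes.add(candidates[0])
    return mapping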

[0110] To this end, the link embedding module 514 may consider a sequential VLink embedding order that is dynamically computed to converge to a solution within a reasonable time. Further, the link embedding module 514 may estimate an upper bound of the end-to-end latency budget of a VLink based on the end-to-end latencies and the frequency slot availability in the SPaths of the other unmapped VLinks. In certain embodiments, the SPaths may be sorted in increasing order of their length, and the upper bound of the end-to-end latency budget may be set to the end-to-end latency of the i-th SPath in this order, the index being chosen in such a manner that the value satisfies the latency constraint set associated with the VLinks. Therefore, based on the estimated end-to-end path latency budget, the link embedding module 514 may generate an allowed set of candidate SPaths for embedding the VLink.

[0111] In order to increase the chances of finding a feasible solution, during each iteration the link embedding module 514 may embed the most constrained VLink e, both in terms of end-to-end path latency and in terms of the availability of frequency slots in the allowed set of candidate SPaths. Further, the link embedding module 514 may find the most constrained VLink from the set of unmapped VLinks and the allowed set of candidate SPaths for embedding it.

[0112] The link embedding module 514 may estimate the end-to-end latency budgets of the unmapped VLinks in such a manner that VLinks embedded later in the order do not fail due to very tight budgets or due to an insufficient number of contiguous frequency slots in their candidate SPaths.

[0113] In certain embodiments, the link embedding module 514 takes the frequency slot availability in the allowed set of candidate SPaths into account while estimating the end-to-end latency budgets. As such, the link embedding module 514 may compute an index d_g for each VLink e, where the index d_g denotes the maximum index of the SPath in the set of allowed candidate SPaths corresponding to the end-to-end latency budget. While computing this index d_g, the link embedding module 514 may employ spectrum awareness by maximizing the minimum number of usable frequency slots in the allowed set of candidate SPaths for each of the remaining VLinks while satisfying the end-to-end latency constraints. In so doing, the link embedding module 514 may perform a binary search over the interval between one and the minimum value of the number of free frequency slots in the allowed set of candidate SPaths of the remaining VLinks. As such, in certain embodiments, the frequency slots may be allocated in increasing order of the index associated with the frequency slots.

[0114] In each iteration of the binary search, the link embedding module 514 may select the median value med of the current range and increase the index d_g until the total number of free frequency slots in the allowed set of candidate SPaths becomes equal to or greater than med. The link embedding module 514 then uses the index to compute the estimated end-to-end latency. After estimating the end-to-end latencies for all the unmapped VLinks, the link embedding module 514 checks whether any latency constraint in the latency constraint set is violated. Based on this test, the binary search continues in the appropriate range and terminates when the range size becomes one.
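
The binary search described in the preceding paragraphs can be sketched as follows. The predicate latency_ok stands in for the module's latency-budget check (it receives, for each unmapped VLink, the index of the last candidate SPath needed to reach the trial slot count), and all names are assumptions for illustration.

def smallest_prefix_meeting(spaths, free_slots, med):
    """Index of the last SPath in the length-sorted candidate list needed before the
    cumulative number of free frequency slots reaches med."""
    total = 0
    for i, p in enumerate(spaths):
        total += free_slots[p]
        if total >= med:
            return i
    return len(spaths) - 1


def max_usable_slots(unmapped_vlinks, candidate_spaths, free_slots, latency_ok):
    """Largest slot count med such that, when every unmapped VLink expands its allowed
    candidate SPaths just far enough to reach med free slots, all end-to-end latency
    constraints can still be satisfied (as judged by the caller-supplied latency_ok)."""
    lo = 1
    hi = min(sum(free_slots[p] for p in candidate_spaths[e]) for e in unmapped_vlinks)
    best = 0
    while lo <= hi:
        med = (lo + hi) // 2
        indices = {e: smallest_prefix_meeting(candidate_spaths[e], free_slots, med)
                   for e in unmapped_vlinks}
        if latency_ok(indices):
            best, lo = med, med + 1   # feasible: try a larger usable-slot count
        else:
            hi = med - 1              # infeasible: try a smaller count
    return best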

[0115] In this manner, the binary search in the link embedding module 514 provides a lower bound on the minimum number of usable frequency slots across all the VLinks. In order to find the most constrained VLink in terms of the number of usable frequency slots, the link embedding module 514 traverses the unmapped VLinks in decreasing order of their associated bandwidth demands. Since multiple VLinks can have the minimum number of usable frequency slots, the associated bandwidth demand may be used as a tie-breaker. For each VLink e in the enumerated order, the link embedding module 514 increases the index d_g computed earlier by the binary search and checks whether all the end-to-end latency constraints can still be satisfied using an expanded set of allowed SPaths. According to the property of the binary search, there exists at least one VLink for which the index cannot be increased beyond a certain value without violating at least one end-to-end latency constraint.

[0116] Considering such a VLink as the most constrained VLink, the link embedding module 514 may compute an optimal solution for embedding the VLink e, given the mapping of VNodes to SNodes and the allowed set of candidate SPaths. The optimal solution for the VLink consists of a multi-set of SPaths, where each SPath in the multi-set has an associated transmission configuration and frequency slot allocation. For each instance of the multi-set of SPaths, the link embedding module 514 checks the feasibility of all possible transmission configurations and provides the multi-set of SPaths to the frequency slot computation module 516.

[0117] The frequency slot computation module 516 may compute the total number of frequency slots required to provision the virtual network (e.g. virtual network 300), selecting the solution of minimum value according to a cost function that accounts for the total number of SLinks on each selected SPath.
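
Assuming a cost of this form (total allocated frequency slots weighted by the number of SLinks each split traverses, an interpretation of the cost function referenced above rather than its exact formulation), the computation reduces to a one-liner:

def embedding_cost(splits):
    """splits: iterable of (num_slots, num_slinks) pairs, one per selected SPath and
    transmission configuration of the VLink's embedding."""
    return sum(num_slots * num_slinks for num_slots, num_slinks in splits)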

[0118] The functionality of the heuristic solution module 510 may be subject to substrate resource constraints, spectral contiguity constraints (i.e., the allocated frequency slots of each split are always adjacent to each other), and continuity constraints (i.e., the same sequence of frequency slots is allocated on each SLink along an SPath) on the lightpaths.

[0119] Finally, for each pair of a multi-set of SPaths and a corresponding transmission configuration, the frequency slot computation module 516 performs frequency slot allocation over the various SPaths in accordance with the most constrained VLink. In certain embodiments, the frequency slot allocation may be performed by techniques known to a person skilled in the art, such as the First-Fit approach.
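
A minimal First-Fit sketch consistent with the approach mentioned above is shown below: the spectrum is scanned from the lowest slot index, and the first window of the required size that is free on every SLink of the SPath is taken, which respects both the contiguity and the continuity constraints. The boolean occupancy representation and names are assumptions for illustration.

def first_fit(slink_occupancy, spath, demand_slots, total_slots):
    """Return the starting slot index of the first contiguous window of demand_slots
    slots that is free on every SLink of the SPath, or None if no such window exists.
    slink_occupancy[l][s] is True when slot s on SLink l is already allocated."""
    for start in range(total_slots - demand_slots + 1):
        window = range(start, start + demand_slots)
        if all(not slink_occupancy[l][s] for l in spath for s in window):
            return start
    return None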

[0120] In certain embodiments, the heuristic solution module 510 may also receive a priority objective as input, where the priority objective further defines the manner in which the heuristic solution module 510 provides the optimal solution. The priority objective may be frequency slot minimization, load balancing, or blocking minimization. In any case, the optimal solution primarily satisfies the end-to-end latency constraints, and the above objectives are secondary.

[0121] With this said, among all the feasible solutions consisting of a multi-set of SPaths, an associated transmission configuration, and a spectrum slice allocation: if the priority objective is frequency slot minimization, the heuristic solution module 510 creates an embedding plan for embedding the virtual network (e.g. virtual network 300) that requires the minimum number of frequency slots while the assured end-to-end latency is still maintained. If the priority objective is load balancing, the heuristic solution module 510 embeds the virtual network (e.g. virtual network 300) in a manner that distributes the load, at an increased cost of embedding the VLink e, while still maintaining the assured end-to-end latency. If the priority objective is blocking minimization, the heuristic solution module 510 embeds the virtual network (e.g. virtual network 300) in a manner that, again at an additional cost of embedding the VLink e, avoids blocking of a virtual network (e.g. virtual network 300) being embedded onto a substrate optical network (e.g. SN 200) while still maintaining the assured end-to-end latency.

[0122] It will be appreciated that, although the functionality of various systems, such as systems 400 and 500, has been discussed individually, these systems may be implemented in conjunction with each other on the system 100 by any one or more of hardware-based, software-based, and firmware-based elements.

[0123] FIG. 8 illustrates a representative example of the SN 600 embedded with the virtual network 300, in accordance with various aspects of the present disclosure. As shown, the VNode 302-1 is embedded onto the SNode 202-1 and the VNode 302 is embedded onto the SNode 202-3. Further, based on the functionality of the latency-aware virtual network embedding system 500 discussed above, the optimal SPath, represented as 602, contains SLinks 204-4 and 204-7.

[0124] FIGs. 9, 10, and 11 illustrate representative examples of functional flow diagrams 700, 720, and 730 of a heuristic process directed to a method for virtual network embedding implemented in the latency-aware virtual network embedding system 500, in accordance with various embodiments of the present disclosure. As shown in FIG. 9, the process 700 commences at task block 702. At task block 702, the process 700 receives information associated with the virtual network topology and the substrate network topology.

[0125] The process 700 then advances to task block 704, where the process 700 verifies whether there are any VLinks to embed. If there are VLinks to embed, the process 700 proceeds to task block 706; otherwise, the process 700 moves to task block 718. At task block 706, the process 700 passes control to the process 720 in order to obtain the next most constrained VLink e from the set of unmapped virtual links. As shown in FIG. 10, the process 720 begins at task block 722. At task block 722, the process 720 receives the set of unmapped virtual links and the latency constraint set associated with the set of unmapped virtual links.

[0126] The process 720 proceeds to task block 724, where the process 720 computes an estimated latency associated with the set of unmapped virtual links. The latency estimation is performed using the longest SPath among the k shortest SPaths between each pair of SNodes in the VLink's endpoints' location constraint set C(u), and generates the allowed set of candidate SPaths. The process 720 moves to task block 726, where the process 720 performs a binary search to find the maximum number of available frequency slots that each VLink e in the set of unmapped VLinks can use in the allowed set of candidate SPaths without violating any latency in the latency constraint set.

[0127] Finally, at task block 728, the process 720 returns the most constrained VLink e that maximizes the number of available frequency slots to be used to the process 700 at task block 706. Returning to FIG. 9, the process 700 advances to task block 708, where the process 700 removes the most constrained VLink from the set of unmapped virtual links, and the process 700 proceeds to task block 710.

[0128] At task block 710, the process 700 passes control to the process 730 in order to compute the embedding of the selected most constrained VLink using the reach table R. As shown in FIG. 11, the process 730 begins at task block 732. At task block 732, the process 730 receives the set of k shortest SPaths connecting the endpoints of the most constrained VLink e and a set of data-rates that can be used for embedding the most constrained VLink e.

[0129] The process 730 advances to task block 734, where the process 730 generates the set of all multi-sets of SPaths of size up to a maximum of q, using the set of k shortest SPaths as the base set. The process 730 then advances to task block 736. At task block 736, the process 730 verifies whether there is any unexplored multi-set of SPaths. If all the multi-sets of SPaths have been explored, the process 730 moves to task block 750. However, if there are multi-sets of SPaths still to be explored, the process 730 advances to task block 738.

[0130] At task block 738, the process 730 verifies whether the SPaths in the selected unexplored multi-set of SPaths satisfy the differential delay constraint. If the SPaths in the selected unexplored multi-set of SPaths satisfy the differential delay constraint, the process 730 advances to task block 740; otherwise, the process 730 returns to task block 736.

[0131] At task block 740, the process 730 generates all multi-sets of data-rates whose cardinality equals the number of SPaths in the selected multi-set of SPaths, using the set of data-rates as the base set. The multi-sets of data-rates are generated in such a manner that the sum of the data-rates equals the demand associated with the most constrained VLink e. The process 730 then advances to task block 742.
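
A hedged sketch of this generation step: enumerate multi-sets of data-rates, one per SPath of the selected multi-set, whose total equals the VLink's demand. The itertools-based enumeration below is an illustrative implementation choice, not the disclosure's algorithm.

from itertools import combinations_with_replacement


def rate_multisets(data_rates, num_spaths, demand):
    """All multi-sets of data-rates of cardinality num_spaths whose sum equals demand."""
    for combo in combinations_with_replacement(sorted(set(data_rates)), num_spaths):
        if sum(combo) == demand:
            yield combo

For example, with data_rates = [50, 100, 200], num_spaths = 2 and demand = 250, the only multi-set produced is (50, 200).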

[0132] At task block 742, the process 730 verifies whether there is any unexplored multi-set of data-rates. If all the multi-sets of data-rates have been explored, the process 730 returns to task block 736. However, if there are multi-sets of data-rates still to be explored, the process 730 advances to task block 744 and provides a selected multi-set of data-rates.

[0133] At task block 744, the process 730 generates all the permutations of the selected multi-set of data-rates and moves to task block 746. At task block 746, the process 730 verifies whether there is any unexplored permutation of the selected multi-set of data-rates. If all the permutations of the selected multi-set of data-rates have been explored, the process 730 returns to task block 738. However, if there are permutations of the selected multi-set of data-rates still to be explored, the process 730 advances to task block 748 and provides the selected SPaths and data-rates of the multi-set.

[0134] At task block 748, the process 730 computes a tuple t = (d, b, m, f) and assigns frequency slots using dynamic programming with the selected SPaths and data-rates of the multi-set. Finally, at task block 750, the process 730 returns the SPaths, data-rates, and frequency slots of the multi-set that maximizes the spectrum usage to the process 700 at task block 710. Returning to FIG. 9, the process 700 advances to task block 712, where the process 700 verifies whether the embedding was successful or failed. If the embedding was successful, the process 700 proceeds to task block 714. If the embedding failed, the process advances to task block 716.

[0135] At task block 714, the process 700 updates the resources associated with the physical/substrate elastic optical network in accordance with the computed embedding, the computed virtual link embedding is added to the solution, and the process 700 returns to task block 704.

[0136] At task block 716, the process 700 returns an indication that the heuristic process could not find a feasible embedding. Finally, at task block 718, the process 700 returns the solution.

[0137] FIGs. 12-14 illustrate various representative scenarios comparing performance of a fixed grid network and an elastic optical network implementing the same virtual network embedding, in accordance with various embodiments of the present disclosure. As shown, FIG. 12 depicts the acceptance ratio (%) versus the latency budget (%) for a fixed grid network and an elastic optical network (flexi grid network), corresponding to different differential delays. The acceptance ratio is the ratio of the number of virtual network requests accepted to the total number of virtual network requests. As depicted, the maximum acceptance ratio for the fixed grid network is approximately 80%, whereas the maximum acceptance ratio for the elastic optical network increases to approximately 95%.

[0138] As shown, FIG. 13 depicts the bandwidth acceptance ratio (%) versus the latency budget (%) for a fixed grid network and an elastic optical network (flexi grid network), corresponding to different differential delays. The bandwidth acceptance ratio is the ratio of the bandwidth of the accepted virtual network requests to the total bandwidth requested for the virtual networks. As depicted, the maximum bandwidth acceptance ratio for the fixed grid network is approximately 78%, whereas the maximum bandwidth acceptance ratio for the elastic optical network is approximately 95%.

[0139] FIG. 14 illustrates the cumulative distribution function (CDF) of cost ratios for the fixed grid network and the elastic optical network (flexi grid network). The cost ratio compares the costs obtained by two different approaches (the ILP formulation and the heuristic approach) for solving the same problem instance, and thus measures the relative performance of the two approaches.

[0140] FIG. 15 depicts a functional flow diagram of a process 800 directed to a method implemented in the system 100, in accordance with various embodiments of the present disclosure. The process 800 commences at task block 802, where the system 100 receives a virtual network topology and virtual network constraints associated with the virtual network. As noted above, the system 400 may receive a request for a virtual network (e.g. virtual network 300), where the request may contain a virtual network topology and virtual network constraints associated with the virtual network. The virtual network topology may comprise VNodes (e.g. VNodes 302-1, 302-2, and 302-3), VLinks (e.g. VLinks 304-1 and 304-2), and a demand bandwidth associated with each of the VLinks.

The process 800 advances to task block 804, where the system 100 performs embedding the plurality of virtual nodes onto the plurality of substrate nodes in accordance with the plurality of location constraints. As noted above, the latency-aware virtual network embedding system 500 may compute a node embedding function such that, based on the node embedding function, the node embedding module 512 may embed each VNode onto an SNode in accordance with its location constraints.

[0141] The process 800 advances to task block 806, where the system 100 computes end-to-end latency associated with a plurality of substrate paths connecting a source substrate node and a destination substrate node, wherein the plurality of substrate paths contain the plurality of substrate links and the plurality of substrate nodes. As discussed above, the latency-aware virtual network embedding system 500 may compute the end-to-end latency associated with SPaths connecting a source SNode (e.g. SNode 202-1) and a destination SNode (e.g. SNode 202-3), wherein the SPaths may contain SLinks (e.g. SLinks 204-1, 204-2, 204-4, 204-7) and SNodes (e.g. SNodes 202-1, 202-2, 202-3, 202-9).

[0142] Finally, at task block 808, the system 100 performs embedding a virtual link connecting a source virtual node and a destination virtual node onto the one of the plurality of substrate paths connecting the source substrate node and the destination substrate node, wherein the end-to-end latency associated with the one of the plurality of substrate paths is less than or equal to a maximum allowable latency for the virtual link.

[0143] As noted above, the latency-aware virtual network embedding system 500 may embed a VLink (e.g. VLink 304-1) connecting a source VNode (e.g. VNode 302-1) and a destination VNode onto the one of the SPaths connecting the source SNode (e.g. SNode 202-1) and the destination SNode (e.g. SNode 202-3), wherein the end-to-end latency associated with the one of the SPaths is less than or equal to a maximum allowable end-to-end latency L for the VLink (e.g. VLink 304-1).

[0144] Those skilled in the art will appreciate that, where conventional systems may take link budgets into account in determining latency, these approaches have all been based on the assumption that the links for which latency budgets are calculated are real physical links and are not logical links atop a virtualized system. In the above discussion, a request for the creation of a virtual network atop a substrate network is received. The virtual network can be defined by a topology and a set of constraints. Constraints associated with virtual nodes may include location-based constraints, processing- and storage-based constraints, and other such requirements. The links between the virtual nodes may have a set of constraints, including a latency constraint. When mapping a virtual node to a substrate node, the requirement of being able to connect to the substrate node supporting a second virtual node while supporting the latency constraint has to be taken into account. Through the use of flex-grid optical networks, as well as EONs, different physical link parameters can be evaluated. As the number of nodes and links in the virtual network increases, the number of possible configurations increases. By breaking the assignment problem into a series of smaller steps, the constraints can be met in a manner that is suitable for a dynamic environment. The use of either an ILP approach or a heuristic approach can be supported so that the latency requirements can be satisfied.

[0145] It will be appreciated that the processes 700, 720, 730, and 800 may also be performed by computer programs, which may exist in a variety of forms, both active and inactive. For example, the computer programs may exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats. Any of the above may be embodied on a computer readable medium, which includes storage devices and signals, in compressed or uncompressed form. Representative non-transitory computer readable storage devices include conventional computer RAM (random access memory), ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), and magnetic or optical disks or tapes. Representative computer readable signals, whether modulated using a carrier or not, are signals that a computer hosting or running the computer program may be configured to access, including signals downloaded through the Internet or other networks. Concrete examples of the foregoing include distribution of the programs on a CD ROM or via Internet download. In a sense, the Internet itself, as an abstract entity, is a computer readable medium. The same is true of computer networks in general.

[0146] Thus, by virtue of the techniques provided by the system 100, performing virtual network embedding on an elastic optical network in accordance with an end-to-end path latency constraint allows more flexible latency allocation over its constituent virtual links, enabling more opportunities to optimize substrate paths for embedding.

[0147] It is to be understood that the operations and functionality of the system 100, constituent components, and associated processes may be achieved by any one or more of hardware-based, software-based, and firmware-based elements. Such operational alternatives do not, in any way, limit the scope of the present disclosure.

[0148] It will also be understood that, although the embodiments presented herein have been described with reference to specific features and structures, it is clear that various modifications and combinations may be made without departing from such disclosures. The specification and drawings are, accordingly, to be regarded simply as an illustration of the discussed implementations or embodiments and their principles as defined by the appended claims, and are contemplated to cover any and all modifications, variations, combinations or equivalents that fall within the scope of the present disclosure.