


Title:
DYNAMIC DEPLOYMENT OF NETWORK APPLICATIONS HAVING PERFORMANCE AND RELIABILITY GUARANTEES IN LARGE COMPUTING NETWORKS
Document Type and Number:
WIPO Patent Application WO/2020/149786
Kind Code:
A1
Abstract:
Embodiments of the invention relate to computerized systems and computerized methods configured to optimize data transmission paths in a large-scale computerized network relative to reliability and bandwidth requirements. Embodiments of the invention further relate to computerized systems and methods that direct and control the physical adjustments to data transmission paths in a large-scale network's composition of computerized data transmission nodes in order to limit data end-to-end transmission delay in a computerized network to a delay within a calculated worst-case delay bound.

Inventors:
LIU SHAOTENG (SE)
STEINERT REBECCA (SE)
KOSTIC DEJAN (SE)
Application Number:
PCT/SE2020/050045
Publication Date:
July 23, 2020
Filing Date:
January 17, 2020
Assignee:
RISE RES INSTITUTE OF SWEDEN AB (SE)
International Classes:
H04L45/02; H04L45/42; H04L47/76
Foreign References:
CN205071038U, 2016-03-02
Other References:
LIU SHAOTENG ET AL: "Flexible distributed control plane deployment", NOMS 2018 - 2018 IEEE/IFIP NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM, IEEE, 23 April 2018 (2018-04-23), pages 1 - 7, XP033371601, DOI: 10.1109/NOMS.2018.8406150
ZHANG BANG ET AL: "Optimal Controller Placement Problem in Internet-Oriented Software Defined Network", 2016 INTERNATIONAL CONFERENCE ON CYBER-ENABLED DISTRIBUTED COMPUTING AND KNOWLEDGE DISCOVERY (CYBERC), IEEE, 13 October 2016 (2016-10-13), pages 481 - 488, XP033070041, DOI: 10.1109/CYBERC.2016.98
HU TAO ET AL: "Reliable and load balance-aware multi-controller deployment in SDN", CHINA COMMUNICATIONS, CHINA INSTITUTE OF COMMUNICATIONS, PISCATAWAY, NJ, USA, vol. 15, no. 11, 30 November 2018 (2018-11-30), pages 184 - 198, XP011697888, ISSN: 1673-5447, [retrieved on 20181121], DOI: 10.1109/CC.2018.8543099
S. LIU, R. STEINERT, D. KOSTIC: "Flexible distributed control plane deployment", NOMS 2018-2018 IEEE/IFIP NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM. MINI CONFERENCE. IEEE, 2018, pages 1 - 7, XP033371601, DOI: 10.1109/NOMS.2018.8406150
F. J. ROS AND P. M. RUIZ: "On reliable controller placements in software-defined networks", COMPUTER COMMUNICATIONS, vol. 77, 2016, pages 41 - 51, XP029424907, DOI: 10.1016/j.comcom.2015.09.008
Attorney, Agent or Firm:
AWA SWEDEN AB (SE)
CLAIMS

We claim:

1. An electronic data transmission scheduling system for a computer network having a plurality of computerized data transmission nodes and a plurality of electronic aggregators, wherein each computerized data transmission node of the plurality of computerized data transmission nodes has at least one data transmission link of a plurality of data transmission links to another computerized data transmission node of the plurality of computerized data transmission nodes, and wherein each electronic aggregator of the plurality of electronic aggregators forwards data to a set of computerized data transmission nodes of the plurality of computerized data transmission nodes, a configuration of the plurality of computerized data transmission nodes and the plurality of data transmission links collectively forming a network topology, the electronic data transmission scheduling system, comprising:

an electronic reliability checker that examines possible data transmission routes between computerized data transmission nodes of the plurality of computerized data transmission nodes and data transmission links of the plurality of data transmission links to determine a network mapping reliability score for the network topology, wherein the network mapping reliability score represents a probability that an electronic aggregator of the plurality of the electronic aggregators connects to at least one computerized data transmission node of a set of computerized data transmission nodes given computerized data transmission node and data transmission link failure probabilities;

an electronic bandwidth checker configured to calculate bandwidth allocations and a flow routing plan for the network topology, wherein the electronic bandwidth checker develops calculated bandwidth allocations and the flow routing plan by testing different combinations of data transmission links from the plurality of data transmission links and different combinations of computerized data transmission nodes of the plurality of computerized data transmission nodes; and

an electronic resource planner that applies the network mapping reliability score from the electronic reliability checker, traffic characteristics and demands, and the calculated bandwidth allocations and the flow routing plan from the electronic bandwidth checker to evaluate and develop a computer-implementable deployment strategy that directs rearrangement of the network topology, by calculating and evaluating a trade-off between the network mapping reliability score, estimated bandwidth demands, the calculated bandwidth allocations and the flow routing plan that satisfies predetermined requirements for reliability and bandwidth.

2. The electronic data transmission scheduling system of claim 1, wherein the electronic reliability checker determines the network mapping reliability score for the network topology relative to a predetermined reliability requirement, wherein the electronic reliability checker further determines the network mapping reliability score by calculating a lower reliability bound.

3. The electronic data transmission scheduling system of claim 1, wherein the electronic bandwidth checker calculates bandwidth allocations and the routing plan for the network topology to ensure flow routability in line with predetermined bandwidth requirements, by evaluating routability of different combinations of inter-traffic between computerized data transmission nodes in the plurality of computerized data transmission nodes given the network topology and estimated traffic characteristics and flow demands, wherein the network topology comprises a mapping of the plurality of computerized data transmission nodes to the network topology and associations between computerized data transmission nodes and the plurality of computerized data transmission nodes.

4. The electronic data transmission scheduling system of claim 3, wherein the electronic bandwidth checker further calculates bandwidth allocations for the network topology by computing whether all flows can be routed without overloading any transmission links from the plurality of data transmission links relative to predetermined bandwidth constraints, with all possible paths.

5. The electronic data transmission scheduling system of claim 1, wherein the electronic resource planner further calculates mapping and association of network traffic given performance requirements on reliability and bandwidth for each data transmission link of the plurality of data transmission links.

6. The electronic data transmission scheduling system of claim 5, wherein the electronic resource planner further estimates traffic characteristics and flow demands in the computer network and wherein the electronic resource planner further evaluates and calculates the mapping and association of computerized transmission nodes of the plurality of computerized transmission nodes, by assessing a calculated reliability score, traffic characteristics, flow demands, routing plan and routability of the calculated bandwidth allocation, relative to predetermined reliability and bandwidth requirements.

7. The electronic data transmission scheduling system of claim 1, further comprising:

an electronic flow delay checker that determines a flow routing plan and an end-to-end delay of delivering a data packet from a sending computerized data transmission node of the plurality of computerized processing nodes to a destination computerized data transmission node of the plurality of computerized processing nodes over at least one data transmission link of a plurality of data transmission links,

wherein the electronic flow delay checker evaluates whether a flow routing plan and calculated end-to-end delays satisfy estimated traffic characteristics and flow demands, calculated bandwidth allocations and predetermined end-to-end delay requirements,

wherein the electronic resource planner is further configured to use the calculated end-to-end delay and flow routing plan along with the network mapping reliability score from the electronic reliability checker, traffic characteristics and demands, the calculated bandwidth allocation from the electronic bandwidth checker to evaluate and develop a computer-implementable deployment strategy that directs rearrangement of the network topology by calculating trade-offs among the network mapping reliability score, traffic characteristics and demands, the calculated bandwidth allocation, and calculated end-to-end delay that satisfies predetermined requirements for reliability, bandwidth and end-to-end delay.

8. The electronic data transmission scheduling system of claim 7, wherein the electronic flow delay checker determines the end-to-end delay by calculating a worst-case end-to-end delay bound in sending the data packet from the sending computerized data transmission node of the plurality of computerized processing nodes to the destination computerized data transmission node of the plurality of computerized processing nodes over the at least one data transmission link of a plurality of data transmission links.

9. The electronic data transmission scheduling system of claim 6, wherein the electronic resource planner evaluates and calculates the mapping and association of computerized transmission nodes and the plurality of computerized transmission nodes, by assessing the calculated reliability score, traffic characteristics and demands, routing plan and the routability of the calculated bandwidth allocations and calculated end-to-end delay, relative to predetermined requirements for reliability, bandwidth and end-to-end delay, wherein the electronic data transmission scheduling system further comprises estimation of traffic characteristics and demands.

10. A method for electronic data transmission scheduling for a computer network having a plurality of computerized data transmission nodes and a plurality of electronic aggregators, wherein each computerized data transmission node of the plurality of computerized data transmission nodes has at least one data transmission link of a plurality of data transmission links to another computerized data transmission node of the plurality of computerized data transmission nodes, and wherein each electronic aggregator of the plurality of electronic aggregators forwards data to a set of computerized data transmission nodes of the plurality of computerized data transmission nodes, a configuration of the plurality of computerized data transmission nodes and the plurality of data transmission links collectively forming a network topology, the method for electronic data transmission scheduling, comprising:

examining possible data transmission routes between computerized data transmission nodes of the plurality of computerized data transmission nodes and data transmission links of the plurality of data transmission links by an electronic reliability checker to determine a network mapping reliability score for the network topology, wherein the network mapping reliability score represents a probability that an electronic aggregator of the plurality of the electronic aggregators connects to at least one computerized data transmission node of a set of data transmission nodes given computerized data transmission node and data transmission link failure probabilities;

calculating a flow routing plan and bandwidth allocations for the network topology by an electronic bandwidth checker, wherein the electronic bandwidth checker develops calculated bandwidth allocations and a flow routing plan by testing different combinations of data transmission links from the plurality of data transmission links and different combinations of computerized data transmission nodes of the plurality of computerized data transmission nodes and computes whether all flows can be routed without overloading any data transmission link relative to bandwidth requirements; and

preparing a computer-implementable deployment strategy that directs rearrangement of the network topology by an electronic resource planner that applies the network mapping reliability score from the electronic reliability checker, traffic characteristics and demands, routing plan and the calculated bandwidth allocation from the electronic bandwidth checker to evaluate and calculate trade-offs between the network mapping reliability score and the calculated bandwidth allocation, such that the computer-implementable deployment strategy satisfies predetermined requirements for reliability and bandwidth,

wherein the electronic resource planner scheduling system further comprises mapping and association of computerized data transmission nodes and estimation of traffic characteristics and demands.

11. The method for electronic data transmission scheduling of claim 10, the method further comprising:

determining by the electronic reliability checker the network mapping reliability score for the network topology relative to a predetermined reliability requirement by calculating a lower reliability bound.

12. The method for electronic data transmission scheduling of claim 10, further comprising:

determining bandwidth allocations and routing plan by the electronic bandwidth checker for the network topology by:

evaluating a routing plan and the routability of different combinations of inter-traffic between computerized data transmission nodes and the plurality of computerized data transmission nodes given a network topology, given a mapping and association of the plurality of computerized data transmission nodes to the network topology and associations between computerized data transmission nodes and the plurality of computerized data transmission nodes, and estimated traffic characteristics and demands between mapped and associated computerized data transmission nodes; and

calculating bandwidth allocations for the network topology by the electronic bandwidth checker by determining whether all flows can be routed without overloading any transmission links from the plurality of data transmission links relative to predetermined bandwidth constraints, with all possible paths.

13. The method for electronic data transmission scheduling of claim 12, wherein the electronic resource planner evaluates and calculates the mapping and association of computerized transmission nodes and the plurality of computerized transmission nodes, by assessing the calculated reliability score, traffic characteristics and flow demands and the routability of the calculated bandwidth allocation and routing plan, relative to predetermined reliability and bandwidth requirements,

wherein the electronic resource planner scheduling system further comprises estimation of traffic characteristics and demands.

14. The method for electronic data transmission scheduling of claim 10, further comprising:

determining by an electronic flow delay checker an end-to-end delay of delivering a data packet from a sending computerized data transmission node of the plurality of computerized processing nodes to a destination computerized data transmission node of the plurality of computerized processing nodes over at least one data transmission link of a plurality of data transmission links;

wherein the electronic flow delay checker further evaluates whether a flow routing plan and calculated end-to-end delays satisfy estimated traffic characteristics and flow demands, calculated bandwidth allocations and predetermined end-to-end delay requirements;

wherein the electronic resource planner is further configured to employ a calculated end-to-end delay and routing plan by the flow delay checker and the network mapping reliability score from the electronic reliability checker, traffic characteristics and demands, and the calculated bandwidth allocation from the electronic bandwidth checker to develop a computer-implementable deployment strategy that directs rearrangement of the network topology by calculating trade-offs among the network mapping reliability score, the calculated bandwidth allocation, the calculated end-to-end delay and routing plan that satisfies predetermined requirements for reliability, bandwidth and end-to-end delay.

15. The method for electronic data transmission scheduling of claim 14, wherein determining the end-to-end delay by the electronic flow delay checker comprises calculating a delay bound representing a worst-case end-to-end delay in sending the data packet from the sending computerized data transmission node of the plurality of computerized processing nodes to the destination computerized data transmission node of the plurality of computerized processing nodes over the at least one data transmission link of a plurality of data transmission links; and evaluating and calculating by the electronic resource planner the mapping and association of computerized transmission nodes and the plurality of computerized transmission nodes, by assessing the calculated reliability score, traffic characteristics and demands, routing plan and the routability of the calculated bandwidth allocation and the calculated end-to-end delay, relative to predetermined reliability, bandwidth and end-to-end delay requirements,

wherein the electronic data transmission scheduling system further comprises estimation of traffic characteristics and demands.

16. The method for electronic data transmission scheduling of claim 10, wherein the electronic automated rescheduling system triggers recalculation of the electronic data transmission schedule, further comprising:

measuring and collecting information by the electronic network monitor about the computerized network performance and operational status; and

processing monitoring information by the electronic change detector from the electronic network monitor to automatically signal recalculation of the electronic data transmission schedule upon detected changes of the computerized network conditions.

17. The method for electronic data transmission scheduling of claim 10, further comprising:

automatically adjusting power levels of computerized nodes and data transmission links by an electronic power utilization manager following the electronic data transmission schedule by increasing or decreasing a power level for each element of computerized network equipment in the computer network depending upon a power state of the element.

Description:
Dynamic Deployment of Network Applications Having Performance and Reliability Guarantees in Large Computing Networks

FIELD

[0001] Embodiments of the invention relate to computerized systems and computerized methods configured to optimize data transmission paths in a large-scale computerized network relative to reliability and bandwidth requirements. Embodiments of the invention further relate to computerized systems and methods that direct and control physical adjustments to data transmission paths in a large-scale network’s composition of computerized data transmission nodes in order to limit data end-to-end transmission delay in a computerized network to a delay within a calculated worst-case delay bound.

BACKGROUND

[0002] The following description includes information that may be useful in understanding embodiments of the invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.

[0003] Large communication networks, such as T-Mobile’s North American network or Microsoft’s cloud network, are among the most complex industrial systems to be found anywhere in the world today. Managing these large communication networks efficiently while also satisfying all the demands placed on them is obviously not a trivial problem to solve. Many computer engineers and other technical experts have worked tirelessly to improve these systems so that they will continue to operate as intended while also being able to accommodate the increasingly complicated demands placed on them.

[0004] Computerized control planes are known for various types of communications networks, particularly large communications networks. A control plane comprises the portion of a router architecture that is concerned with drawing the network topology and/or the information in a routing table that defines what to do with incoming data packets. Control planes decide which routes go into a main routing table for unicast routes and into additional routing tables for multicast routes.

[0005] Control planes can be applied to access networks of nearly all types. Control planes offer well-known advantages including, among others, significant operating expense savings, capital expense savings, and improved traffic restoration. However, control plane technology presents some challenges as well, such as scalability issues and limited network operator control over the paths used by traffic in end-to-end services, with a consequent lack of resource optimization.

[0006] Prior art solutions for control plane scalability generally focus on aspects such as electronic aggregator-to-processing entity or processing entity-to-electronic aggregator delay reduction, computerized processing entity capacity and utilization optimization, flow setup time and communication overhead minimization. While these efforts in the prior art have been helpful, they have not yet provided an optimal solution for the problems they attempt to solve.

[0007] For large-scale programmable networks, flexible deployment of distributed control planes is essential for service availability and performance. However, the solutions available in the prior art only focus on placing controllers whereas the consequent control traffic is often ignored, leading to numerous drawbacks.

[0008] Similarly, the next generation of networked systems will likely include programmable and virtualized infrastructures. The accelerated development of network programmability and virtualization is a reasonable and expected development, as commodity hardware becomes increasingly cheaper and the demand for elastic cloud services continues to grow. In distributed and networked environments, deployment of virtual instances requires a robust deployment strategy to ensure service reliability and performance, such as bandwidth and flow delay. Here, virtual instances comprise physical hardware that has been configured in a logical (or virtual) configuration to perform as if the composition of physical hardware was an integrated unit. As cloud services become central to the digital transformation in research, industry, and finance, it becomes of increasing importance to consider the service performance within and across geo-distributed data centers. Addressing these challenges is fundamental to elastic services entailing effective resource usage and service performance requirements.

[0009] The ability to flexibly plan and deploy virtualized infrastructures with performance guarantees is relevant to many networked systems that rely on virtual entities requiring reliable data transactions, such as:

• software defined networks (“SDN”) operating using distributed control planes;

• network services implemented by virtual network functions (“VNF”); and

• cloud computing services running in distributed system architectures.

[0010] Common to these networked systems is the deployment of virtual entities (a virtual machine, a network controller instance or a virtual network function) which implement a service (cloud application, control service or network service). Coupled with digital transformation, there is a drastic increase in services centered around reliable data transactions with performance guarantees on bandwidth and flow delay.

[0011] In particular, flow delay is of growing importance as more services (e.g., Intelligent Transportation Systems (“ITS”)) require short response times for close to real-time applications. Similarly, big data computations carried out across geo-distributed data centers may in many cases require guaranteed performance. Accordingly, the way cloud computing tasks have been planned and deployed will impact service performance with respect to how quickly and accurately a cloud computing result can be delivered (some results could get lost due to, e.g., link loss).

[0012] As networks and services grow more complex, automated deployment of virtualized infrastructures becomes essential. Methods and systems enabling automated deployment typically require the capability for taking performance guarantees and reliability requirements into account. Application areas for implementation of improved solutions include control plane deployments, cloud computing, and network function virtualization (“NFV”).

[0013] While network control systems and services have improved in recent years, there nevertheless exists a continuous need to improve the design and operation of network control systems and services, especially where such improvements can be accomplished in a commercially reasonable fashion.

SUMMARY OF THE INVENTION

[0014] Embodiments of the invention provide an electronic data transmission scheduling system for a computer network having a plurality of computerized data transmission nodes and a plurality of electronic aggregators, wherein each computerized data transmission node of the plurality of computerized data transmission nodes has at least one data transmission link of a plurality of data transmission links to another computerized data transmission node of the plurality of computerized data transmission nodes, and wherein each electronic aggregator of the plurality of electronic aggregators forwards data to a set of computerized data transmission nodes of the plurality of computerized data transmission nodes, a configuration of the plurality of computerized data transmission nodes and the plurality of data transmission links collectively forming a network topology. Embodiments of the electronic data transmission scheduling system comprise an electronic reliability checker that examines possible data transmission routes between computerized data transmission nodes of the plurality of computerized data transmission nodes and data transmission links of the plurality of data transmission links to determine a network mapping reliability score for the network topology, wherein the network mapping reliability score represents a probability that an electronic aggregator of the plurality of the electronic aggregators connects to at least one computerized data transmission node of a set of computerized data transmission nodes given computerized data transmission node and data transmission link failure probabilities. Embodiments of the invention also include an electronic bandwidth checker configured to calculate bandwidth allocations for the network topology, wherein the electronic bandwidth checker develops calculated bandwidth allocations by testing different combinations of data transmission links from the plurality of data transmission links and different combinations of computerized data transmission nodes of the plurality of computerized data transmission nodes. Embodiments of the invention further comprise an electronic resource planner that uses the network mapping reliability score from the electronic reliability checker, estimated traffic demands in the electronic resource planner and the calculated bandwidth allocations from the electronic bandwidth checker to develop a computer-implementable deployment strategy that directs rearrangement of the network topology by calculating a trade-off between the network mapping reliability score and the calculated bandwidth allocation that satisfies predetermined requirements for each of the network mapping reliability score and the bandwidth requirements.

[0015] Embodiments of the invention further comprise the electronic data transmission scheduling system described above that further comprises an electronic flow delay checker that determines an end-to-end delay of delivering a packet from a sending computerized data transmission node of the plurality of computerized processing nodes to a destination computerized data transmission node of the plurality of computerized processing nodes over at least one data transmission link of a plurality of data transmission links; wherein the electronic resource planner is further configured to use the end-to-end delay along with the network mapping reliability score from the electronic reliability checker and the calculated bandwidth allocations from the electronic bandwidth checker to develop a computer-implementable deployment strategy that directs rearrangement of the network topology by calculating trade-offs among the network mapping reliability score, the calculated bandwidth allocations, and the end-to-end delay that satisfies predetermined requirements for reliability, bandwidth and end-to-end delay.

[0016] Embodiments of the invention further provide a method for electronic data transmission scheduling for a computer network having a plurality of computerized data transmission nodes and a plurality of electronic aggregators, wherein each computerized data transmission node of the plurality of computerized data transmission nodes has at least one data transmission link of a plurality of data transmission links to another computerized data transmission node of the plurality of computerized data transmission nodes, and wherein each electronic aggregator of the plurality of electronic aggregators forwards data to a set of computerized data transmission nodes of the plurality of computerized data transmission nodes, a configuration of the plurality of computerized data transmission nodes and the plurality of data transmission links collectively forming a network topology. Embodiments of the method for electronic data transmission scheduling comprise examining possible data transmission routes between computerized data transmission nodes of the plurality of computerized data transmission nodes and data transmission links of the plurality of data transmission links by an electronic reliability checker to determine a network mapping reliability score for the network topology, wherein the network mapping reliability score represents a probability that an electronic aggregator of the plurality of the electronic aggregators connects to at least one computerized data transmission node of a set of computerized data transmission nodes given computerized data transmission node and data transmission link failure probabilities. Embodiments of the method further comprise calculating bandwidth allocations for the network topology by an electronic bandwidth checker, wherein the electronic bandwidth checker develops flow routing plans and calculated bandwidth allocations by testing different combinations of data transmission links from the plurality of data transmission links and different combinations of computerized data transmission nodes of the plurality of computerized data transmission nodes. Embodiments of the invention also comprise preparing a computer-implementable deployment strategy that directs rearrangement of the network topology by an electronic resource planner that uses the network mapping reliability score from the electronic reliability checker and the estimated traffic demands in the electronic resource planner to calculate trade-offs between the network mapping reliability score and the calculated bandwidth allocations, such that the computer-implementable deployment strategy satisfies predetermined requirements for reliability and bandwidth.

[0017] Embodiments of the invention comprise the method for electronic data transmission scheduling as described above and further comprise determining by an electronic flow delay checker an end-to-end delay of delivering a packet from a sending computerized data transmission node of the plurality of computerized processing nodes to a destination computerized data transmission node of the plurality of computerized processing nodes over at least one data transmission link of a plurality of data transmission links; wherein the electronic resource planner is further configured to use the end-to-end delay along with the network mapping reliability score from the electronic reliability checker, estimated traffic demands in the electronic resource planner and the calculated bandwidth allocation from the electronic bandwidth checker to develop a computer-implementable deployment strategy that directs rearrangement of the network topology by calculating trade-offs among the network mapping reliability score, the calculated bandwidth allocation, and the calculated end-to-end delay that satisfies predetermined requirements for reliability, bandwidth and end-to-end delay.

[0018] Embodiments of the invention further comprise a method for automatic monitoring and decision-making for dynamically triggering recalculations of the electronic data transmission scheduling determined by observed changes of the networking conditions. This automatic monitoring and decision-making method may be employed to further minimize time delays in data transmission. Further, embodiments of the invention comprise determining the power utilization of computerized data transmission processing nodes and computerized data transmission nodes that are used or unused for electronic data transmission scheduling. Once the power utilization has been computed, then the power utilization data may be considered in the overall deployment of the data transmission scheme, according to an embodiment of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0019] Embodiments of the invention will be further explained by means of non-limiting examples with reference to the appended drawings. Figures provided herein may or may not be provided to scale. The relative dimensions or proportions may vary. It should be noted that the dimensions of some features of the present invention may have been exaggerated for the sake of clarity.

[0020] FIG. 01A illustrates an electronic data transmission scheduling system 160, according to an embodiment of the invention that operates with a computerized network 150 having a network topology.

[0021] FIG. 01B illustrates a flowchart that outlines a solution to the problems overcome by embodiments of the invention that may be obtained in six (three plus three) iterative optimization steps carried out in line with the computerized workflow 100, according to an embodiment of the invention.

[0022] FIG. 01C illustrates the optimization time with different implementations of the bandwidth verification when Simulated Annealing (denoted AA) is used for mapping and association, which are suitable for application with embodiments of the invention.

[0023] FIG. 01D illustrates the optimization time with different implementations of the bandwidth verification when the FTCP heuristics (FS) is used for mapping and association, which are suitable with embodiments of the invention.

[0024] FIG. 01E illustrates a representative time reduction ratio when comparing the CGH algorithm with CPLEX for network topologies of increasing size for uniform traffic (left) and control traffic patterns (right), respectively, suited for embodiments of the invention.

[0025] FIG. 01F illustrates the total running time for calculating a deployment plan for combinations of algorithms implementing various embodiments of the invention for one mid-sized topology.

[0026] FIG. 02 illustrates a controller computer-implementable deployment strategy operating under certain constraints and a given network topology 200, according to an embodiment of the invention.

[0027] FIG. 03 provides a graph of failure probability versus link bandwidth that may be used by a computerized system to determine the optimal trade-off between required reliability and associated bandwidth demands, according to an embodiment of the invention.

[0028] FIG. 04A provides a graph that quantifies the influence on the required bandwidth relative to the worst-case delay bound constraints of control traffic flows, according to an embodiment of the invention.

[0029] FIG. 04B provides a graph that quantifies the influence on the reliability relative to the worst-case delay bound constraints of control traffic flows, according to an embodiment of the invention.

[0030] FIG. 05 illustrates an example of graphical computation of delay bound and backlog bound using network calculus concepts, according to an embodiment of the invention.

[0031] FIG. 06 illustrates a computerized method in pseudo-algorithm form that assists the potential path variables in satisfying the delay and backlog constraints (9) and (10) while satisfying other appropriate conditions, according to an embodiment of the invention.

DETAILED DESCRIPTION OF AN EMBODIMENT OF THE INVENTION

[0032] Embodiments of the invention entail a computerized method and system that automates operation of a large network of distributed computerized processing entities that exchange information with each other in line with defined policies on bandwidth constraints, flow delay bounds and/or service reliability. An example for such a large network would be T-Mobile’s North American network or Microsoft’s cloud network.

[0033] Embodiments of the invention provide a computerized network management system or a computerized tool that systematically analyzes the impact of different computerized network or service deployments with respect to characteristics such as reliability, bandwidth, and flow delay. The output comprises a computer-operable and/or computer-implementable deployment strategy (or plan) that can be realized by having additional computing elements (or software in a computing element) initiate processing entities at planned locations along with the data flows following performance guarantees and reliability guarantees. The computer-implementable deployment strategy defines network and/or service control operations in line with calculated performance guarantees. Embodiments of the invention further relate to computerized systems and methods that direct and control the physical adjustments to data transmission paths in a large-scale network’s composition of computerized data transmission nodes in order to limit data end-to-end transmission delay in the computerized network to within a calculated worst-case delay bound.

[0034] Embodiments of the invention may further provide a black-box optimization framework that provides a computerized method and system that quantifies the effect of the consequent control traffic when deploying a distributed control plane in a computerized network. Evaluating different computerized implementations of the framework over real-world computerized network topologies shows that close to optimal solutions may be achieved. Moreover, experiments using such large-scale network computerized systems indicate that applying a computerized method for controller placement without considering the control traffic causes excessive bandwidth usage (worst cases varying between 20.1% and 50.1% higher bandwidth) and congestion when compared to various embodiments of the prior art, such as may be found in S. Liu, R. Steinert, and D. Kostic, “Flexible distributed control plane deployment,” in NOMS 2018-2018 IEEE/IFIP Network Operations and Management Symposium. Mini conference. IEEE, 2018, pp. 1-7. To ensure resource efficiency, service reliability and guaranteed performance, deployment of distributed control planes (fundamental for scalable management of traffic over physical and virtual data transmission nodes) takes both placement and control traffic properties into account.

[0035] Embodiments of the invention provide: 1) a novel formalization of the problem of distributed control plane optimization, enabling 2) tuning of reliability and bandwidth requirements. By analyzing the challenges and complexity of the computerized processing entity placement and traffic routability problem, embodiments of the invention introduce a generic black-box optimization process formulated as a feasibility problem. For embodiments of the invention, a computerized processing entity comprises a computer capable of executing computer instructions implementing the formulas detailed here, and especially a computer networked with other computers. Thus, embodiments of the invention provide computerized systems that specify each step of the process along with guiding implementations. Embodiments of the invention may be implemented as specialized electronic control systems configured for physical attachment to the other elements comprising a computer network. As such, embodiments of the invention can be used for planning a physical computer network with respect to bandwidth, reliability and delay requirements. Similarly, the specific components for directing and controlling such a physical computer network may comprise specialized hardware and other electronic devices that monitor the physical computer network, make physical adjustments to the flow of data through the network, and otherwise direct the operations for such a physical computer network.

[0036] In contrast to the prior art, an embodiment of the optimization process detailed here adds additional steps for quantifying the consequences of deploying a control plane solution that predetermines reliability and bandwidth requirements. As a powerful computerized prediction tool, service providers and operators can use embodiments of the invention to fine-tune control plane deployment policies for application in their systems. In combination with the generic design of the black-box optimization process, many existing computerized methods can be adapted and employed for control plane optimization and service management.

[0037] Embodiments of the invention also provide a computerized method and a computerized system that provides a capability for identifying how computerized processing entities and electronic aggregators in a network may be deployed and connected in a computerized network topology to implement a distributed service that satisfies reliability and performance requirements.

Embodiments of the invention may comprise specialized electronic components, specialized electronic circuits, specialized hardware, computerized software and/or hardware that may identify in/for a large computer network:

1. the set of computerized data transmission nodes operating as computerized processing entities;

2. the set of computerized data transmission nodes operating only as electronic aggregators and their association to computerized processing entities;

3. the connections between the computerized processing entities and the electronic aggregators; and

4. the routes of expected data flows, meeting the requirements on reliability, bandwidth and flow delay.

[0038] Embodiments of the invention further enable automated deployment of computerized processing entities and electronic aggregators in a computerized network topology considering reliability, bandwidth and flow delay that:

1. provides an optimized computer-implementable deployment strategy for the network in line with network operator specific requirements for flow delay, bandwidth or control service reliability;

2. predicts the demands on flow delay for an estimated computer-implementable deployment strategy given a fixed control service reliability and a fixed limit on bandwidth and a network topology;

3. predicts the demands on bandwidth for an estimated computer-implementable deployment strategy given a fixed control service reliability and a fixed limit on flow delay and a network topology;

4. predicts the achievable control service reliability for an estimated computer-implementable deployment strategy given fixed limits on flow delay and bandwidth and a network topology.

[0039] Embodiments of the invention enable guaranteed reliability in line with predetermined requirements and computerized systems executing the embodiments of the invention may output a deployment strategy that provides congestion-free connections between associated computerized processing entities and electronic aggregators in a manner that cannot be performed using prior art methods.

[0040] RELEVANT TERMINOLOGY AND DEFINITIONS

[0041] A computerized processing entity in a computerized network may be, e.g., a network controller; a computing unit (for example, a virtual machine (“VM”)); as well as a cluster of computerized processing entities (such as, a data center). A computerized processing entity may be virtual or physical. A virtual computerized processing entity comprises a collection of physical computerized processing entities configured to perform as if such systems comprise a unified entity. A network of distributed computerized processing entities may operate locally in a computerized network or computerized data center, or globally across geo-distributed computerized data centers.

[0042] An electronic aggregator comprises a computerized network entity that forwards network traffic (i.e., one or many network flows), e.g., a computerized gateway or a computerized network switch. An electronic aggregator may be, e.g., a computerized gateway, a computerized network switch, or a non-computerized device such as a field-programmable gate array (FPGA), which comprises an integrated circuit designed to be configured by a customer or a designer after manufacturing and contains an array of programmable logic blocks. An electronic aggregator may be virtual or physical. An electronic aggregator may include, for example, an OpenFlow switch in the context of Software-Defined Networks (“SDN”) or a radio access point in a Software-Defined Radio Access Network (“SoftRAN”) context. In any case, the electronic aggregator acts as a data forwarding device.

[0043] A data transmission node or network node or computerized data transmission node (the terms are synonymous) comprises a computerized network entity hosting an electronic aggregator. Suitable computerized network entities may include application-specific hardware. Put another way, a data transmission node includes at a minimum a central processing unit and related equipment (network connections, memory, etc.) needed for successfully executing the programs to accomplish its predetermined tasks, such as hosting an electronic aggregator. A data transmission node may or may not host a computerized processing entity configured to perform additional tasks. A node may be virtual or physical. A virtual data transmission node nevertheless comprises a collection of physical entities configured to operate in a collective manner to perform a defined task.

[0044] A data transmission link or network link (the terms are synonymous) comprises a connection between computerized data transmission nodes and may be virtual or physical. A virtual network link nevertheless comprises a collection of physical computing devices that have been arranged to perform a collective task.

[0045] A network link propagation delay refers to the signal propagation delay of the link.

[0046] A network topology comprises a set of interconnected computerized data transmission nodes via network links. Of course, the network topology may also include data transmission nodes comprised of application specific hardware. In short, the network topology here comprises a physical structure or a virtual structure (comprised of physical structures), and in all cases represents a physical entity and not a mere mathematical calculation.

[0047] Bandwidth refers to the amount of data that can be transmitted by a link during a time unit.
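As a concrete illustration of these definitions, the sketch below shows one possible in-memory representation of a topology carrying the quantities used throughout this description (link bandwidth, propagation delay, and node and link failure probabilities). The class and field names are hypothetical conveniences reused by the later sketches in this section; the embodiments described here do not prescribe any particular data structure.

from dataclasses import dataclass, field

@dataclass
class Link:
    src: str                  # name of one endpoint node
    dst: str                  # name of the other endpoint node
    capacity_mbps: float      # bandwidth the link can carry per time unit
    prop_delay_ms: float      # signal propagation delay of the link
    failure_prob: float       # probability that the link is down

@dataclass
class Node:
    name: str
    failure_prob: float                     # probability that the node is down
    hosts_processing_entity: bool = False   # True if a controller/VM is placed here

@dataclass
class Topology:
    nodes: dict = field(default_factory=dict)   # node name -> Node
    links: list = field(default_factory=list)   # list of Link (undirected)

    def neighbors(self, name):
        # Yield (neighbor name, Link) pairs for an undirected topology.
        for link in self.links:
            if link.src == name:
                yield link.dst, link
            elif link.dst == name:
                yield link.src, link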

[0048] An electronic bandwidth checker calculates a routability indicator λ in step 110 of FIG. 01B. In one embodiment of the invention, the electronic bandwidth checker may run algorithms such as FPTAS and est (described herein below) for this purpose. The electronic bandwidth checker outputs the routability indicator to the electronic resource planner. The electronic bandwidth checker can also output the corresponding flow routing plan along with the calculated bandwidth allocation per network link for routable solutions following the bandwidth requirements.
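The routability computation itself is left to algorithms such as FPTAS; the sketch below is only a simplified stand-in that conveys what the routability indicator expresses. Assuming the hypothetical Topology and Link classes above, it greedily routes each flow demand over a shortest path (by propagation delay) among links with enough residual capacity and returns an indicator together with a flow routing plan and per-link allocation. A real electronic bandwidth checker would use multi-commodity flow techniques rather than this greedy heuristic.

import heapq

def shortest_feasible_path(topo, src, dst, demand, residual):
    # Dijkstra restricted to links whose residual capacity can still carry `demand`.
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        if u == dst:
            break
        for v, link in topo.neighbors(u):
            if residual[id(link)] < demand:
                continue                      # link cannot carry this flow
            nd = d + link.prop_delay_ms
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, (u, link)
                heapq.heappush(heap, (nd, v))
    if dst != src and dst not in prev:
        return None                           # no feasible path for this demand
    path, node = [], dst
    while node != src:
        node, link = prev[node]
        path.append(link)
    return list(reversed(path))

def routability_indicator(topo, flows):
    # flows: list of (src, dst, demand_mbps). Returns (indicator, plan, allocation).
    residual = {id(link): link.capacity_mbps for link in topo.links}
    plan = {}
    for src, dst, demand in flows:
        path = shortest_feasible_path(topo, src, dst, demand, residual)
        if path is None:
            return 0, {}, {}                  # not routable under this heuristic
        for link in path:
            residual[id(link)] -= demand
        plan[(src, dst)] = path
    allocation = {id(link): link.capacity_mbps - residual[id(link)]
                  for link in topo.links}
    return 1, plan, allocation                # routable: plan plus per-link usage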

[0049] A network flow comprises a stream of data packets.

[0050] Flow delay represents the end-to-end delay of delivering a packet of a certain flow from the source node (sender) to the destination node (receiver) over one or several physical or virtual links. A virtual link nevertheless comprises a series of physical entities configured to perform a particular task, as if these entities comprised a single entity.

[0051] An electronic flow delay checker computes a routability indicator χ in the Delay and Backlog Verification step 111 of FIG. 01B. The electronic flow delay checker calculates Delay and Backlog bounds according to a network calculus model. The electronic flow delay checker outputs the delay and backlog bounds and a routability indicator to the electronic resource planner. The checker can also output the corresponding flow routing plan along with the calculated bandwidth allocation per network link for routable solutions following the flow delay requirements.
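For illustration, the short example below computes the kind of delay and backlog bounds referred to above under the textbook network calculus assumptions of a token-bucket arrival curve alpha(t) = b + r*t and a rate-latency service curve beta(t) = R*(t - T)+; in that setting the worst-case delay bound is T + b/R and the backlog bound is b + r*T, provided r <= R. These closed forms are standard network calculus results and are not necessarily the exact curves or composition rules used in the embodiments described here.

def delay_bound(burst_b, rate_r, service_rate_R, latency_T):
    # Worst-case delay bound: horizontal deviation between arrival and service curves.
    if rate_r > service_rate_R:
        raise ValueError("sustained rate exceeds service rate: no finite bound")
    return latency_T + burst_b / service_rate_R

def backlog_bound(burst_b, rate_r, service_rate_R, latency_T):
    # Worst-case backlog bound: vertical deviation between arrival and service curves.
    if rate_r > service_rate_R:
        raise ValueError("sustained rate exceeds service rate: no finite bound")
    return burst_b + rate_r * latency_T

# Example: a 2 Mbit burst and a 10 Mbit/s sustained rate served at 50 Mbit/s with
# 1 ms scheduling latency gives a delay bound of 0.041 s and a backlog bound of 2.01 Mbit.
print(delay_bound(burst_b=2.0, rate_r=10.0, service_rate_R=50.0, latency_T=0.001))
print(backlog_bound(burst_b=2.0, rate_r=10.0, service_rate_R=50.0, latency_T=0.001))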

[0052] Reliability is defined as the probability that an operational electronic aggregator is connected to at least one operational computerized processing entity, given data transmission node and link failure probabilities. Reliability guarantee corresponds to guaranteeing (deterministically or probabilistically) that a service request can be delivered to and handled by at least one computerized processing entity.

[0053] An electronic reliability checker tests and estimates the failure probabilities of the network. The electronic reliability checker calculates a reliability score R, which is input to the electronic resource planner.
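A Monte Carlo estimate of this reliability score is sketched below purely for illustration; the embodiments described here instead derive a lower reliability bound analytically. The sketch reuses the hypothetical Topology classes above, fails each node and link independently with its configured probability, and counts the fraction of samples in which the given electronic aggregator still reaches at least one node hosting a computerized processing entity.

import random
from collections import deque

def _reachable(topo, start, alive_nodes, alive_links):
    # Breadth-first search over surviving nodes and links.
    if start not in alive_nodes:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v, link in topo.neighbors(u):
            if id(link) in alive_links and v in alive_nodes and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def reliability_score(topo, aggregator, controller_nodes, samples=20000, seed=0):
    # Estimated probability that `aggregator` reaches any node in `controller_nodes`.
    rng = random.Random(seed)
    controllers = set(controller_nodes)
    hits = 0
    for _ in range(samples):
        alive_nodes = {name for name, node in topo.nodes.items()
                       if rng.random() >= node.failure_prob}
        alive_links = {id(link) for link in topo.links
                       if rng.random() >= link.failure_prob}
        if _reachable(topo, aggregator, alive_nodes, alive_links) & controllers:
            hits += 1
    return hits / samples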

[0054] A computer-implementable deployment strategy refers to setting up a network of distributed virtual or physical computerized processing entities exchanging data. Such a computer-implementable deployment strategy is the result of a multi-step process, including planning where in the physical network the computerized processing entities should perform their predetermined operations, as well as the flow paths among computerized processing entities across a physical network. Computer-implementable deployment strategies may be implemented by physical computing devices in an automated fashion. In such systems, the presence of a human is not required, and such systems may theoretically operate without intervention indefinitely.

[0055] An electronic resource planner executes Mapping (step 103), Association (step 105), Traffic estimation (step 107) and the condition test (step 117), given reliability and bandwidth and delay requirements and configuration and network topology input. Mapping and Association involve calculating cost functions based on achieved reliability, bandwidth and delay routability indicators. The cost functions are maintained throughout the iterative workflow process by the electronic resource planner. The electronic resource planner develops a computer-implementable deployment strategy based on evaluating the fulfilment of conditions from reliability/bandwidth/delay routability.
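The iterative mapping and association loop can be pictured as a cost-driven search; FIG. 01C mentions simulated annealing as one mapping/association strategy, and the skeleton below sketches such a loop. The cost terms, weights, and helper names (initial_placement, perturb, and the checkers object bundling the reliability, bandwidth, and delay checks) are illustrative assumptions, not the cost functions or workflow steps prescribed by the embodiments.

import math
import random

def initial_placement(topo, rng):
    # Start from a single randomly chosen processing-entity location.
    return {rng.choice(list(topo.nodes))}

def perturb(placement, topo, rng):
    # Move, add, or drop one processing-entity location.
    placement = set(placement)
    free = [n for n in topo.nodes if n not in placement]
    roll = rng.random()
    if roll < 0.4 and free:
        placement.add(rng.choice(free))
    elif roll < 0.8 and len(placement) > 1:
        placement.remove(rng.choice(sorted(placement)))
    elif free and placement:
        placement.remove(rng.choice(sorted(placement)))
        placement.add(rng.choice(free))
    return placement

def deployment_cost(placement, flows, checkers, requirements):
    # Penalize unmet reliability, bandwidth, and delay requirements (illustrative weights).
    reliability = checkers.reliability(placement)          # number in [0, 1]
    bandwidth_ok = checkers.bandwidth(placement, flows)     # True if flows routable
    delay_ok = checkers.delay(placement, flows)             # True if delay bounds met
    cost = max(0.0, requirements["reliability"] - reliability)
    cost += 0.0 if bandwidth_ok else 1.0
    cost += 0.0 if delay_ok else 1.0
    cost += 0.01 * len(placement)                            # prefer fewer entities
    return cost

def plan_deployment(topo, flows, checkers, requirements, iterations=1000, seed=0):
    rng = random.Random(seed)
    current = initial_placement(topo, rng)
    current_cost = deployment_cost(current, flows, checkers, requirements)
    best, best_cost = current, current_cost
    for k in range(iterations):
        temperature = max(1e-9, 1.0 - k / iterations)
        candidate = perturb(current, topo, rng)
        cand_cost = deployment_cost(candidate, flows, checkers, requirements)
        if (cand_cost < current_cost or
                rng.random() < math.exp((current_cost - cand_cost) / temperature)):
            current, current_cost = candidate, cand_cost
        if cand_cost < best_cost:
            best, best_cost = candidate, cand_cost
    return best, best_cost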

[0056] Limitations in the Prior Art

[0057] The early definition of the control plane in a software-defined network (“SDN”) setting assumes that one computerized processing entity (in this context an SDN controller or control instance) handles flow requests over a set of associated electronic aggregators (i.e. SDN switches). To improve reliability and scalability, more recent solutions propose a distributed control plane, consisting of multiple physically distributed but logically centralized control instances. Under distributed conditions, both placement and routing of data traffic between processing entities should be considered to avoid a dramatic increase in communication overhead. The prior art in distributed control plane deployment typically ignores control plane traffic routability (e.g. bandwidth constraint and flow delay limitations) and only considers placement optimized for scalability or reliability.

[0058] For example, the prior art in control plane scalability mainly focuses on aspects such as switch-to-computerized processing entity or computerized processing entity-to-computerized processing entity delay reduction, computerized processing entity capacity and utilization optimization, and flow setup time and communication minimization. Prior art examples having reliability models or measurements employ intuitive objective functions to obtain a placement solution, without providing an estimate of the achieved reliability. One prior art solution, for example, proposed a method for estimating the network reliability in polynomial time and providing a lower bound of the actual reliability, along with a heuristic algorithm to decide the number of processing entities and their locations, to guarantee a certain reliability requirement. However, the prior art does not address control traffic routability.

[0059] Cloud computing comprises shared pools of configurable computer systems and system resources and higher-level services that can be automatically and rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility.

[0060] In cloud computing, problems similar to the above-mentioned control plane deployment problem also exist. The placement of virtual machines (“VMs”) having an awareness of network resources and inter-VM traffic demands has been studied in the prior art. One prior art solution proposed VM placement approaches with consideration of the inter-traffic between VMs, by assuming a very simple model of the underlying network (without network topology and routing considerations). In other prior art, methods for placing a fixed number of VMs with traffic consideration based on networks inside a data center have been proposed. For example, one on-demand VM placement solution in the prior art assumes a simple network model which is insufficient in real network applications.

[0061] In the context of elastic network services, a virtual network function can be viewed as a VM with a point-to-point source destination traffic demand. Some prior art focuses on the placement and routing of a given set of VNFs inside a data center. One prior art solution proposes heuristic algorithms for dynamic on-demand placement and routing of VNFs.

[0062] Many existing solutions overlook the importance of considering the traffic as part of the solution. Dealing with the traffic associated with certain computer-implementable deployment strategies is typically ignored in the prior art, although traffic flows need to be forwarded timely and reliably through a network infrastructure having varying link capacities, availability, and other networking properties. Traffic congestion, for example, is especially destructive since it may degrade service performance, or worse, cause availability issues - the latter cannot be tolerated in services critical to human safety, for example.

[0063] The inventors have discovered the importance of considering reliability, bandwidth and delay of flows between networking entities (both physical and virtual), factors which have been severely overlooked in the prior art. Dealing with the traffic flows of certain infrastructure deployments is often ignored, although it is essential for most services that the network traffic is forwarded timely through networks exhibiting varying network properties related to service availability and bandwidth.

[0064] The inventors have further discovered that at best, the existing state of the art only partially deals with the problem, either by proposing solutions that deal with deployment without the proper consideration of all types of traffic flows, or only deployment without involving certain steps like determining the number of processing entities and data transmission paths. Besides, important aspects related to reliability, bandwidth and flow delay have not been properly modelled and studied until the inventors recognized their importance in developing an improved computerized solution to the problem.

[0065] Because of these shortcomings, prior art approaches fail to support reliable service elasticity, despite this being a fundamental feature for scalable network service operation and availability. Not considering the reliability, flow delay and bandwidth aspects of the traffic exchanged between entities may consequently lead to serious scalability and service reliability issues in large-scale distributed systems.

[0066] It is worth noting that flow delay optimization based on empirical objectives is essentially different from embodiments of the present invention. The present invention is based on well-formulated theory and models, whereas much of the prior art is based on empirical objectives and cannot calculate and guarantee a worst-case flow delay bound.

[0067] Formal Problem Definition Associated with Embodiments of the Invention

[0068] Assumptions and Requirements

[0069] FIG. 01A illustrates an electronic data transmission scheduling system 160, according to an embodiment of the invention, that operates with a computerized network 150 having a network topology, exemplified in the context of a distributed control plane. The electronic data transmission scheduling system 160 includes an electronic reliability checker 161, an electronic bandwidth checker 162, and an electronic resource planner 164, according to an embodiment of the invention. Some embodiments of the invention also include an electronic flow delay checker 163. The electronic data transmission scheduling system 160 may comprise one or more computers, a dedicated circuit, a field-programmable gate array (FPGA) and other suitable devices as known to a person of ordinary skill in the relevant field. The electronic data transmission scheduling system 160 may even be combined with other network devices in some embodiments of the invention.

[0070] An electronic automated rescheduling system 165, comprising an electronic network monitor 166 and an electronic change detector 167, can be used for initiating the calculation of an updated deployment plan upon changed network conditions of the computerized network 150 (e.g., changed traffic patterns or computerized node or link failures). The electronic power utilization manager 168 controls the energy usage of used and unused computerized data transmission nodes and links. While the electronic power utilization manager 168 is shown as being outside the electronic data transmission scheduling system 160, in some embodiments, the electronic power utilization manager 168 could be constructed inside the electronic data transmission scheduling system 160.

[0071] The computerized network 150 will first be described to illustrate the environment in which the electronic data transmission scheduling system 160 operates, according to an embodiment of the invention. The computerized network 150 has a network topology applicable for processing by embodiments of the invention.

[0072] The network 150 is the object that the electronic data transmission scheduling system 160 processes, according to an embodiment of the invention. Given a network topology with data transmission nodes and links (as defined herein), the electronic data transmission scheduling system 160 determines which data transmission nodes of the network 150 contain the processing entities, and which nodes are simply electronic aggregators.

[0073] The electronic data transmission scheduling system 160 may operate outside the network 150. The electronic data transmission scheduling system 160 controls, configures, and deploys the control plane for the network 150. However, the electronic data transmission scheduling system 160 may typically not be a part of the network 150, according to an embodiment of the invention. For example, the electronic data transmission scheduling system 160 may operate remotely, or even offline (e.g., used only for booting up the network 150).

[0074] The data transmission nodes (computerized processing entities) 171, 173, together with the computer nodes 191-196, comprise a set of data transmission nodes.

[0075] Given the sample network 150 in FIG. 01A, by running the electronic resource planner 164, the electronic resource planner 164 determines (for example) a mapping for the network 150 indicating that it is preferable to launch processing entities in data transmission nodes 171, 173 in order to achieve a desired network mapping reliability score as well as a desired data end-to-end transmission delay. The electronic resource planner 164 then configures the network 150 so that processing entities are launched in data transmission nodes 171, 173. See, e.g., the description of the electronic resource planner 164 with respect to step 109 shown in FIG. 01B, as well as the definition provided for the electronic resource planner 164 hereinbelow.

[0076] Notice that “mapping” refers to which data transmission nodes are selected and configured to contain the processing entities. The configuration may happen either online or offline, according to an embodiment of the invention.

[0077] After being processed by the electronic resource planner 164, the network 150 comprises a distributed control plane comprising two control planes 181, 183 that each comprise representative electronic aggregators 191-196 but could, of course, include a significantly greater number of electronic aggregators. Each control plane 181, 183 includes data transmission nodes 171, 173. The data transmission nodes 171, 173 comprise computerized devices and/or hardware that handle control traffic within their respective control planes 181, 183. The computer nodes 191-196 correspond to electronic aggregator entities and handle the data traffic in their respective control planes 181, 183.

[0078] The solid lines between the elements in the computer network 150 represent data transmission links. The dashed lines represent switch-control traffic, e.g., management instructions from a network manager to a given computerized node. The large solid line connecting the control planes 181 , 183 represents control-to-control traffic, e.g., instructions between the two data transmission nodes 171 , 173. The collective impact of these control instructions is to make the control planes 181 , 183 function collectively as a distributed control plane in the computerized network 150.

[0079] The electronic reliability checker 161 examines possible data transmission routes between computerized data transmission nodes, involving data transmission nodes 171-173 and computerized nodes 191-196 and the data transmission links, to determine network mapping reliability scores for the network topology. The electronic resource planner 164 tests different mappings of processing entities based on reliability scores provided by the reliability checker 161, wherein the network mapping reliability score represents a probability that an electronic aggregator 191-196 connects to at least one processing entity 171-173 within a set of data transmission nodes (171, 173) or (193-196), given computerized data transmission node and data transmission link failure probabilities, according to an embodiment of the invention. The electronic reliability checker 161 calculates the measures described herein that pertain to its outputs. The electronic reliability checker 161 may comprise one or more computers, a dedicated circuit, a field-programmable gate array (FPGA) and other suitable devices as known to a person of ordinary skill in the relevant field.

[0080] The electronic bandwidth checker 162 calculates a flow routing plan along with the calculated bandwidth allocation per network link for routable solutions (indicated by λ) following the bandwidth requirements, wherein the electronic bandwidth checker 162 develops calculated bandwidth allocations by testing different combinations of data transmission links and different combinations of computerized data transmission nodes 171-173 and 191-196, according to an embodiment of the invention. The electronic bandwidth checker 162 calculates the measures described herein that pertain to its outputs. The electronic bandwidth checker 162 may comprise one or more computers, a dedicated circuit, a field-programmable gate array (FPGA) and other suitable devices as known to a person of ordinary skill in the relevant field.

[0081] The electronic resource planner 164 uses the network mapping reliability score from the electronic reliability checker 161 , the traffic estimates from the electronic resource planner 164 and the calculated bandwidth allocation from the electronic bandwidth checker 162 to develop a computer-implementable deployment strategy that directs rearrangement of the network topology by calculating a trade-off between the network mapping reliability score and the calculated bandwidth allocation that satisfies predetermined requirements for reliability and the bandwidth, according to an embodiment of the invention. The electronic resource planner 164 calculates the measures described herein that pertain to its outputs. The electronic resource planner 164 may comprise one or more computers, a dedicated circuit, a field-programmable gate array (FPGA) and other suitable devices as known to a person of ordinary skill in the relevant field.

[0082] The electronic flow delay checker 163 determines an end-to-end delay of delivering a packet from a sending computerized data transmission node 171 -173 and 191 -196 to a destination computerized data transmission node 171 - 173 and 191 -196 over at least one data transmission link of a plurality of data transmission links, according to an embodiment of the invention. The electronic flow delay checker 163 will be described in further detail below. The electronic flow delay checker 163 calculates the measures described herein that pertain to its outputs. The electronic flow delay checker 163 may comprise one or more computers, a dedicated circuit, a field-programmable gate array (FPGA) and other suitable devices as known to a person of ordinary skill in the relevant field. For embodiments of the invention including the electronic flow delay checker 163, the electronic resource planner 164 is modified to include the end-to-end delay determination of the electronic flow delay checker along with the outputs of the electronic reliability checker 161 , the traffic estimates from the electronic resource planner 164 and the electronic bandwidth checker 162 to develop a computer-implementable deployment strategy that directs rearrangement of the network topology by calculating trade-offs among the network mapping reliability score, the calculated bandwidth allocation, and the calculated end-to-end delay that satisfies predetermined requirements on reliability, bandwidth and end-to- end delay, according to an embodiment of the invention.

[0083] The electronic network monitor 166 of the electronic automated rescheduling system 165 is a computerized system that monitors the operational state and performance of the computerized network and the network traffic. The electronic change detector 167 of the electronic automated rescheduling system 165 comprises a computerized system that detects changes in the monitored operational state and performance of the computerized network and network traffic. The electronic change detector can detect changes in end-to-end transmission delays, flow demands, reliability, link or node failures, and/or any other change which influences the reliability and/or performance requirements of the deployed control plane. When the electronic change detector 167 detects a change, the electronic automated rescheduling system 165 signals the electronic data transmission scheduling system 160 to calculate a new control plane deployment strategy, according to an embodiment of the invention.

[0084] The electronic power utilization manager 168 is a computerized system that automatically controls the energy usage of data transmission nodes and links. After the electronic data transmission scheduling system 160 has output a computer-implementable deployment strategy, according to an embodiment of the invention, the electronic power utilization manager 168 adjusts the energy levels of computerized data transmission nodes and links. The envisaged effect is that the energy level of computerized data transmission nodes and links is adjusted to the level of utilization, such that, e.g., unused links and nodes are powered off completely or operated at lower power levels.

[0085] As shown in FIG. 01A, G(V = T ∪ M, E) can be used to represent a graph of the computerized network topology 150, where V and E denote the computerized data transmission nodes 191-194 and 171-173 and the data transmission links between the computerized data transmission nodes 191-194 and 171-173, respectively. T denotes the set of nodes holding electronic aggregators and M represents a candidate set of nodes eligible for hosting a computerized processing entity. N ⊆ T denotes the set of electronic aggregators that needs to be associated with a computerized processing entity. Further, let each node n ∈ V have a given probability of being operational, denoted by p_n. Analogously, data transmission links (u, v) ∈ E are operational with probability p_{u,v}. Assume different independent and identically distributed (“i.i.d.”) operational probabilities for network links and nodes. Note that this probability can be predetermined based on expert knowledge or inferred by learning about the system performance over time.

[0086] A variety of methods may be employed for calculating the delay bound. For example, some embodiments of the invention may employ Network Calculus (“NC”) while other embodiments of the invention may employ queuing theory for calculating a stochastic delay bound. The examples herein employ NC, but the ordinary artisan will be aware that other techniques may be employed to calculate the delay bound. Network Calculus (“NC”) may be applied for calculating the worst-case delay and buffer space requirements of flows in a network, according to an embodiment of the invention. NC, a system theory for communication networks, is further described below in connection with FIG. 5. To apply NC, an estimate (b_f, d_f) of the arrival curve of each flow is required, where b_f denotes the burstiness and d_f is the sustainable arrival rate (throughput). In addition, the computerized system also computes the Equivalent Service Curve (“ESC”) γ_f = (r, T_f) offered by the network for the flow f, where r is the service rate and T_f is the maximum service delay. With the arrival curve and the ESC, the computerized system can then derive the delay bound D_f and buffer bound B_f of flow f.

[0087] To forward traffic with bounded delay, each node employs a guaranteed-performance service discipline to schedule flows that require the same output link. For example, suppose the bandwidth and propagation delay of an output link e are (u_e, t_e), and the reserved and guaranteed service rate of a flow is r_e, r_e ≤ u_e. Then, if the electronic data transmission scheduling system 160 employs the Weighted Fair Queuing (“WFQ”) discipline, the service curve for the flow is γ_f = (r_e, L_max/r_e + L_max/u_e), where L_max is the maximum packet size, r_e is the service rate, and L_max/r_e + L_max/u_e is the maximum service delay. Suppose the flow has an arrival curve of (b_f, d_f); according to NC, the delay bound of the flow for traversing the link is D_f = L_max/r_e + L_max/u_e + t_e + b_f/r_e, and the backlog bound is B_f = b_f + (L_max/r_e + L_max/u_e)·d_f. If the electronic data transmission scheduling system 160 employs the Self-Clocked Fair Queueing (“SCFQ”) discipline, the service curve is γ_f = (r_e, L_max/r_e + Σ_{m≠f} L_max/u_e), where m ranges over all the flows sharing the scheduler and f denotes the target flow. The ordinary artisan may find additional references for more guaranteed-performance service disciplines and their corresponding service curves.
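
By way of a purely illustrative sketch (not forming part of the specification; all numeric values are hypothetical), the WFQ expressions above can be evaluated as follows, with variable names mirroring the notation (L_max, u_e, t_e, r_e, b_f, d_f):

# Illustrative sketch: per-link delay and backlog bounds for a flow scheduled
# under WFQ, following the Network Calculus expressions given above.
# The numeric values in the example are hypothetical.

def wfq_link_bounds(L_max, u_e, t_e, r_e, b_f, d_f):
    """Return (delay_bound, backlog_bound) for one link under WFQ."""
    assert r_e <= u_e, "reserved rate cannot exceed link bandwidth"
    latency = L_max / r_e + L_max / u_e          # service-curve latency term
    delay_bound = latency + t_e + b_f / r_e      # D_f
    backlog_bound = b_f + latency * d_f          # B_f
    return delay_bound, backlog_bound

if __name__ == "__main__":
    # Hypothetical numbers: 1500-byte packets on a 1 Gb/s link,
    # 10 Mb/s reserved rate, 1 ms propagation delay, 50 kB burst.
    D, B = wfq_link_bounds(L_max=1500 * 8, u_e=1e9, t_e=1e-3,
                           r_e=10e6, b_f=50_000 * 8, d_f=5e6)
    print(f"delay bound: {D * 1e3:.2f} ms, backlog bound: {B / 8:.0f} bytes")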

[0088] Formal Problem Solved by Embodiments of the Invention

[0089] The electronic reliability checker 161 shown in FIG. 01A determines the network mapping reliability score as follows, according to an embodiment of the invention. Let the binary variables y_i denote the computerized processing entity locations, where y_i = 1 if node i ∈ M hosts a computerized processing entity, and y_i = 0 otherwise. Define C = {i | y_i = 1, i ∈ M} to denote the set of deployed computerized processing entities. Let the binary variable a_ij = 1 if electronic aggregator j ∈ N is associated with the computerized processing entity in i ∈ C, and a_ij = 0 otherwise. Although each electronic aggregator j can only be associated with one computerized processing entity at a time, it may have multiple backup computerized processing entities (e.g., in the case of processing entities with the OpenFlow v1.2 protocol). The distance between an electronic aggregator j and a computerized processing entity can be one network link hop or multiple hops. The reliability of node j is represented as R(G, j, C) (among |C| computerized processing entities), capturing the probability of node j connecting with at least one of the operational computerized processing entities. Solutions satisfying the constraints given topological conditions and reliability threshold β are found by R_min = min(R(G, j, C), ∀j ∈ N) ≥ β.

[0090] The electronic bandwidth checker 162 shown in FIG. 01A calculates the bandwidth allocations for the network topology as follows, according to an embodiment of the invention. Let (u_e, t_e) represent the bandwidth capacity and propagation delay of each data transmission link e ∈ E, and let Γ = {(u_e, t_e); ∀e ∈ E}. Let (s_f, t_f) be the (source, sink) of traffic flow f, and let (b_f, d_f) denote the burstiness and demand (throughput) of f. Let F = {f = (s_f, t_f, (b_f, d_f))} be the set of all the traffic of the deployed infrastructure. Let F_c ⊆ F be the inter-traffic of computerized processing entities, such that F_c = {f = (s_f, t_f, (b_f, d_f)) | s_f ∈ C, t_f ∈ C}. Let K_f denote all the possible non-loop paths for f ∈ F, and let κ = ∪_f K_f. Let the binary decision variable δ(K), ∀K ∈ K_f, denote whether path K is selected and used for the flow routing. Let the variable X(K) denote the reserved guaranteed service rate for the flow along path K, ∀K ∈ κ. There will be a sub-flow on path K if and only if δ(K) = 1 and X(K) > 0. Let X_f = {X(K) | K ∈ K_f} denote the reserved guaranteed service rates on all the paths of f. Let D_max denote the delay bound constraint and B_max denote the backlog bound constraint. The electronic resource planner 164 shown in FIG. 01A may also compute some of the quantities above, according to an embodiment of the invention.

[0091] The electronic flow delay checker 163 shown in FIG. 01A may calculate the delay bound of a flow as follows, according to an embodiment of the invention. Embodiments of the invention assume a flow can be split and routed on a list of selected paths κ_f = {K | δ(K) = 1, K ∈ K_f}, with each path routing a sub-flow f^(K), K ∈ κ_f. Thus, to calculate the delay bound D_f and buffer bound B_f of a flow f, a computerized method and system needs to calculate the delay and backlog bounds of each sub-flow f^(K), denoted D_f^(K) and B_f^(K), respectively, since D_f = max(D_f^(K)) and B_f = max(B_f^(K)). Embodiments of the invention use (b_f^(K), d_f^(K)) in computerized methods and systems to denote the burstiness and arrival rate (throughput) of the sub-flow f^(K). The burstiness of each sub-flow should be less than or equal to the burstiness of the aggregated flow f, according to an embodiment of the invention. Considering the worst case, the computerized method and system can assume b_f^(K) = b_f, ∀K ∈ K_f, according to an embodiment of the invention. The summation of the arrival rates of all the sub-flows should equal that of the aggregated flow, which means Σ_{K∈K_f} d_f^(K) = d_f.

[0092] For each sub-flow f^(K), given X(K) and the output link bandwidth and propagation delay (u_e, t_e) at each data transmission link e along the path, by applying Network Calculus (“NC”) and the corresponding service discipline, embodiments of the invention in the form of computerized methods and systems calculate the sub-flow f^(K)’s service curve at data transmission link e as γ_e^(K) = (X(K), T_e). Suppose the path of f^(K) has k nodes; then computerized methods and systems may calculate the total ESC of the path as γ_f^(K) = ⊗_{e∈K} γ_e^(K), by applying the concatenation property of NC. Here ⊗ denotes the min-plus convolution operation in NC. Then, by applying NC, embodiments of the invention derive the delay bound of each sub-flow as D_f^(K) = T_s,f^(K) + b_f^(K)/X(K) + Σ_{e∈K} t_e and the backlog bound as B_f^(K) = b_f^(K) + T_s,f^(K)·d_f^(K), where T_s,f^(K) is the service latency of the path, which depends on the service discipline used (e.g., Weighted Fair Queueing). The delay-bounded deployment problem requires that, for all non-zero sub-flows, D_f^(K) ≤ D_max and B_f^(K) ≤ B_max, ∀K ∈ K_f, ∀f ∈ F.
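
As a purely illustrative sketch of the path-level calculation above (assuming rate-latency service curves with a common rate X(K), so that the min-plus concatenation yields a latency equal to the sum of the per-link latencies T_e; the numeric values are hypothetical):

# Illustrative sketch: delay and backlog bounds for a sub-flow routed on a
# path K, obtained by concatenating the per-link rate-latency service curves
# (X_K, T_e) under Network Calculus.

def subflow_path_bounds(X_K, links, b_f, d_K):
    """links: list of (T_e, t_e) pairs, i.e. per-link service latency and
    propagation delay along the path K.  Returns (D_K, B_K)."""
    assert d_K <= X_K, "arrival rate must not exceed the reserved rate"
    T_s = sum(T_e for T_e, _ in links)       # ESC latency of the whole path
    prop = sum(t_e for _, t_e in links)      # total propagation delay
    D_K = T_s + b_f / X_K + prop             # delay bound of the sub-flow
    B_K = b_f + T_s * d_K                    # backlog bound of the sub-flow
    return D_K, B_K

if __name__ == "__main__":
    # Hypothetical 3-hop path, 10 Mb/s reserved rate, 50 kB worst-case burst.
    links = [(2e-3, 1e-3), (2e-3, 0.5e-3), (2e-3, 1e-3)]
    print(subflow_path_bounds(X_K=10e6, links=links, b_f=50_000 * 8, d_K=5e6))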

[0093] Using the above method, embodiments of the invention operating as computerized systems formulate the problem to be solved by computerized methods and systems as follows:

[0094] The electronic bandwidth checker 162 shown in FIG. 01A may be configured to execute equations (5), (6), and (7), according to some embodiments of the invention.

[0095] Processing components and workflow for embodiments of the invention.

[0096] Embodiments of the invention constructed as computerized methods and systems may solve two problems that arise during deployment: 1) determining the number of computerized processing entities, their location and relative connectivity; and, 2) information flow configuration across the infrastructure of deployed computerized processing entities relative to specific performance requirements on bandwidth, flow delay and reliability.

[0097] FIG. 01 B illustrates a solution to the problems described above obtained in a series of interlinked, iterative optimization steps carried out in line with the computerized workflow 100, according to an embodiment of the invention. The computerized workflow 100 is suitable for implementation in computerized systems, according to an embodiment of the invention. In particular, the computerized workflow 100 is suitable for implementation via the architecture described in FIG. 01 A, according to an embodiment of the invention.

[0098] The electronic resource planner 164 performs an initial configuration and input (Step 101 ) that comprises specifying a network topology and its networking properties for a large computerized system such as the computerized system 150 shown in FIG. 01A. The electronic resource planner 164 also inputs operator-specified constraints on the service to be deployed with respect to bandwidth, flow delay and reliability, according to an embodiment of the invention.

[0099] The electronic resource planner 164 next performs a mapping (Step 103) of the computerized processing entities in a large network, such as the computerized system 150 shown in FIG. 01 A, given the configuration of and inputs to the large network, according to an embodiment of the invention. The electronic resource planner 164 outputs from Step 103 a computer-readable processing entity location map, according to an embodiment of the invention.

[0100] The electronic resource planner 164 next performs an association (Step 105) which defines the connectivity among computerized processing entities and electronic aggregators in the computerized system 150 shown in FIG. 01A. The electronic resource planner 164 outputs a computer-readable association plan, according to an embodiment of the invention.

[0101] The electronic resource planner 164 next performs a traffic estimation (Step 107) that computes the demand and burstiness of each flow according to the input association plan given a network-operator specified traffic model for the computerized system 150 shown in FIG. 01A, according to an embodiment of the invention.

[0102] The electronic bandwidth checker 162 next performs a routability check (step 109) that includes bandwidth verification (step 110) and may optionally include delay and backlog bound verification (step 111), according to an embodiment of the invention. The combined “bandwidth routability verification” (steps 109/110) tests whether all flows can be routed without overloading any data transmission link relative to specified bandwidth constraints. The inputs to steps 109/110 are the set of network topology properties and the estimated flow demand and burstiness previously calculated by the electronic resource planner 164 in the traffic estimation step 107. The output is a routability indicator λ. If λ > 1, a flow routing plan can be retrieved from the bandwidth checker 162. When the bandwidth checker 162 has performed the bandwidth verification steps 109/110, operations may move to the next step 111. Note: executing step 111 is optional and only required if flow delays are to be accounted for in the deployment plan. If step 111 is not executed, the electronic resource planner will assess the conditions in step 117.

[0103] The electronic flow delay checker 163 performs a delay and backlog verification (step 111) that tests whether the estimated flows can be routed under the given flow delay and buffer space requirements under the conditions defined by the deployment plan. The electronic flow delay checker 163 also calculates delay and backlog bounds based on the input from the traffic estimates previously calculated in the traffic estimation step 107 in the electronic resource planner. If the routability indicator from the previous step 110 in the electronic bandwidth checker is λ < 1, then the routability indicator χ will be initialized to χ = λ, meaning that step 111 will not be executed, and the operation moves on to step 117. Otherwise, step 111 will be executed and the output of the electronic flow delay checker 163 will be the calculated routability indicator χ and a flow routing plan, followed by execution of step 117.

[0104] In the condition step 117 executed in the electronic resource planner 164, the electronic resource planner 164 determines whether the estimated flows are routable by inspecting a routability indicator. If routable, the electronic resource planner 164 outputs a computer-implementable deployment strategy that meets all requirements. If the electronic resource planner 164 determines that the estimated flows are not routable by inspection of a routability indicator, and the end-conditions are met (i.e., the maximum number of iterations), the electronic resource planner 164 will output a null-deployment strategy (a form of computer-implementable deployment strategy), meaning that no satisfactory solution was found. Otherwise, the electronic resource planner 164 performs the association step (step 105) or the mapping step (step 103), according to an embodiment of the invention. The order for signaling the redoMapping (113) or redoAssociation (115) and the number of iterations for re-running step 103 and step 105 can be flexibly defined in an embodiment of the invention.
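
By way of a purely illustrative, non-limiting sketch of the control flow of the computerized workflow 100 (the step functions are hypothetical placeholders supplied by the caller, not the implementations of the specification):

# Sketch of the iterative workflow 100 (steps 101-117).  The callables in
# `steps` ('mapping', 'association', 'traffic', 'bandwidth', 'delay') stand
# in for the components of FIG. 01A and are assumptions of this sketch.

def run_workflow(topology, requirements, steps, max_iters=100,
                 use_delay_check=True):
    for _ in range(max_iters):
        mapping = steps["mapping"](topology, requirements)            # step 103
        assoc = steps["association"](topology, mapping)               # step 105
        flows = steps["traffic"](topology, mapping, assoc)            # step 107
        lam, routing = steps["bandwidth"](topology, flows)            # steps 109/110
        if use_delay_check:
            if lam < 1:
                chi = lam                     # step 111 skipped, chi := lambda
            else:
                chi, routing = steps["delay"](topology, flows, routing)  # step 111
            routable = chi >= 1
        else:
            routable = lam >= 1
        if routable:                                                  # step 117
            return {"mapping": mapping, "association": assoc,
                    "routing": routing}
        # not routable: signal redoMapping / redoAssociation and iterate
    return None  # null-deployment strategy: no satisfactory solution was found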

[0105] In various embodiments of the invention, the electronic resource planner 164 may flexibly specify the end-conditions. For example, the electronic resource planner may end the computerized method illustrated in FIG. 01 B after a maximum number of iterations, or when all the routability and reliability requirements have been fulfilled. In case the identification of a feasible computer-implementable deployment strategy fails before the end-conditions are met, the electronic resource planner 164 has the following options:

• change or redefine the topological properties, or

• check the feasibility of the requirements on bandwidth, flow delay or reliability.

[0106] Embodiments of the invention may also employ the second option for optimization purposes, since computerized tools may fix two requirements of the three and optimize the remaining one using binary search, as sketched below. For example, embodiments of the invention can fix the flow delay and reliability requirements, and search for the minimum bandwidth needed to satisfy the flow delay and reliability requirements.
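
A purely illustrative sketch of this binary-search usage follows; the callable is_feasible(bandwidth) is a hypothetical wrapper around the computerized workflow 100 with the flow delay and reliability requirements held fixed:

# Sketch: fix the flow-delay and reliability requirements and binary-search
# for the minimum link bandwidth for which the workflow still returns a
# feasible deployment.  `is_feasible` is a hypothetical callable.

def min_feasible_bandwidth(is_feasible, lo, hi, tol=1e6):
    """Return the smallest bandwidth in [lo, hi] (bits/s) judged feasible,
    or None if even `hi` is infeasible."""
    if not is_feasible(hi):
        return None
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if is_feasible(mid):
            hi = mid        # feasible: try less bandwidth
        else:
            lo = mid        # infeasible: more bandwidth is needed
    return hi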

[0107] Some practical considerations

[0108] The computerized workflow 100 may be formally expressed mathematically in terms of a feasibility problem, according to an embodiment of the invention. The underlying routing mechanisms of the computerized infrastructure influence the formulation of the problem: if flows cannot be split and routed, then an additional constraint needs to be added on the decision variables δ(K), e.g., Σ_{K∈K_f} δ(K) = 1, ∀f ∈ F. Computerized systems, such as the computerized system 150 shown in FIG. 01A and mapped in FIG. 2, may be implemented in accordance with available computational resources and operational requirements, according to an embodiment of the invention.

[0109] Suitable computing equipment and methods, such as machine learning algorithms for graph problems using distributed computing platforms can process the computerized workflow 100, according to an embodiment of the invention.

[0110] One exemplary implementation of the computerized workflow 100 is provided below. When applying the exemplary computerized workflow 100 to deploying the control plane for Software Defined Networks, the computerized processing entities are network controllers. According to the mechanisms of the infrastructure, there are two ways to organize the control plane: in-band control and out-of-band control.

[0111] With out-of-band control, all the computerized processing entities communicate with each other using dedicated networks (e.g., dedicated networks running in remote clouds), according to an embodiment of the invention. In contrast, with in-band control, both computerized processing entity and electronic aggregator traffic share the same network. Embodiments of the computerized workflow 100 may be configured to support both cases, although the out-of-band case may additionally require that the paths for the inter-processing-entity traffic F_c be limited to the computerized control network. This limitation is already implicitly included in the definition of the set K_f, which is defined as the set of all the possible paths for a certain flow f. A possible path for a flow f ∈ F_c in the out-of-band case can only be selected among data transmission links belonging to the control network.

[0112] Differences to the Known Prior Art

[0113] Some prior art comprises a heuristic computerized processing entity placement method but provides no considerations regarding control plane routability and reliability.

[0114] Other prior art focuses on the placement of SDN switches, rather than processing entities. Moreover, this prior art only tries to place a minimum number of SDN-enabled switches in a hybrid SDN network to achieve 100% single-link failure coverage. This prior art does not consider flow demands and link capacity constraints.

[0115] Still other prior art proposes a network controller placement and electronic aggregator association method configured to avoid single link/node failure, without the control plane traffic routability check and delay and backlog check steps.

[0116] Some other prior art describes a method for placing SDN controllers in a way that the delay may be optimized. Nevertheless, this prior art also does not take the control plane traffic routability issue into consideration when performing the placement.

[0117] Still other prior art comprises allocating computing and network resources for a given number of VMs. However, this method does not have proper network flow delay estimations.

[0118] Finally, some prior art comprises a method for allocating processing entities to electronic aggregators. This prior art is similar to the association step 105 shown in FIG. 01B, but this prior art has neither a routability check step nor a placement step.

[0119] To summarize, the prior art differs from embodiments of the present invention in mainly two ways: 1 ) the prior art solves the placement problem with methods significantly different from the invention; and 2) the prior art does not consider and address the reliability, bandwidth and flow delay aspects of a computer-implementable deployment strategy properly.

[0120] Advantages and Applications

[0121] The most prominent feature of various embodiments of the invention is that they can solve problems relevant to several practical applications in large distributed computing systems. Hence, embodiments of the invention are widely applicable to automated resource management problems requiring guaranteed performance and reliability of information flows.

[0122] Examples of applications that benefit from embodiments of the invention are many and include:

• Distributed big data computations, where resource allocation and synchronization of partial results require data transactions under certain performance requirements and reliability guarantees.

• Deployment of logically centralized control planes in Software-Defined Networks, where data transactions for synchronized information updates need to be carried out in line with specified performance requirements and reliability guarantees to maintain information consistency and service availability.

• Self-organization of wired and wireless communication infrastructures (e.g., sensor networks) requiring the selection of multiple cluster heads and the association of nodes and flows following performance and reliability requirements.

• Scaling of virtualized network (“VN”) functions implementing elastic services, which require performance and reliability guarantees of deployed flows to ensure service availability and quality.

• Organization of wireless access networks, associating access points and base stations as electronic aggregators with distributed computerized processing entity instances (computerized processing entities) as part of a distributed evolved packet core.

• Cell association of wireless equipment (electronic aggregators) to dynamically deployed virtual access points or base stations (computerized processing entities), allowing for turning on and off access infrastructure relative to, e.g., changing user concentration, service usage intensity or energy efficiency requirements (note: this assumes a fixed or close to fixed location of transceivers).

[0123] Embodiments of the invention not only provide computerized systems that calculate the worst-case delay, but also provide computerized systems based on mathematical models and theories that effectively guarantee that the affected data flow transmissions in the computerized network will not exceed such calculated worst-case delay.

[0124] Thus, embodiments of the invention may be considered to offer guarantees. In contrast, claims of delay reduction in the prior art are based only on experience or experimental observations and cannot provide any theoretical guarantees. The prior art is not known to offer delay guarantees for all flows in the network.

[0125] Second, the delay in most queuing systems mainly comprises queuing delay and propagation delay. The prior art only considers the propagation delay but not the queuing delay.

[0126] Third, the proposed workflow has the delay and backlog check step together with the other four steps in one iterative process. It is typically not a good idea to run them separately, e.g., running a main iteration with the other four steps to determine mapping and association, and then running a separate iteration which only contains the delay and backlog step to schedule flow paths to meet the worst-case flow delay constraints. This is because, in order to find a flow scheduling that meets the worst-case delay constraints, the optimization procedure may have to alter the mapping and association plans. This is a complicated and non-trivial iterative optimization process.

[0127] Embodiments of the invention may be applied to problems of the same class independently of the specific properties of the underlying computerized network topology and networking conditions, allowing the system operator to specify traffic models, topological properties and end-conditions arbitrarily for execution by the computerized system. Furthermore, each step of the computerized workflow 100 shown in FIG. 01B enables a flexible implementation based on any suitable computerized method of choice (heuristic methods, machine learning, exhaustive search, etc.) and is hence adaptive to the computerized system at hand, in terms of computational capacity, platforms and software.

[0128] Moreover, embodiments of the invention offer flexible usage as an offline tool for analyzing and quantifying the effects of computer-implementable deployment strategies in a networked system, or as an integrated online process in a system implementing automated resource management. For example, embodiments of the invention can be used as a tool by a network operator to analyze and plan different computer-implementable deployment strategies fitted to different service availability scenarios with different requirements on reliability, bandwidth and end-to-end delay (see FIG. 03 and FIG. 04). As an online process, embodiments of the invention may be integrated in computerized systems implementing service elasticity.

[0129] An application example of the computerized workflow 100 shown in FIG. 01 B may comprise a distributed control plane deployment in SDN, according to an embodiment of the invention. This example demonstrates the applicability of embodiments of the invention to the distributed control plane deployment problem, which here encompasses computerized processing entity placement and associated control traffic of a distributed control plane that appears logically centralized. In this context, a computerized processing entity such as 173 in FIG. 01 A corresponds to an SDN controller (or control instance), whereas an electronic aggregator corresponds to a network switch, such as 192 in FIG. 01A. The objective of the computerized workflow 100 here is to produce a computer-implementable deployment plan that identifies: (1 ) the control service infrastructure of processing entities (control instances); (2) the associated electronic aggregator connections; and, (3) the traffic flows relative to specified reliability, bandwidth and flow delay requirements.

[0130] This example presents three major challenges. First, the control instances should preferably be placed in a manner that satisfies the given constraints on reliability. This includes decisions on how many control instances to use, where to place them and how to define their control regions. The computerized processing entity (SDN controller) placement problem in general is NP-hard. Second, the computerized system must verify that the control traffic introduced by a placement solution can be scheduled on the underlying network without overloading any network link. Third, the computerized system, as the electronic data transmission scheduling system 160, must verify that the control traffic flows can be routed in a way that the required delay and backlog bounds hold.

[0131] To solve the first problem, the prior art generally resorts to heuristics to reduce the search space, as the computerized processing entity placement problem in general is NP-hard. The second problem can be modelled as a multi-commodity flow problem for the verification of routability. The third problem is a Mixed Integer Programming (“MIP”) problem, which is NP-complete. This kind of problem may generally be solved by employing random search algorithms.

[0132] The following subsections illustrate how each step of an embodiment of the computerized workflow 100 shown in FIG. 01B may be applied to solve the distributed control plane deployment problem.

[0133] Details of the computerized workflow 100 used in this example

[0134] Recall that embodiments of the invention operate in a workflow comprising six steps, as shown in FIG. 01B. The following subsections outline each step in detail with respect to the control plane deployment problem.

[0135] Mapping (Step 103) as Employed in this Example

[0136] The generality of the optimization process allows for a black-box implementation, expressed in this example using Simulated Annealing for Mapping (“SAM”). In short, the computer-implementable SAM algorithm (for a computer-implementable deployment strategy) executed in the electronic resource planner 164 follows the standard simulated annealing template, except that the SAM algorithm generates a new solution and decreases the temperature T when receiving the redoMapping signal. A computerized method and system for generating a new solution is designed as randomly adding or removing a computerized processing entity based on the current mapping solution.

[0137] In some embodiments of the invention, the electronic resource planner 164 may calculate a computer-implementable cost (energy) function for evaluating a mapping with respect only to bandwidth routability, which may be defined as:

[0138] In other embodiments of the invention, the electronic resource planner 164 may calculate a computer-implementable cost (energy) function for evaluating a mapping with respect to bandwidth and flow delay routability, which may be defined as:

[0139] The electronic bandwidth checker 162 shown in FIG. 01A may calculate λ in the bandwidth verification (step 110), according to an embodiment of the invention. λ indicates whether control traffic is routable (λ > 1) under bandwidth and reliability constraints or not (λ < 1). When bandwidth and reliability constraints are satisfied, the cost function calculated in the electronic resource planner 164 reaches its maximum value 0.

[0140] The electronic flow delay checker 163 shown in FIG. 01A may calculate χ in the delay and backlog bound verification step (step 111), according to an embodiment of the invention. χ indicates whether control traffic is routable (χ > 1) under the delay and backlog constraints or not (χ < 1). When delay, backlog and reliability constraints are satisfied, the cost function calculated in the electronic resource planner 164 reaches its maximum value 0.

[0141] Since directly computing the reliability R(G, j, C) is NP-hard, the computerized approximation method and system in this example is applied for computing instead a lower bound:

[0142] The electronic reliability checker 161 shown in FIG. 01A may be configured to perform the computations immediately above, according to an embodiment of the invention. An algorithm example is provided below, where the transition probability function P defines the probability with which a new mapping will be accepted, and a computer function (the getNextSolution function in Algorithm 1) generates a new mapping by randomly adding, removing or moving a control instance based on the previous mapping:

Algorithm 1 The simulated annealing algorithm for mapping
Input control signal: RedoMapping with inputs C, cost_new
Initialization
1: Choose a set C of controllers from the set V
2: Calculate R_min = min{R(G, j, C), ∀j ∈ V}
3: C_current ← C, T ← T_initial
4: Output R_min, C
Upon control signal <RedoMapping | C, cost_new> do
5:   if T > T_final then
6:     if cost_new ≥ cost_old or P(cost_old, cost_new, T) > random[0, 1) then
7:       C_current ← C, cost_old ← cost_new
8:     end if
9:     C ← getNextSolution(C_current)
10:    Calculate R_min = min{R(G, j, C), ∀j ∈ V}
11:    T ← h·T
12:    Output R_min, C
13:  end if
14: end upon

[0143] The electronic reliability checker 161 employs a computerized approximation method to first compute the set of disjoint paths from an electronic aggregator j to all the processing entities C, noted as K_j. Given the i.i.d. operational probability of links/nodes on each disjoint path, this computerized method calculates the failure probability of each path, noted as F_k, k ∈ K_j. Then, the electronic reliability checker 161 computes the:
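
The exact lower-bound expression is given in the equations of the specification and is not reproduced here; a purely illustrative sketch, assuming the common disjoint-path form R(G, j, C) ≥ 1 − Π_k F_k, is the following:

# Sketch of a disjoint-path reliability lower bound.  The assumed form is
#     R(G, j, C)  >=  1 - prod_k F_k
# where F_k is the failure probability of disjoint path k from aggregator j
# to the set of processing entities C, computed from i.i.d. node/link
# operational probabilities.  The numeric values below are hypothetical.

from math import prod

def path_failure_probability(node_probs, link_probs):
    """F_k = 1 - probability that every node and link on the path is up."""
    return 1.0 - prod(node_probs) * prod(link_probs)

def reliability_lower_bound(disjoint_paths):
    """disjoint_paths: list of (node_probs, link_probs) per disjoint path."""
    failure_product = prod(path_failure_probability(n, l)
                           for n, l in disjoint_paths)
    return 1.0 - failure_product

if __name__ == "__main__":
    # Two hypothetical disjoint paths from aggregator j to controllers in C.
    paths = [([0.999, 0.999], [0.995, 0.995]),
             ([0.999, 0.999, 0.999], [0.99, 0.995, 0.99])]
    print(f"lower bound on R(G, j, C): {reliability_lower_bound(paths):.4f}")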

[0144] Association (Step 105)

[0145] The electronic resource planner 164 employs a computerized algorithm that performs Simulated Annealing for Association (“SAA”) and is similar to SAM, according to an embodiment of the invention. One difference relates to the cost function cost = min(0, λ − 1) (or cost = min(0, χ − 1)) maintained by the electronic resource planner 164. The other difference is that the electronic resource planner 164 may also generate a new solution by a random association of electronic aggregators and processing entities towards obtaining a satisfying solution. This procedure (getNextSolution for association) is exemplified by the computer function in Algorithm 2 below. The association step is executed after the mapping step (step 103) or upon receiving the redoAssociation signal.

Algorithm 2 Procedure of getNextSolution() for association

Input: The set of controllers C. Current association {a_ij | i ∈ C, j ∈ N}. Number of hops between any pair of nodes dist(i, j), i ∈ N, j ∈ N. Let A(c) denote the aggregators associated with controller c.
1: procedure GETNEXTSOLUTION(C, {a_ij | i ∈ C, j ∈ N})
2:   Randomly select a controller i ∈ C that satisfies rest = N − A(i) − C ≠ ∅, where rest denotes the aggregators not associated with i
3:   Compute minDist = min{dist(i, j) | j ∈ rest}
4:   while True
5:     Randomly select an aggregator j ∈ rest
6:     distInv = 1/(dist(i, j) − minDist + 1)
7:     if distInv > random(0, 1) then
8:       Get the current controller i′ of j
9:       Assign a_{i′j} = 0, assign a_{ij} = 1
10:      return {a_ij | i ∈ C, j ∈ N}
11:    end if
12:  end while
13: end procedure
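
A purely illustrative, runnable sketch of the association move of Algorithm 2 follows; the data structures (an association dictionary and a hop-count function dist) are assumptions for illustration only:

# Sketch of the getNextSolution() move for association.  `assoc` maps each
# aggregator to its current controller; `dist(i, j)` is a hop-count function.

import random

def get_next_solution(controllers, aggregators, assoc, dist):
    """Return a new association obtained by re-assigning one aggregator,
    biased toward aggregators close (in hops) to the chosen controller."""
    new_assoc = dict(assoc)
    # Controllers that have at least one aggregator currently assigned elsewhere.
    candidates = [c for c in controllers
                  if any(assoc[a] != c for a in aggregators)]
    if not candidates:
        return new_assoc
    i = random.choice(candidates)
    rest = [a for a in aggregators if assoc[a] != i]
    min_dist = min(dist(i, a) for a in rest)
    while True:
        j = random.choice(rest)
        dist_inv = 1.0 / (dist(i, j) - min_dist + 1)   # prefer nearby aggregators
        if dist_inv > random.random():
            new_assoc[j] = i        # move j from its current controller to i
            return new_assoc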

[0146] Traffic estimation (Step 107)

[0147] The electronic resource planner 164 estimates the bandwidth demands of electronic aggregator-processing entity and processing entity-processing entity flows, according to an embodiment of the invention. Let (s_f, t_f, d_f, b_f) represent the source, sink, demand and burstiness of a flow f, respectively. The objective of the electronic resource planner 164 in this traffic estimation step is to estimate each (d_f, b_f), while s_f and t_f are known from the mapping (step 103) and association (step 105) steps. The result of the traffic estimation step is used as input for the routability check step 109, involving the bandwidth verification step 110 and possibly the delay and backlog verification step 111.

[0148] Since the optimization process performed by the electronic resource planner 164 treats the model of control traffic as an input variable, embodiments of the invention may employ any known computerized traffic model for estimating each (d_f, b_f). For example, embodiments of the invention can perform the modeling with either a simple linear modelling method or advanced machine learning techniques.

[0149] Here is a simple traffic estimation model suitable for the electronic resource planner 164, associated with an embodiment of the invention. First, for computerized demand estimation, assume that the message sizes of an electronic aggregator request and the corresponding computerized processing entity response are T_req = 128 and T_res = 128 bytes, respectively. Furthermore, after dealing with a request, the computerized processing entity instance sends messages of size T_state = 500 bytes to each of the other |C| − 1 control instances, notifying them about the network state changes. Note that the computerized traffic model here is essentially in line with the ONOS traffic model. The electronic resource planner 164 sets message sizes according to various parameters known to an ordinary artisan, but they can also be set arbitrarily. With these parameter settings and given the request rate r_j, j ∈ N, of each electronic aggregator, embodiments of the invention can estimate the flow demand between electronic aggregator j and its associated computerized processing entity as r_j·T_req and r_j·T_res, for the electronic aggregator-processing entity direction and the processing entity-electronic aggregator direction, respectively. A simple computerized linear model may estimate the outgoing inter-control flow demand from computerized processing entity i to another computerized processing entity as T_state·Σ_{j∈N} r_j·a_ij. Second, the computerized methods employed here may estimate the burstiness of a flow f as b_r·d_f, where b_r is a burstiness ratio. We assume the burstiness of a flow is proportional to its demand.
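
A purely illustrative sketch of this simple linear traffic model is given below; the message sizes are the example values from the text, while the data structures and the default burstiness ratio are assumptions for illustration:

# Sketch of the simple linear traffic estimation model described above.

T_REQ, T_RES, T_STATE = 128, 128, 500        # message sizes in bytes

def estimate_flows(assoc, controllers, request_rate, burstiness_ratio=0.1):
    """assoc: {aggregator j: controller i}; request_rate: {j: r_j} (req/s).
    Returns a list of flows (src, dst, demand_bytes_per_s, burstiness)."""
    flows = []
    for j, i in assoc.items():
        r_j = request_rate[j]
        flows.append((j, i, r_j * T_REQ, burstiness_ratio * r_j * T_REQ))
        flows.append((i, j, r_j * T_RES, burstiness_ratio * r_j * T_RES))
    # Inter-controller state updates: controller i notifies every other
    # controller about each request it handles.
    for i in controllers:
        rate_i = sum(request_rate[j] for j, c in assoc.items() if c == i)
        for i2 in controllers:
            if i2 != i:
                d = rate_i * T_STATE
                flows.append((i, i2, d, burstiness_ratio * d))
    return flows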

[0150] Bandwidth verification (Step 110)

[0151] In this computerized step, the electronic bandwidth checker 162 shown in FIG. 01A checks whether all flows can be routed, over all the possible paths, without overloading any data transmission link relative to specified bandwidth constraints. As discussed, the electronic bandwidth checker 162 calculates the routability indicator λ (executing the estλ and FPTAS algorithms). Thus, computerized implementations of this step assume δ(K) = 1, ∀K ∈ K_f, and check whether constraints (5), (6), (7) (provided above) can hold, which may be solved using a computerized linear programming (LP) method. However, solving this routability problem using computerized methods still means dealing with an undesirably large number of variables X(K), which scales up exponentially with the number of vertices and edges of a graph. This issue can be circumvented by formulating in a computer a maximum concurrent flow problem (as (15), (16), (17), (18) suggest below), which is easier to solve and equivalent to the multi-commodity flow problem.

[0152] The electronic bandwidth checker 162 shown in FIG. 01A may calculate the maximum concurrent flow in the network, according to an embodiment of the invention. The fundamental idea of the maximum concurrent flow problem is to keep the capacities of the data transmission links fixed while scaling the injected traffic so that all flows fit in the network. The optimization objective λ reflects the ratio of the traffic that can be routed over the currently injected traffic. If the computerized methods obtain λ > 1, the current traffic is routable, and all data transmission link utilizations are less than one. The computerized interpretation is that more traffic variation can be tolerated with a larger λ.
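
By way of a purely illustrative sketch (and not the FPTAS of Algorithm 3 below), the maximum concurrent flow value λ can be computed for a small topology with an explicit path-based linear program; the tiny example topology, capacities and demand are hypothetical:

# Sketch: path-based maximum concurrent flow solved with an off-the-shelf LP
# solver (scipy).  Illustrates the meaning of lambda only; path enumeration
# does not scale and is exactly what the FPTAS below avoids.

from scipy.optimize import linprog

def simple_paths(adj, src, dst, visited=None):
    """Enumerate loop-free paths from src to dst as lists of edges (u, v)."""
    visited = visited or [src]
    if src == dst:
        yield list(zip(visited[:-1], visited[1:]))
        return
    for nxt in adj.get(src, []):
        if nxt not in visited:
            yield from simple_paths(adj, nxt, dst, visited + [nxt])

def max_concurrent_flow(adj, capacity, flows):
    """capacity: {(u, v): u_e}; flows: list of (source, sink, demand d_f).
    Returns lambda; lambda > 1 means all demands can be routed."""
    paths, owner = [], []
    for f_idx, (s, t, _) in enumerate(flows):
        for p in simple_paths(adj, s, t):
            paths.append(p)
            owner.append(f_idx)
    n = len(paths)
    c = [0.0] * n + [-1.0]                        # maximize lambda (last variable)
    A_ub, b_ub = [], []
    for e, u_e in capacity.items():               # edge capacity: sum X(K) <= u_e
        A_ub.append([1.0 if e in p else 0.0 for p in paths] + [0.0])
        b_ub.append(u_e)
    for f_idx, (_, _, d_f) in enumerate(flows):   # demand: lambda*d_f <= sum X(K)
        row = [-1.0 if owner[k] == f_idx else 0.0 for k in range(n)]
        A_ub.append(row + [d_f])
        b_ub.append(0.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    if not res.success:
        raise RuntimeError(res.message)
    return res.x[-1]

if __name__ == "__main__":
    adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    cap = {("a", "b"): 10, ("b", "d"): 10, ("a", "c"): 10, ("c", "d"): 10}
    print(max_concurrent_flow(adj, cap, [("a", "d", 12)]))   # roughly 1.67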

[0153] The dual of the above maximum concurrent flow problem, as specified in (19)-(20) below, where l_e and z_f are the dual variables of the edge constraints (16) and flow constraints (17), respectively, has a linear number of variables and an exponential number of constraints, according to an embodiment of the invention. This approach allows for a computerized solution to the problem to a desired level of accuracy using a primal-dual algorithm. A computerized method can perform the primal-dual algorithm based on Fully Polynomial Time Approximation Schemes (FPTAS). An embodiment of the invention may implement, e.g., the “Fast Approximation Scheme” (FAS) algorithm by Karakostas [GKA08] as a computerized algorithm, in which case the computerized method can obtain a near-optimal λ, guaranteed to be within a (1+ε) factor of the optimum, within a time complexity of O(ε^−2·|E|^2·log^{O(1)}|E|). The Karakostas [GKA08] FAS algorithm is further detailed in “Faster approximation schemes for fractional multicommodity flow problems,” Transactions on Algorithms (TALG), vol. 4, no. 1, p. 13, 2008, which is incorporated fully herein by reference.

[0154] Another embodiment of the invention used for control plane deployment (or similar problems) may implement a faster variant of FAS based on FPTAS by iterating the computation only through controller nodes rather than through all nodes. This significantly reduces the total number of iterative steps and, hence, the computational complexity, as the number of controllers is substantially smaller than the number of nodes in a computerized network. An example of the FPTAS algorithm is outlined below:

Algorithm 3 The FPTAS algorithm for computing λ
1: D(ℓ) ← …
2: ℓ(e) ← δ
3: R_f ← 0, ∀f ∈ F
4: while D(ℓ) < 1 do  ▷ phase loop
5:   for each node c ∈ C do  ▷ iteration loop
6:     d′(f) = d_f, ∀f ∈ F_c
7:     while D(ℓ) < 1 and d′(f) > 0 for some f ∈ F_c do  ▷ step loop
8:       P_f ← shortest path using ℓ as link weights, ∀f ∈ F_c with d′(f) > 0
9:       ρ(e) = Σ_{f: e∈P_f} d′(f)/u_e is the utilization of e ∈ E
10:      σ ← max(1, max_{e∈∪_f P_f} ρ(e))
11:      Route d_r(f) = d′(f)/σ amount of flow along P_f
16:    end while
17:  end for
18: end while
19: R_f = R_f / log_{1+ε}(1/δ), ∀f ∈ F
20: λ = min_{f∈F}(R_f / d_f)
Output: λ

[0155] The computerized bandwidth verification step (step 110) will ultimately output the optimal value of λ and the corresponding solution X*(K).

[0156] In an embodiment of the invention, the calculations of the electronic bandwidth checker 162 shown in FIG. 01A for the bandwidth verification step (step 110 shown in FIG. 01B) can be accelerated by implementing in the electronic bandwidth checker 162 a computerized algorithm estλ that assesses whether λ > 1. Such a computerized algorithm calculates λ_low and λ_high. If both λ_low ≤ 1 and λ_high ≥ 1, then a computerized FPTAS-based algorithm is executed to calculate λ. Otherwise, λ = (λ_low + λ_high)/2.0 is returned without execution of a computerized FPTAS-based algorithm. An example algorithm is provided below:

Algorithm 4 The estλ algorithm
1: Calculate λ_low, λ_high
2: if λ_high < 1.0 or λ_low > 1.0 then
3:   λ ← (λ_high + λ_low)/2.0
4: else
5:   Compute λ with the FPTAS algorithm described in Algorithm 3
6: end if
Output: λ

[0157] The combination of the computerized FPTAS algorithm, based on a faster modified variant of FAS, and the computerized estλ algorithm for assessing whether λ > 1 may reduce the running time of the bandwidth verification step 110 by a factor of roughly 50x compared to the prior art implementation of FAS. In practice, this means that for large topologies the running time is reduced from days or hours down to minutes and seconds.

[0158] FIG. 01C illustrates the optimization time with different implementations of the bandwidth verification when Simulated Annealing (denoted AA) is used for mapping and association, which are suitable for application with embodiments of the invention. The network topologies, taken from “The Internet Topology Zoo” [Online], available: http://www.topology-zoo.org/, are depicted on the x-axis and arranged in order of increasing size.

[0159] As shown in FIG. 01C, when combining simulated annealing (AA) for the mapping and association steps with our estλ algorithm calling the FPTAS algorithm in one embodiment of the invention, the running time under all topologies is significantly reduced compared to using FAS for large networks. In another embodiment of the invention, the mapping and the association steps may be implemented using the FTCP algorithm described in the prior art “F. J. Ros and P. M. Ruiz, ‘On reliable controller placements in software-defined networks,’ Computer Communications, vol. 77, pp. 41-51, 2016” in combination with the estλ algorithm.

[0160] FIG. 01D illustrates the optimization time with different implementations of the bandwidth verification when the FTCP heuristic (FS) is used for mapping and association, which are suitable with embodiments of the invention. The network topologies, taken from “The Internet Topology Zoo” [Online], available: http://www.topology-zoo.org/, are depicted on the x-axis and arranged in order of increasing size.

[0161] FIG. 01D illustrates a significant reduction in running time when FTCP (denoted FS) is combined with estλ or FAS, respectively.

[0162] Delay and backlog bound verification (Step 111)

[0163] The electronic flow delay checker 163 calculates the routability indicator χ with respect to delay. The computerized delay and backlog bound verification step (step 111) follows the bandwidth verification step (step 110). If λ < 1, the computerized method returns with χ = λ. Otherwise, the electronic flow delay checker 163 shown in FIG. 01A may engage a computerized Genetic Algorithm or a Column Generation Heuristic Algorithm to determine whether there is a routing solution that satisfies the delay and backlog constraints, according to an embodiment of the invention. An example algorithm of the delay and backlog verification step 111, based on CGH, is shown below:

Algorithm 5 Delay and backlog verification algorithm based on column generation intuition
Initialization
1: Using L_max/u_e + t_e as edge weights, compute K* = {K_f | f ∈ F}, where K_f is the shortest path for flow f under the edge weights
2: iter = 0
3: Relax the binary variable δ_K ∈ {0, 1} to δ_K ∈ [0, 1]
4: Set λ_old = 0
5: while (iter < MAXITERATIONS and Neg ≠ ∅) do
6:   Solve the optimization problem with the set of paths K*, get λ
7:   if λ − λ_old ≤ 0 then
8:     Break
9:   end if
10:  λ_old = λ
11:  Calculate the duals D_E = {l_e | e ∈ E} of constraint (45)
12:  Calculate the duals D_F = {z_f | f ∈ F} of constraint (47)
13:  With l_e as the edge weights, compute P = {P_f, dist_f | ∀f ∈ F}, where P_f is the shortest path for flow f, and dist_f is the distance of the path
14:  Neg = {}
15:  for f in F do
16:    res = dist_f − z_f
17:    if res < 0 then
18:      if the delay bound along P_f ≤ D_max and the backlog bound along P_f ≤ B_max then
19:        Add f to Neg
20:      end if
21:    end if
22:  end for
23:  Select P_f such that dist_f = min({dist_f | ∀f ∈ Neg})
24:  Add P_f to K*
25: end while
26: Change the relaxed variables δ_K back to binary
27: Solve the optimization problem with the set of paths K*, get λ
Output: λ

[0164] An embodiment of a suitable computerized Genetic Algorithm uses {X_f(κ), ∀f ∈ F} as the genes. The computerized Genetic Algorithm uses the solution {X*(κ)} produced by the bandwidth verification step (step 110) to initialize its population (the initial population is generated by random permutations of {X*(κ)}). The computerized Genetic Algorithm calculates the fitness of each individual as fitness = min(Dmax/D*, Bmax/B*), with D* and B* denoting the delay and backlog bound produced by the individual. After running a certain number of generations, the computerized Genetic Algorithm outputs χ = max(fitnesses).
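By way of a non-limiting illustration, the Python sketch below outlines how such a Genetic Algorithm could be organized. It assumes a caller-supplied helper bounds(individual) that evaluates the worst-case delay D* and backlog B* of a candidate rate allocation (for instance via the network calculus relations discussed below); the helper name, population size and mutation scheme are illustrative assumptions and not part of the description.

import random

def ga_delay_backlog_verification(x_star, bounds, d_max, b_max,
                                  population_size=50, generations=100):
    # Seed the population with random permutations of the bandwidth-step solution.
    population = [random.sample(x_star, len(x_star)) for _ in range(population_size)]

    def fitness(individual):
        d_star, b_star = bounds(individual)          # worst-case delay / backlog of this allocation
        return min(d_max / d_star, b_max / b_star)   # >= 1 means both bounds hold

    best = max(fitness(ind) for ind in population)
    for _ in range(generations):
        # Keep the fitter half and refill with freshly permuted copies (simple elitism).
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        children = [random.sample(ind, len(ind)) for ind in survivors]
        population = survivors + children
        best = max(best, fitness(population[0]))
    return best   # the routability indicator chi

A returned value of at least 1 indicates that an individual satisfying both the delay and the backlog bound was found.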

[0165] Another embodiment of a suitable computerized Column Generation Heuristic (CGH) Algorithm formulates the delay and backlog verification problem as an optimal flow scheduling problem and checks whether it is possible to schedule all the flows under bandwidth, delay and backlog constraints. However, instead of considering all the possible paths K when scheduling flows, the CGH algorithm uses a certain heuristic to select a small subset of paths κ' ⊆ K. The key of CGH is to split the problem into a master problem and a subproblem. The master problem is here the maximum concurrent flow problem but for a subset of the paths κ' ⊆ K. The subproblem based on (20) is created to identify new variables that can improve the objective function of the master problem. The heuristic implemented by CGH comprises three steps, iterated until no new variables can be found: 1) solving the master problem with a subset of the variables and obtaining the values of the duals; 2) considering the subproblem with the dual values and finding the potential new variable; and 3) repeating step 1) with the new variables that have been added to the subset.

This allows us to optimize flow scheduling only within this subset of paths rather than considering the whole set of paths. By doing this, the search space can be greatly reduced, which can dramatically reduce the running time by a factor of 500x for uniform traffic patterns or 1000x for control traffic patterns compared to using MILP solvers such as CPLEX to directly solve the problem, as shown in FIG. 01E.
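By way of example only, the loop below sketches this master/subproblem iteration in Python. It assumes two caller-supplied helpers: solve_master(paths), which solves the relaxed maximum concurrent flow LP over the current path subset and returns (lambda, edge duals, flow duals), and price_paths(edge_duals, flow_duals), which returns candidate (path, distance) pairs with negative reduced cost that do not already violate the delay and backlog limits. Both helpers and their signatures are assumptions made for the sketch, not interfaces defined in the description.

def cgh_verification(initial_paths, solve_master, price_paths, max_iterations=500):
    paths = list(initial_paths)        # K*: one shortest path per flow to start with
    lam_old = 0.0
    for _ in range(max_iterations):
        # Master problem: relaxed maximum concurrent flow LP over the current paths.
        lam, edge_duals, flow_duals = solve_master(paths)
        if lam - lam_old <= 0:         # objective no longer improves (steps 7-8)
            break
        lam_old = lam
        # Pricing subproblem: shortest paths under the edge duals whose reduced
        # cost is negative and which can still respect Dmax and Bmax.
        candidates = price_paths(edge_duals, flow_duals)
        if not candidates:             # no new column can improve the master problem
            break
        best_path, _ = min(candidates, key=lambda c: c[1])   # smallest distance (step 23)
        paths.append(best_path)
    # A final pass (steps 26-27) would restore the binary path-selection
    # variables and re-solve the verification problem over the selected subset.
    return paths, lam_old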

[0166] FIG. 01E illustrates a representative time reduction ratio when comparing the CGH algorithm with CPLEX for network topologies of increasing size for uniform traffic (left) and control traffic patterns (right), respectively, suited for embodiments of the invention. The network topologies are from "The internet topology zoo. [Online] Available: http://www.topology-zoo.org/".

[0167] In one embodiment of the invention implementing the CGH, we observe in FIG. 01E the aforementioned reduction for various topologies of increasing size.

[0168] Executing the entire process 100 as illustrated in FIG. 01B in an embodiment of the invention can, for a mid-sized network topology, yield a deployment plan within seconds, as illustrated in FIG. 01F. In an embodiment of the invention, applying CGH rather than CPLEX can lead to a reduction in running time by 80x in comparison.

[0169] FIG. 01F illustrates the total running time for calculating a deployment plan for combinations of algorithms implementing various embodiments of the invention. The network topology "InternetMCI" is available at "The internet topology zoo. [Online] Available: http://www.topology-zoo.org/".

[0170] Exemplary Use Cases Employing Embodiments of the Invention

[0171] Embodiments of the invention described herein can be applied in at least the following two use cases.

[0172] Case 1: Computer-implementable deployment strategy output

[0173] FIG. 02 illustrates a computerized processing entity deployment strategy operating under certain constraints and a given network topology 200, according to an embodiment of the invention. FIG. 02 further illustrates a corresponding deployment plan employing computerized processing entity instances (201-204) of the computerized processing entity instances 201-219 and an association plan (denoted as Node ID/Associated Processing entity ID, e.g., for the computerized processing entity instance 205, the Node ID is 1 and the Associated Processing entity ID is 13, as shown in FIG. 02), in an instance when the minimum required reserved bandwidth is 36 Mbits/s per data transmission link, given the reliability threshold b = 0.99999 and the requirement Rmin > b.

[0174] The network topology 200 is taken from the Internet Topology Zoo (ITZ), known as "Internetmci". For simplicity, the computerized method may assume an in-band control scheme such that M = N = T, according to an embodiment of the invention.

[0175] The electronic data transmission scheduling system 160 may further assume that each of the nodes 201-219 holds an electronic aggregator having a request rate of 500 requests/s, that link propagation delays range over [1 ms, 10 ms], and that the operational probability of each node 201-219, data transmission link and computerized processing entity is 0.9999. Given reliability, bandwidth and backlog constraints of (b = 0.9999, u_e = 400 Mbits/s, Bmax = 10 MBytes), embodiments of the invention may output a computerized processing entity deployment solution that ensures a worst-case flow delay of 180 ms, as shown in FIG. 02.

[0176] Case 2: Analytic tool

[0177] Embodiments of the invention may provide an electronic data transmission scheduling system 160 that provides a computerized analytic tool, possibly in the form of a physical simulator, configured to systematically assess the impact of a given computer-implementable deployment strategy relative to reliability, bandwidth and flow delay. For example, embodiments of the electronic data transmission scheduling system 160 can quantify the influence on the achieved maximum reliability relative to an increasing data transmission link bandwidth constraint (see FIG. 03). In this embodiment, with the flow delay and backlog constraints fixed, by varying the bandwidth of the data transmission links in a network, the computer-implemented method can test how good the deployment solutions found by the workflow 100 shown in FIG. 01B are in terms of reliability. The electronic data transmission scheduling system 160 may detect that when scaling up the data transmission link bandwidths in a network topology, the failure probability decreases, and hence Rmin increases. This example illustrates how service providers and operators can use embodiments of the invention as a practical tool for quantifying the trade-off between bandwidth and reliability. It may help them to make decisions such as how much investment should be put into upgrading the network infrastructure, and what the expected gain in reliability is.
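As a purely illustrative sketch, such a bandwidth-versus-reliability study could be scripted as a simple sweep. The plan_deployment() function and the max_reliability attribute below stand in for the whole workflow 100 and are hypothetical, not interfaces defined in the description.

def reliability_vs_bandwidth(topology, bandwidths_mbps, d_max, b_max, plan_deployment):
    # For each candidate per-link bandwidth, compute a deployment plan under the
    # fixed delay and backlog constraints and record the best reliability achieved.
    results = {}
    for u_e in bandwidths_mbps:
        plan = plan_deployment(topology, link_bandwidth=u_e,
                               delay_bound=d_max, backlog_bound=b_max)
        results[u_e] = plan.max_reliability   # Rmin achievable at this bandwidth
    return results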

[0178] FIG. 03 provides a graph of failure probability versus link bandwidth that the electronic data transmission scheduling system 160 may use to determine the minimum data transmission link bandwidths that a worst-case-delay-guaranteed deployment solution needs.

[0179] FIG. 04B provides a graph that quantifies the influence on the achieved reliability relative to the worst-case delay constraints of control traffic flows, according to an embodiment of the invention. Given fixed bandwidth and backlog constraints, and with different delay bound requirements, a computerized system can determine the maximum reliability a deployment solution can achieve.

[0180] As the electronic data transmission scheduling system 160 reduces the delay bound, the required bandwidth for guaranteeing such a delay bound increases, as shown in FIG. 04A. The graph shown in FIG. 04A illustrates how service providers and operators can employ embodiments of the invention as a practical tool for quantifying the trade-off between the flow delay they want to guarantee and the bandwidths of the network infrastructure they should have, enabling the development of flexible and fine-tuned computerized processing entity deployment policies.

[0181] Example #2: Deployment of VMs For a Cloud Computing Task

[0182] Deploying and executing a computational task in a distributed big data framework and architecture (e.g., Apache Spark or similar) is essentially identical to deploying a distributed control plane. Such an architecture generally allows for distributing a computation task within or across geo-distributed data centers. Within the context of a cloud computing application of a distributed cloud computing architecture, a manager entity, such as a computerized management function within data transmission node 171 shown in FIG. 01A, carries out computational tasks by controlling one or many computerized worker entities. Each computerized worker entity (e.g., switches 192) may host one or several computerized distributed agents, each of which may handle one or several computation tasks.

[0183] In this context, the electronic data transmission scheduling system 160 may map the deployment of a distributed computational infrastructure (or application) by deciding the distributed set of managers (the number and location in a network topology) and the association of computerized worker entities, according to an embodiment of the invention. In the cases when the framework (or similar) allows for it, embodiments of the invention could also be applied to associate worker entities (e.g., switches like 192) to distributed agents running on external nodes.

[0184] Such a deployment may be viewed as an abstract overlay representing the intercommunication necessary to solve a big data problem in a distributed manner and with guaranteed network reliability and performance. Embodiments of the invention may thus be used to deploy entities for executing cloud computations while accounting for reliability, bandwidth and flow delay, such that the requirements in critical cloud computing applications (e.g., to support decision-making in real-time services) can be adequately met.

[0185] Note that embodiments of the invention focus on the network I/O constraints of a computing infrastructure and generally not on the computational resources of each computational entity. To account for computational resources as well, additional constraints such as the computation capability of a computerized worker would need to be considered, which decides how many executors can co-exist on a worker or on another host.

[0186] Application Example #3: Deployment of a VNF Service Chain

[0187] Embodiments of the invention may be used to deploy service chains comprising a series of connected VNF instances. In this case, embodiments of the invention, namely the electronic data transmission scheduling system 160, essentially deploy computerized processing entities (e.g., data transmission nodes 171) (corresponding to the VNFs) without associating any electronic aggregator nodes - consequently, the reliability requirement becomes irrelevant.

[0188] Hence, computerized processing associated with this embodiment of the invention is simplified to the placement of a given number of VNFs on a network topology while accounting only for bandwidth and flow delay requirements. In practice this is achieved by setting the reliability R(G, j, C) = 1 and assuming N = ∅. To solve the deployment problem, a computerized system executes the steps of Workflow 100 shown in FIG. 01B, except for the "Association" step (step 105 shown in FIG. 01B), according to an embodiment of the invention.

[0189] Additional Considerations Regarding Network Calculus

[0190] FIG. 05 illustrates an example of graphical computation of flow delay bound and backlog bound using network calculus concepts, according to an embodiment of the invention. The flow delay and backlog bounds respectively correspond to the maximum horizontal and vertical deviations between the arrival and service curves.

[0191] Network calculus ("NC") comprises a system theory for communication networks. An NC system can range from a simple queue to a complete network. With the models of a considered flow and of the service a so-called system can offer, three bounds can be calculated by network calculus, which are (i) the delay bound the flow will experience traversing the system, (ii) the backlog bound the flow will generate in the system, and (iii) the output bound of the flow after it has passed the system. The theory is divided into two parts: deterministic network calculus, providing deterministic bounds, and stochastic network calculus, providing bounds following probabilistic distributions. The discussion here only considers the former.

[0192] As an explanation for the motivation behind the optimization formulation (discussed herein) that is behind the flow delay checker 163, note that the arrival curve 501 and the service curve 503 are the two key concepts in network calculus. Among other things, the electronic flow delay checker 163 shown in FIG. 01A employs the curve 501 and the curve 503 in optimizing delay and backlog bounds. The arrival curve α(t) 501 defines the upper bound of the injected traffic during any time interval. Suppose the total amount of data a flow sends during any time interval [t1, t2] is R(t2) − R(t1), where R(t) is the cumulative traffic function which defines the traffic volume coming from the flow within the time interval [0, t]. A wide-sense increasing function α(t) is the arrival curve of the flow if, ∀t1, t2 with 0 ≤ t1 ≤ t2, it satisfies

R(t2) − R(t1) ≤ α(t2 − t1) (19)

[0193] In practice, a linear arrival curve is widely used to characterize the flow injection in a network. The modeling of a flow f uses a so-called linear arrival curve α_f(t) = d_f·t + b_f, where d_f is the sustainable arrival rate and b_f is the burstiness. The linear arrival curve 501 shown in FIG. 05 can be interpreted as meaning that the flow can send bursts of up to b_f bytes but its sustainable rate is limited to d_f B/s. A flow can be enforced to conform to the linear arrival curve 501 by applying regulators, for example, leaky buckets.
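For illustration only, the minimal token-bucket regulator below (one common realization of the leaky-bucket idea) enforces the linear arrival curve α_f(t) = d_f·t + b_f; timestamps are assumed to be in seconds and packet sizes in the same unit as the burst b_f. The class and method names are illustrative, not part of the description.

class TokenBucket:
    """Admit a packet only if the cumulative traffic stays below d*t + b."""

    def __init__(self, rate_d, burst_b):
        self.rate = rate_d       # sustainable rate d (e.g. bytes/s)
        self.burst = burst_b     # burstiness b (e.g. bytes)
        self.tokens = burst_b    # bucket starts full
        self.last = 0.0          # time of the previous packet

    def conforms(self, now, size):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + self.rate * (now - self.last))
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True          # packet conforms to the arrival curve
        return False             # packet would exceed d*t + b and must be held back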

[0194] The service curve γ(t) 503 models a network element, e.g. a switch or a channel, expressing its service capability. Its general interpretation is less trivial than for the arrival curve 501. The service curve 503 shown in FIG. 05 can be interpreted as meaning that the data of a flow has to wait up to T seconds before being served at a rate of at least r B/s. This type of service curve is denoted by γ_{r,T} and is referred to as a rate-latency service curve.

[0195] From the two curves 501, 503, the three above-mentioned bounds can be calculated by the electronic flow delay checker 163, as shown in FIG. 05. The flow delay bound is graphically the maximum horizontal deviation between α(t) and γ(t). As shown in FIG. 05, the backlog bound corresponds to the largest vertical distance between α(t) and γ(t). Considering the case with the linear arrival curve 501 and the rate-latency service curve 503, the flow delay bound can be calculated as

D* = T + b/r (20)

and the backlog bound as

B* = b + T·d. (21)

[0196] Thus, D* and B* relate to each other as shown by equations (20), (21).
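By way of a non-limiting numeric illustration, the snippet below evaluates equations (20) and (21) for a flow constrained by a linear arrival curve with rate d and burst b, served by a rate-latency curve with rate r and latency T; the numbers are arbitrary and only meant to show the computation.

def delay_backlog_bounds(d, b, r, T):
    # Valid when the service rate covers the sustainable arrival rate (r >= d).
    assert r >= d, "service rate must be at least the sustainable arrival rate"
    delay_bound = T + b / r        # equation (20): maximum horizontal deviation
    backlog_bound = b + T * d      # equation (21): maximum vertical deviation
    return delay_bound, backlog_bound

# Example: burst b = 1 Mbit, sustainable rate d = 10 Mbit/s,
# service rate r = 40 Mbit/s, latency T = 2 ms.
print(delay_backlog_bounds(d=10e6, b=1e6, r=40e6, T=0.002))
# -> (0.027, 1020000.0): a 27 ms delay bound and about 1.02 Mbit of backlog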

[0197] In the general case, the computation of the output bound α*, the arrival curve of the flow after it has traversed the system, is calculated by min-plus deconvolution between α(t) and γ(t), as α* = α ⊘ γ.

[0198] The computation is not straightforward, since it involves the deconvolution operation defined by min-plus algebra. However, in the case that a flow is modelled by a linear arrival curve and served by a rate-latency service curve, one simply has α*(t) = d·t + b + d·T, i.e. a linear arrival curve with the same sustainable rate d and a burstiness increased to b + d·T.

[0199] Concatenation Property of Network Calculus

[0200] Assume a flow traverses two systems that provide service curves γ1 and γ2, respectively. Then, the Equivalent Service Curve (ESC) offered by the concatenated system is calculated by min-plus convolution between γ1 and γ2, as

γ = γ1 ⊗ γ2 (23)

[0201] In particular, if both γ1 and γ2 are rate-latency service curves, e.g. γ1 = γ_{r1,T1} and γ2 = γ_{r2,T2}, we simply have γ = γ1 ⊗ γ2 = γ_{min(r1,r2), T1+T2}. A computerized system can thus deduce the ESC for a given flow that traverses multiple network elements (e.g. switches) in a system by applying the concatenation property.
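For illustration only, the concatenation property for rate-latency service curves reduces to taking the minimum rate and summing the latencies along the path, as in the following sketch; the per-hop values are made up.

def equivalent_service_curve(hops):
    """hops: iterable of (rate, latency) pairs, one rate-latency curve per element."""
    rates, latencies = zip(*hops)
    # gamma_1 convolved with gamma_2 ... collapses to gamma_{min(r_i), sum(T_i)}
    return min(rates), sum(latencies)

# Example: three switches offering (100 Mbit/s, 1 ms), (40 Mbit/s, 2 ms), (100 Mbit/s, 1 ms).
print(equivalent_service_curve([(100e6, 0.001), (40e6, 0.002), (100e6, 0.001)]))
# -> (40000000.0, 0.004): the path behaves like a 40 Mbit/s server with 4 ms latency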

[0202] Applying Network Calculus for Calculating the Delay Bound of a Deployment Plan

[0203] A computerized system such as the electronic flow delay checker 163 can apply Network Calculus in computing the delay and backlog verification step (step 111 shown in FIG. 01B), as follows: 1. Assume that each computerized node (and/or application specific hardware) uses a certain guaranteed performance service discipline to schedule flows that share a data transmission link. How to handle different guaranteed performance service disciplines and their corresponding service curves is known to those skilled in the art. Generally speaking, such a service curve is a function of the reserved guaranteed service rate (X(κ)) of the flow.

2. A path can traverse several nodes and data transmission links and can be viewed as a concatenation of systems. Thus, given the reserved service curve of a flow at each of such nodes, by applying the concatenation property, a computerized system can derive the ESC of the entire path for a target flow by applying equation (23). In general, the ESC of the path is still a function of the reserved guaranteed service rate (X(κ)) of the flow. However, X(κ) needs to satisfy constraints (5), (6), (7), (9), (11).

3. A computerized system such as the electronic flow delay checker 163 can split and route a flow on a list of selected paths κ'. Suppose the arrival curve of a flow f is α_f(t) = d_f·t + b_f. After splitting, the computerized system such as the electronic flow delay checker 163 obtains a number of sub-flows {f(κ)}, with each one having the arrival curve α_f(κ)(t) = d_f(κ)·t + b_f(κ). A computerized system such as the electronic flow delay checker 163 may assume b_f(κ) = b_f by considering the worst case. The computerized system such as the electronic flow delay checker 163 also has Σ_{κ∈κ'} d_f(κ) = d_f (constraints (8)), indicating that the summation of all the sub-flow demands should be equal to the demand of their aggregated flow.

4. Now, given the arrival curve of a sub-flow and the corresponding service curve of the path for routing the sub-flow, the computerized system such as the electronic flow delay checker 163 can calculate the delay bound (D*) and backlog bound (B*) for the sub-flow with equations (20), (21), respectively. Such bounds should satisfy the delay and backlog constraints (constraints (9), (10)).
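As a purely illustrative sketch, the steps above can be combined into a single per-sub-flow check as below. The split of the demand d_f over the selected paths and the per-hop (rate, latency) service curves are illustrative inputs; following the worst-case assumption, the full burst b_f is kept on every sub-flow.

def subflows_within_bounds(b_f, demand_split, path_hops, d_max, b_max):
    """demand_split: {path_id: d_f(kappa)}; path_hops: {path_id: [(rate, latency), ...]}."""
    for path_id, d_kappa in demand_split.items():
        rates, latencies = zip(*path_hops[path_id])
        r, T = min(rates), sum(latencies)   # ESC of the path (concatenation property)
        d_star = T + b_f / r                # per-sub-flow delay bound, equation (20)
        b_star = b_f + T * d_kappa          # per-sub-flow backlog bound, equation (21)
        if d_star > d_max or b_star > b_max:
            return False                    # constraints (9) or (10) would be violated
    return True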

[0204] One note regarding how burstiness relates to χ: the burstiness of a flow is calculated in the electronic resource planner 164 in the Traffic Estimation step according to the definition of the linear arrival curve mentioned above. The burstiness directly impacts the worst-case flow delay (D*) of a flow, see equation (20).

[0205] As noted above, in some embodiments of the invention and under certain circumstances, queuing theory, for example, may also be used for calculating a stochastic delay bound instead of Network Calculus.

[0206] Introduction of the column generation heuristic algorithm for delay and backlog verification.

[0207] The delay and backlog verification problem can be formulated as the Mixed Integer Linear Programming (MILP) problem specified in (24)-(33). By solving the optimization problem, if we obtain χ ≥ 1, it indicates that a feasible routing solution satisfying the bandwidth, delay and backlog constraints has been found. The number of binary variables δ(κ) in the problem could potentially be very large, since it equals the number of possible paths in a network topology, which increases exponentially with the size of the network topology. The problem maximizes χ (objective (24)) subject to the constraints (25)-(33).

[0208] To solve such a large-scale optimization problem, embodiments of the invention may perform computerized column generation in the electronic flow delay checker 163. The key idea of column generation is to split the original problem into two problems: a master problem and a subproblem. The master problem is the original problem but with only a subset of the variables being considered. The subproblem is created to find a new variable that could potentially improve the objective function of the master problem. Usually, the dual of the original problem is used in the subproblem with the purpose of identifying new variables. The kernel of the column generation method defines the following iterative process: 1) solving the master problem with a subset of the variables and obtaining the values of the duals, 2) considering the subproblem with the dual values and finding the potential new variable, and 3) repeating step 1) with the new variables that have been added to the subset. The whole process is repeated until no new variables can be found.

[0209] For the delay and backlog verification problem, intuitively, if the computerized system ignores the constraints on delay and backlog, the optimization problem is reduced to a maximum concurrent flow problem as formulated in (15)-(18), and the dual problem is specified in (19)-(23). With the column generation method, the master problem is the maximum concurrent flow problem (specified in (15)-(18)) but with a subset of the paths κ' ⊆ K. The corresponding subproblem can use (20) to identify the new potential paths: the potential paths are the ones that violate (20).

[0210] However, because of the delay and backlog constraints (9) and (10), the new potential path variables need to additionally obey the delay and backlog limits, i.e. the path delay bound must not exceed D_max and the path backlog bound must not exceed B_max. Otherwise, constraints (9) and (10) will certainly be violated if they are chosen for routing flows (δ(κ) = 1).

[0211] FIG. 06 illustrates a pseudo-algorithm that helps ensure that the potential path variables satisfy the delay and backlog constraints (9) and (10) while satisfying other appropriate conditions, according to an embodiment of the invention.

[0212] As shown in step 603, the algorithm run in the electronic flow delay checker 163 relaxes the delay and backlog verification problem (which is a MILP) to a linear programming (LP) problem by relaxing the binary variable to [0, 1].

[0213] As shown in step 605, the electronic flow delay checker 163 initiates with a subset of paths κ'. For example, the electronic flow delay checker 163 can initiate the subset by using the link propagation delays t_e as edge weights and providing the corresponding shortest paths as κ'.

[0214] As shown in steps 607-621, the electronic flow delay checker 163 then begins a repetitive process for adding new paths:

1) solve the relaxed master problem and obtain the values of the duals of the constraints imposed on edges and flows (step 607);

2) check whether any paths violate the dual constraint (20) (step 609); if yes, add the violating paths into a set P (step 611);

3) remove the paths from P that would certainly violate delay and backlog constraints (step 613);

4) check (step 615) if there are remaining paths in P. If yes (step 617), add them to the subset κ' and repeat the process again from 1) (step 605).

[0215] Note that if the answer at step 609 or step 615 is no, it means that no new paths can be found. In this case, we restrict the variable back to binary values {0, 1} (step 619) and solve the delay and backlog verification problem with the subset of paths κ' found by the iterative process (step 621).

[0216] Note also that at step 609, when checking whether existing paths violate (20), the electronic flow delay checker 163 does not need to iterate over all the possible paths. The electronic flow delay checker 163 can simply use l_e as edge weights and use Dijkstra's Shortest Path First algorithm to obtain all the shortest paths under l_e. If all the shortest paths satisfy (20) (that is, the distance of each shortest path is not smaller than z_f), then any path κ ∈ K must have Σ_{e∈κ} l_e ≥ z_f (20). Otherwise, the electronic flow delay checker 163 just adds the shortest paths that violate (20) to P, as step 2) suggests.
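By way of example only, this shortest-path check can be expressed with an off-the-shelf Dijkstra implementation as sketched below, assuming a networkx graph whose edges carry the dual values l_e in an attribute named "dual" and a dictionary z of flow duals keyed by (source, destination); the attribute and variable names are assumptions made for the sketch.

import networkx as nx

def violating_shortest_paths(graph, z):
    """Return {(src, dst): (path, distance)} for flows whose shortest path under
    the dual edge weights is shorter than the flow dual z_f, i.e. violates (20)."""
    violating = {}
    for (src, dst), z_f in z.items():
        # Shortest path from src to dst using the dual values l_e as edge weights.
        dist, path = nx.single_source_dijkstra(graph, src, dst, weight="dual")
        if dist < z_f:              # a cheaper path exists: candidate new column for P
            violating[(src, dst)] = (path, dist)
    return violating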

[0217] Naturally, embodiments of the invention may employ MILP solvers, such as CPLEX, in the electronic flow delay checker 163 to directly solve the delay and backlog verification problem (24)-(33) (step 621). The inventors refer to such an approach as a CPLEX-direct method. However, using a CPLEX-direct method is very time consuming, since it needs to consider all the possible flow paths, the number of which increases exponentially with the size of the considered network topology. In comparison, the CGH algorithm described here uses far fewer paths. Usually, even for large network topologies, the iteration process terminates within a few hundred rounds. Therefore, the number of paths considered in κ' can be thousands of times smaller than the number of paths used by the CPLEX-direct method, resulting in a much shorter running time. Note that this computerized method is a heuristic algorithm, which means that there is no guarantee that the computerized method finds the optimal result or approximates the optimal result. In one implementation of the algorithm, a running time reduction at the level of 500x (or more) over the CPLEX-direct method can be achieved, while yielding results nearly as optimal as those of the CPLEX-direct method.

[0218] Further modifications of the invention within the scope of the appended claims are feasible. As such, the present invention should not be considered as limited by the embodiments and figures described herein. Rather, the full scope of the invention should be determined by the appended claims, with reference to the description and drawings.

[0219] Various embodiments of the invention have been described in detail with reference to the accompanying drawings. References made to particular examples and implementations are for illustrative purposes and are not intended to limit the scope of the invention or the claims.

[0220] It should be apparent to those skilled in the art that many more modifications of the computerized method and system described here besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except by the scope of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. [0221] Headings and sub-headings provided herein have been provided as an assistance to the reader and are not meant to limit the scope of the invention disclosed herein. Headings and sub-headings are not intended to be the sole or exclusive location for the discussion of a particular topic.

[0222] While specific embodiments of the invention have been illustrated and described, it will be clear that the invention is not limited to these embodiments only. Embodiments of the invention discussed herein may have generally implied the use of materials from certain named equipment manufacturers; however, the invention may be adapted for use with equipment from other sources and manufacturers. Equipment used in conjunction with the invention may be configured to operate according to conventional methods and protocols and/or may be configured to operate according to specialized protocols. Numerous modifications, changes, variations, substitutions and equivalents will be apparent to those skilled in the art without departing from the spirit and scope of the invention as described in the claims. In general, in the following claims, the terms used should not be construed to limit the invention to the specific embodiments disclosed in the specification but should be construed to include all systems and methods that operate under the claims set forth herein below. Thus, it is intended that the invention covers the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

[0223] All publications herein are incorporated by reference to the same extent as if each individual publication or patent application were specifically and individually indicated to be incorporated by reference. Where a definition or use of a term in an incorporated reference is inconsistent or contrary to the definition of that term provided herein, the definition of that term provided herein applies and the definition of that term in the reference does not apply.