

Title:
OPTIMAL DYNAMIC CLOUD NETWORK CONTROL
Document Type and Number:
WIPO Patent Application WO/2017/176542
Kind Code:
A1
Abstract:
Various exemplary embodiments relate to a network node in a distributed dynamic cloud, the node including: a memory; and a processor configured to: observe a local queue backlog at the beginning of a timeslot, for each of a plurality of commodities; compute a processing utility weight for a first commodity based upon the local queue backlog of the first commodity, the local queue backlog of a second commodity, and a processing cost; where the second commodity may be the succeeding commodity in a service chain; compute an optimal commodity using the processing utility weights; wherein the optimal commodity is the commodity with the highest utility weight; assign the number of processing resource units allocated to the timeslot to zero when the processing utility weight of the optimal commodity is less than or equal to zero; and execute processing resource allocation and processing flow rate assignment decisions based upon the optimal commodity.

Inventors:
TULINO ANTONIA (US)
LLORCA JAIME (US)
Application Number:
PCT/US2017/024940
Publication Date:
October 12, 2017
Filing Date:
March 30, 2017
Assignee:
ALCATEL LUCENT USA INC (US)
International Classes:
G06F9/50; H04L29/08
Foreign References:
US20110154327A1, 2011-06-23
Other References:
LI XIN ET AL: "QoS-Aware Service Selection in Geographically Distributed Clouds", 2013 22ND INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATION AND NETWORKS (ICCCN), IEEE, 30 July 2013 (2013-07-30), pages 1 - 5, XP032513109, DOI: 10.1109/ICCCN.2013.6614176
Attorney, Agent or Firm:
SANTEMA, Steven, R. (US)
Claims:
CLAIMS

What is claimed is:

1. A network node in a distributed dynamic cloud network, the node comprising:

a memory; and

a processor configured to:

observe a local queue backlog at the beginning of a timeslot, for each of a plurality of commodities;

compute a processing utility weight for each of a plurality of commodities based upon the local queue backlog of the plurality of commodities, the local queue backlog of another commodity, and a processing cost, wherein the other commodity is the succeeding commodity in a service chain including each of the plurality of commodities;

compute an optimal commodity using the processing utility weights, wherein the optimal commodity is the commodity with the highest utility weight;

assign the number of processing resource units allocated to the timeslot to zero when the processing utility weight of the optimal commodity is less than or equal to zero; and execute processing resource allocation and processing flow rate assignment decisions based upon the optimal commodity.

2. The network node of claim 1, wherein the processor is further configured to:

observe a neighbor queue backlog of a neighbor of the network node at the beginning of a timeslot, for each of a plurality of commodities; and compute a transmission utility weight for each of the plurality of commodities based upon the observed neighbor queue backlog, a local queue backlog of the network node, and a transmission cost.

3. The network node of claim 2, wherein the processor is further configured to: compute an optimal commodity using the transmission utility weights, where the optimal commodity is the commodity with the highest utility weight; and assign the number of transmission resource units allocated to the timeslot to zero when the transmission utility weight is less than or equal to zero.

4. The network node of claim 3, wherein the processor is further configured to: execute transmission resource allocation and transmission flow rate assignment decisions based upon the optimal commodity; and introduce a bias term into the transmission utility weight, where the bias term represents the number of hops and/or the geometric distance to the destination.

5. The network node of any of claims 1 to 4, wherein the processor is further configured to: for each commodity $(d, \phi, m)$ compute the processing utility weight as:

$$W_{i,\mathrm{pr}}^{(d,\phi,m)}(t) = \frac{Q_i^{(d,\phi,m)}(t) - \xi^{(\phi,m+1)} Q_i^{(d,\phi,m+1)}(t)}{r^{(\phi,m+1)}} - V e_i,$$

where the processing utility weight indicates the benefit of executing function $(\phi, m+1)$ to process commodity $(d, \phi, m)$ into commodity $(d, \phi, m+1)$ at time $t$, in terms of the local backlog reduction per processing unit cost; compute the optimal commodity according to:

$$(d, \phi, m)^* = \arg\max_{(d,\phi,m)} W_{i,\mathrm{pr}}^{(d,\phi,m)}(t);$$

assign the number of allocated resource units

$$k^* = \arg\max_{k \in \mathcal{K}_i} \left\{ C_{i,k}\, W_{i,\mathrm{pr}}^{(d,\phi,m)^*}(t) - V w_{i,k} \right\} \text{ when } W_{i,\mathrm{pr}}^{(d,\phi,m)^*}(t) > 0,$$

otherwise assign $k^* = 0$; and

perform the following resource allocation and flow rate assignment decisions:

$$y_{i,k^*}(t) = 1; \quad \mu_{i,\mathrm{pr}}^{(d,\phi,m)^*}(t) = C_{i,k^*} / r^{(\phi,m^*+1)}; \quad \mu_{i,\mathrm{pr}}^{(d,\phi,m)}(t) = 0 \ \text{for all} \ (d,\phi,m) \neq (d,\phi,m)^*.$$
6. A method of optimizing cloud control on a network node in a distributed dynamic cloud, the method comprising: observing a local queue backlog at the beginning of a timeslot, for each of a plurality of commodities; computing a processing utility weight for each of a plurality of commodities based upon the local queue backlog of the plurality of commodities, the local queue backlog of another commodity, and a processing cost, wherein the other commodity is the succeeding commodity in a service chain including each of the plurality of commodities; computing an optimal commodity using the processing utility weights, wherein the optimal commodity is the commodity with the highest utility weight;

assigning the number of processing resource units allocated to the timeslot to zero when the processing utility weight of the optimal commodity is less than or equal to zero; and executing processing resource allocation and processing flow rate assignment decisions based upon the optimal commodity.

7. The method of claim 6, further comprising:

observing a neighbor queue backlog of a neighbor of the network node at the beginning of a timeslot, for each of a plurality of commodities; and

computing a transmission utility weight for each of the plurality of commodities based upon the observed neighbor queue backlog, a local queue backlog of the network node, and a transmission cost.

8. The method of claim 7, further comprising:

computing an optimal commodity using the transmission utility weights, where the optimal commodity is the commodity with the highest utility weight; and

assigning the number of transmission resource units allocated to the timeslot to zero when the transmission utility weight is less than or equal to zero.

9. The method of claim 8, further comprising:

executing transmission resource allocation and transmission flow rate assignment decisions based upon the optimal commodity; and introducing a bias term into the transmission utility weight, where the bias term represents the number of hops and/or the geometric distance to the destination.

10. The method of any of claims 6 to 9, further comprising:

for each commodity $(d, \phi, m)$ computing the processing utility weight as:

$$W_{i,\mathrm{pr}}^{(d,\phi,m)}(t) = \frac{Q_i^{(d,\phi,m)}(t) - \xi^{(\phi,m+1)} Q_i^{(d,\phi,m+1)}(t)}{r^{(\phi,m+1)}} - V e_i,$$

where the processing utility weight indicates the benefit of executing function $(\phi, m+1)$ to process commodity $(d, \phi, m)$ into commodity $(d, \phi, m+1)$ at time $t$, in terms of the local backlog reduction per processing unit cost;

computing the optimal commodity according to:

$$(d, \phi, m)^* = \arg\max_{(d,\phi,m)} W_{i,\mathrm{pr}}^{(d,\phi,m)}(t);$$

assigning $k^* = \arg\max_{k \in \mathcal{K}_i} \{ C_{i,k}\, W_{i,\mathrm{pr}}^{(d,\phi,m)^*}(t) - V w_{i,k} \}$ when $W_{i,\mathrm{pr}}^{(d,\phi,m)^*}(t) > 0$, otherwise assigning $k^* = 0$; and

performing the following resource allocation and flow rate assignment decisions:

$$y_{i,k^*}(t) = 1; \quad \mu_{i,\mathrm{pr}}^{(d,\phi,m)^*}(t) = C_{i,k^*} / r^{(\phi,m^*+1)}; \quad \mu_{i,\mathrm{pr}}^{(d,\phi,m)}(t) = 0 \ \text{for all} \ (d,\phi,m) \neq (d,\phi,m)^*.$$

Description:
OPTIMAL DYNAMIC CLOUD NETWORK CONTROL

BACKGROUND

Distributed cloud networking enables the deployment of network services in the form of interconnected virtual network functions instantiated over general purpose hardware at multiple cloud locations distributed across the network. The service distribution problem may be to find the placement of virtual functions and the routing of network flows that meet a given set of demands with minimum cost.

SUMMARY

In light of the present need for optimal dynamic cloud network control, a brief summary of various exemplary embodiments is presented. Some simplifications and omissions may be made in the following summary, which is intended to highlight and introduce some aspects of the various exemplary embodiments, but not to limit the scope of the invention. Detailed descriptions of a preferred exemplary embodiment adequate to allow those of ordinary skill in the art to make and use the inventive concepts will follow in later sections.

Various exemplary embodiments relate to a network node in a distributed dynamic cloud, the node including: a memory; and a processor configured to: observe a local queue backlog at the beginning of a timeslot, for each of a plurality of commodities; compute a processing utility weight for a first commodity based upon the local queue backlog of the first commodity, the local queue backlog of a second commodity, and a processing cost; where the second commodity may be the succeeding commodity in a service chain; compute an optimal commodity using the processing utility weights; wherein the optimal commodity is the commodity with the highest utility weight; assign the number of processing resource units allocated to the timeslot to zero when the processing utility weight of the optimal commodity is less than or equal to zero; and execute processing resource allocation and processing flow rate assignment decisions based upon the optimal commodity.

Various exemplary embodiments relate to a method of optimizing cloud control on a network node in a distributed dynamic cloud, the method including observing a local queue backlog at the beginning of a timeslot, for each of a plurality of commodities; computing a processing utility weight for a first commodity based upon the local queue backlog of the first commodity, the local queue backlog of a second commodity, and a processing cost; where the second commodity may be the succeeding commodity in a service chain; computing an optimal commodity using the processing utility weights; wherein the optimal commodity is the commodity with the highest utility weight; assigning the number of processing resource units allocated to the timeslot to zero when the processing utility weight of the optimal commodity is less than or equal to zero; and executing processing resource allocation and processing flow rate assignment decisions based upon the optimal commodity.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to better understand various exemplary embodiments, reference is made to the accompanying drawings, wherein:

FIG. 1 illustrates an exemplary distributed cloud system including a distributed cloud environment and a controller configured to control distribution of cloud services within the distributed cloud environment;

FIG. 2 illustrates a network service chain 200; and

FIG. 3 depicts a high-level block diagram of a computer suitable for use in performing functions described herein.

DETAILED DESCRIPTION

The description and drawings presented herein illustrate various principles. It will be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody these principles and are included within the scope of this disclosure. As used herein, the term "or" refers to a non-exclusive or (i.e., and/or), unless otherwise indicated (e.g., "or else" or "or in the alternative"). Additionally, the various embodiments described herein are not necessarily mutually exclusive and may be combined to produce additional embodiments that incorporate the principles described herein. Further, while various exemplary embodiments are described with regard to cloud networks, it will be understood that the techniques and arrangements described herein may be implemented to facilitate network control in other types of systems that implement multiple types of data processing or data structures.

Distributed cloud networking builds on network functions virtualization (NFV) and software defined networking (SDN) to enable the deployment of network services in the form of elastic virtual network functions instantiated over commercial off-the-shelf (COTS) servers at multiple cloud locations and interconnected via a programmable network fabric. In this evolved virtualized environment, network operators may host a variety of highly adaptable services over a common physical infrastructure, reducing both capital and operational expenses, while providing quality of service guarantees. While this approach may be very attractive for network providers, it may pose several technical challenges. Chief among them may be how to efficiently assign network functions to the various servers in the network. These placement decisions may be coordinated with the routing of network flows through the appropriate network functions, and with resource allocation decisions that determine the amount of resources (for example, virtual machines) allocated to each function.

The problem of placing virtual network functions in distributed cloud networks may be formulated as a generalization of Generalized Assignment (GA) and Facility Location (FL), and an (O(1), O(1)) bi-criteria approximation with respect to both overall cost and capacity constraints may be provided in a prior art solution. In another prior art solution, termed the cloud service distribution problem (CSDP), the goal includes finding the placement of network functions and the routing of network flows that minimize the overall cloud network cost. The CSDP may be formulated as a minimum cost network flow problem, in which flows consume both network and cloud resources as they go through the required virtual functions. The CSDP may be shown to admit polynomial-time solutions under linear costs and fractional flows. However, both of these solutions share two main limitations:

• A static scenario may be considered with a priori known demands. However, with the increasing heterogeneity and dynamics inherent to both service demands and the underlying cloud network, one may argue that online algorithms that enable rapid adaptation to changes in network conditions and service demands are essential.

• The solutions consider a centralized optimization. However, the complexity associated with the resulting global optimization problem and the need to have global knowledge of the service demands limit the use of centralized algorithms, especially in large-scale distributed cloud networks and under time-varying demands.

Embodiments include a dynamic cloud network control (DCNC) algorithm based on augmenting a Lyapunov drift-plus-penalty control method, which had only been used in traditional (transmission) networks, to account for both transmission and processing flows, consuming network and cloud resources. However, several issues remain open for improvement: i) only expected time-averaged performance guarantees were previously considered, and ii) the resulting network delay may be significant, especially in lightly loaded networks.

In the embodiments described herein: i) the bounds for time average cost and time average occupancy (total queue backlog) may be provided with probability 1 (instead of in expected time average); ii) a key shortest transmission-plus-processing distance bias extension to the DCNC algorithm may be designed, which may be shown to significantly reduce network delay without compromising throughput or overall cloud network cost; and iii) simulation results may be presented that illustrate the effect of incorporating the shortest transmission-plus-processing distance bias into the DCNC algorithm, as well as its efficiency in reducing overall cloud network cost and delay for different parameter settings and network scenarios that include up to 110 clients.

Referring now to the drawings, in which like numerals refer to like components or steps, there are disclosed broad aspects of various exemplary embodiments.

FIG. 1 depicts an exemplary distributed cloud system including a distributed cloud environment and a controller configured to control distribution of cloud services within the distributed cloud environment.

The distributed cloud system (DCS) 100 includes a distributed cloud environment 109 having a set of distributed data centers (DDCs) 110-1 - 110-D (collectively, DDCs 110), a communication network (CN) 120, a client device (CD) 130, and a cloud service management system (CSMS) 140.

The DDCs 110 may be configured to support cloud services for cloud consumers. The cloud services may include cloud computing services, Infrastructure as a Service (IaaS), or the like. For example, the cloud services may include augmented reality services, immersive video services, real-time computer vision services, tactile internet services, network virtualization services (e.g., for one or more of routing, data flow processing, charging, or the like, as well as various combinations thereof), or the like. The DDCs 110 may include various types and configurations of resources, which may be used to support cloud services for cloud consumers. The resources may include various types and configurations of physical resources, which may be used to support various types and configurations of virtual resources. The DDCs 110-1 - 110-D may communicate with CN 120 via communication paths 119-1 - 119-D (collectively, communication paths 119), respectively.

The DDCs 110 include respective sets of physical resources (PRs) 112-1 - 112-D (collectively, PRs 112) which may be used to provide cloud services for cloud consumers. For example, PRs 112 of a DDC 110 may include computing resources, memory resources, storage resources, input-output (I/O) resources, networking resources, or the like. For example, PRs 112 of a DDC 110 may include servers, processor cores, memory devices, storage devices, networking devices (e.g., switches, routers, or the like), communication links, or the like, as well as various combinations thereof. For example, PRs 112 of a DDC 110 may include host servers configured to host virtual resources within the DDC 110 (e.g., including server blades organized in racks and connected via respective top-of-rack (TOR) switches, hypervisors, or the like), aggregating switches and routers configured to support communications of host servers within the DDC 110 (e.g., between host servers within the DDC 110, between host servers of the DDC 110 and devices located outside of the DDC 110, or the like), or the like, as well as various combinations thereof. The typical configuration and operation of PRs of a datacenter (e.g., such as PRs 112 of one or more of the DDCs 110) will be understood by one skilled in the art. The PRs 112 of the DDCs 110 may be configured to support respective sets of cloud resources (CRs) 113-1 - 113-D (collectively, CRs 113) for cloud consumers. For example, CRs 113 supported using PRs 112 of a DDC 110 may include virtual computing resources, virtual memory resources, virtual storage resources, virtual networking resources (e.g., bandwidth), or the like, as well as various combinations thereof. The CRs 113 supported using PRs 112 of a DDC 110 may be provided in the form of virtual machines (VMs), virtual applications, virtual application instances, virtual file systems, or the like, as well as various combinations thereof. The allocation of CRs 113 of DDCs 110 may be performed by CSMS 140 responsive to requests for cloud services from cloud consumers (e.g., a request for a cloud service received from client device 130 or any other suitable source of such a request). It will be appreciated that the typical configuration and operation of virtual resources using PRs of a datacenter (e.g., such as configuration and operation of CRs 113 using PRs 112 of one or more of the DDCs 110) will be understood by one skilled in the art.

The DDCs 110 of cloud environment 109 may be arranged in various ways. The DDCs 110 (or at least a portion of the DDCs 110) may be distributed geographically. The DDCs 110 may be located at any suitable geographic locations. The DDCs 110 may be distributed across a geographic area of any suitable size (e.g., globally, on a particular continent, within a particular country, within a particular portion of a country, or the like). The DDCs 110 (or at least a portion of the DDCs 110) may be located relatively close to the end users. The DDCs 110 (or at least a portion of the DDCs 110) may be arranged hierarchically, with larger DDCs 110 having larger amounts of PRs 112 and CRs 113 being arranged closer to the top of the hierarchy (e.g., closer to a core network supporting communications by the larger DDCs 110) and smaller DDCs 110 having smaller amounts of PRs 112 and CRs 113 being arranged closer to the bottom of the hierarchy (e.g., closer to the end users). The DDCs 110 may be provided at existing locations (e.g., where the cloud provider is a network service provider, at least a portion of the DDCs 110 may be implemented within Central Offices (COs) of the network service provider since, as traditional telecommunications equipment deployed in the COs has become more compact, real estate has become available at the COs and may be used for deployment of servers configured to operate as part of a distributed cloud system and, further, because such COs generally tend to be highly networked such that they may be configured to support the additional traffic associated with a distributed cloud system), standalone locations, or the like, as well as various combinations thereof. It will be appreciated that, although primarily presented with respect to an arrangement in which each of the DDCs 110 communicates via CN 120, communication between DDCs 110 may be provided in various other ways (e.g., via various communication networks or communication paths which may be available between DDCs 110). The DDCs 110 of cloud environment 109 may be arranged in various other ways.

The CN 120 may include any communication network(s) suitable for supporting communications within DCS 100 (e.g., between DDCs 110, between CD 130 and DDCs 110, between CD 130 and CSMS 140, between CSMS 140 and DDCs 110, or the like). For example, CN 120 may include one or more wireline networks or one or more wireless networks, which may include one or more of a Global System for Mobile (GSM) based network, a Code Division Multiple Access (CDMA) based network, a Long Term Evolution (LTE) based network, a Local Area Network (LAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), or the like. The CN 120 includes network resources 121 that may be configured to support communications within DCS 100, including support for communications associated with access and use of CRs 113 of DDCs 110 (e.g., between DDCs 110, between CD 130 and DDCs 110, or the like). For example, network resources 121 may include network elements (e.g., data routing devices, control functions, or the like), communication links, or the like, as well as various combinations thereof.

The CD 130 is a client device configured to function within DCS 100. For example, CD 130 may be configured to communicate with CSMS 140 for purposes of requesting cloud services supported by cloud environment 109. For example, CD 130 may be configured to communicate with DDCs 110 for purposes of accessing and using virtual resources allocated for CD 130 within DDCs 110 responsive to requests for cloud services supported by cloud environment 109. For example, CD 130 may be a thin client, a smartphone, a tablet computer, a laptop computer, a desktop computer, a television set-top-box, a media player, a gateway, a server, a network device, or the like.

The CSMS 140 may be configured to receive a cloud service request for CD 130 (e.g., from CD 130 or from a device that is requesting the cloud service on behalf of CD 130), determine a cloud resource allocation in response to the cloud service request for CD 130, and configure cloud environment 109 to support the cloud service requested for CD 130 (e.g., allocation of CRs 113 within DDCs 110 and, optionally, allocation of network resources 121 of CN 120). An exemplary embodiment of a method by which CSMS 140 may provide such functions is discussed below. The CSMS 140 may be configured to perform various other functions supporting the distribution of a cloud service within a cloud environment.

Model and Problem Formulation - Cloud Network Model

In the embodiments herein, one may consider a cloud network modeled as a directed graph $G = (V, E)$ with $|V| = N$ vertices and $|E| = E$ edges representing the set of network nodes and links, respectively. In the context of a cloud network, a node represents a distributed cloud location, in which virtual network functions (VNFs) may be instantiated in the form of virtual machines (VMs) over COTS servers, while an edge represents a logical link (e.g., an IP link) between two cloud locations. One may denote by $\delta(i)$ the set of neighbor nodes of $i$ in $G$.

Cloud and network resources are characterized by their processing and transmission capacity and cost, respectively. In particular, one may define:

• $\mathcal{K}_i$: the set of possible processing resource units at node $i$;

• $\mathcal{K}_{ij}$: the set of possible transmission resource units at link $(i,j)$;

• $C_{i,k}$: the capacity, in processing flow units, resulting from the allocation of $k$ resource units (e.g., VMs) at node $i$;

• $C_{ij,k}$: the capacity, in transmission flow units, resulting from the allocation of $k$ resource units (e.g., 1G links) at link $(i,j)$;

• $w_{i,k}$: the cost of setting up $k$ resource units at node $i$;

• $w_{ij,k}$: the cost of setting up $k$ resource units at link $(i,j)$;

• $e_i$: the cost per processing flow unit at node $i$;

• $e_{ij}$: the cost per transmission flow unit at link $(i,j)$.
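For illustration only, the following Python sketch shows one way the model parameters defined above might be organized in software. All class names, field names, and example values are assumptions of this sketch, not part of the described embodiments.

```python
from dataclasses import dataclass, field

@dataclass
class NodeResources:
    capacity: list      # C_{i,k}: processing flow units for k allocated units, k = 0..K_i
    setup_cost: list    # w_{i,k}: cost of setting up k resource units
    flow_cost: float    # e_i: cost per processing flow unit

@dataclass
class LinkResources:
    capacity: list      # C_{ij,k}: transmission flow units for k allocated units
    setup_cost: list    # w_{ij,k}
    flow_cost: float    # e_ij: cost per transmission flow unit

@dataclass
class CloudNetwork:
    nodes: dict = field(default_factory=dict)   # node id -> NodeResources
    links: dict = field(default_factory=dict)   # (i, j)  -> LinkResources

    def neighbors(self, i):
        """Return the neighbor set delta(i): nodes j with an outgoing link (i, j)."""
        return [j for (a, j) in self.links if a == i]

# Example: two cloud nodes joined by one bidirectional logical link.
net = CloudNetwork()
net.nodes["A"] = NodeResources(capacity=[0, 10, 20], setup_cost=[0, 2, 3], flow_cost=0.1)
net.nodes["B"] = NodeResources(capacity=[0, 10], setup_cost=[0, 2], flow_cost=0.2)
net.links[("A", "B")] = LinkResources(capacity=[0, 5, 10], setup_cost=[0, 1, 2], flow_cost=0.05)
net.links[("B", "A")] = LinkResources(capacity=[0, 5, 10], setup_cost=[0, 1, 2], flow_cost=0.05)
```

Note that `capacity[0] == 0` encodes the option of allocating zero resource units, matching the $k^* = 0$ case used by the control algorithm later in this description.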

Model and Problem Formulation - Network Service Model

In some embodiments, a network service $\phi \in \Phi$ may be described by a chain of VNFs. One may denote by $\mathcal{M}_\phi = \{1, \dots, M_\phi\}$ the ordered set of VNFs of service $\phi$. Hence, the tuple $(\phi, m)$, with $\phi \in \Phi$ and $m \in \mathcal{M}_\phi$, identifies the $m$-th function of service $\phi$. One may refer to a client as a source-destination pair $(s, d)$, with $s, d \in V$. A client requesting network service $\phi$ implies the request for the network flows originating at source node $s$ to go through the sequence of VNFs specified by $\mathcal{M}_\phi$ before exiting the network at destination node $d$.

Each VNF has (possibly) different processing requirements, which may also vary among cloud locations. One may denote by $r^{(\phi,m)}$ the processing-transmission flow ratio of VNF $(\phi, m)$. That is, when one transmission flow unit goes through VNF $(\phi, m)$, it occupies $r^{(\phi,m)}$ processing resource units. In addition, the service model also captures the possibility of flow scaling. One may denote by $\xi^{(\phi,m)}$ the scaling factor of VNF $(\phi, m)$. That is, the size of the output flow of VNF $(\phi, m)$ is $\xi^{(\phi,m)}$ times larger than its input flow. One may refer to a VNF with $\xi^{(\phi,m)} > 1$ as an expansion function, and to a VNF with $\xi^{(\phi,m)} < 1$ as a compression function. Moreover, a processing delay $t_{\mathrm{pr}}$ (in time units) may be incurred in executing VNF $(\phi, m)$ at node $i$, as long as the processing flow satisfies the node capacity constraint.

One may note that the service model applies to a wide range of cloud services that go beyond VNF services, including, for example, Internet of Things (IoT) services, which are expected to largely benefit from the proximity and elasticity of distributed cloud networks.

One may adopt a multi-commodity-chain flow model, in which a commodity represents a network flow at a given stage of a service chain. One may use the triplet $(d, \phi, m)$ to identify a commodity flow that may be the output of the $m$-th function of service $\phi$ for client $d$. The source commodity of service $\phi$ for client $d$ may be identified by $(d, \phi, 0)$, and the final commodity delivered to $d$ by $(d, \phi, M_\phi)$, as illustrated in FIG. 2.

FIG. 2 illustrates a network service chain 200. Network service chain 200 may include a service $\phi \in \Phi$ composed of $M_\phi$ functions. Service $\phi$ takes source commodity $(d, \phi, 0)$ and delivers final commodity $(d, \phi, M_\phi)$ after going through the sequence of functions $(\phi, 1), \dots, (\phi, M_\phi)$. Function $(\phi, m)$ takes commodity $(d, \phi, m-1)$ and generates commodity $(d, \phi, m)$.
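As a concrete illustration of the commodity chain, the following Python sketch enumerates the commodities $(d, \phi, m)$ for a hypothetical two-function service; the service name and the $r$ and $\xi$ values are invented for the example.

```python
# A hypothetical service phi with M_phi = 2 functions: function (phi, 1) is an
# expansion function (xi > 1), function (phi, 2) is a compression function (xi < 1).
service = {
    "phi": "example-service",            # service name (hypothetical)
    "functions": [
        {"m": 1, "r": 2.0, "xi": 1.5},   # (phi, 1): 2 processing units per flow unit
        {"m": 2, "r": 1.0, "xi": 0.5},   # (phi, 2): halves the flow size
    ],
}

def commodity_chain(d, service):
    """List the commodities (d, phi, m), m = 0..M_phi, traversed by client d's flow."""
    M = len(service["functions"])
    return [(d, service["phi"], m) for m in range(M + 1)]

print(commodity_chain(("s1", "d1"), service))
# [(('s1', 'd1'), 'example-service', 0), (('s1', 'd1'), 'example-service', 1),
#  (('s1', 'd1'), 'example-service', 2)]
```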

Model and Problem Formulation - Queuing Model

In some embodiments, one may consider a time-slotted system with slots normalized to integral units $t \in \{0, 1, 2, \dots\}$. One may denote by $a_i^{(d,\phi,m)}(t)$ the exogenous arrival rate of commodity $(d, \phi, m)$ at node $i$ during timeslot $t$, and by $\lambda_i^{(d,\phi,m)} = \mathbb{E}\{a_i^{(d,\phi,m)}(t)\}$, referred to as the average input rate. One may assume that $a_i^{(d,\phi,m)}(t)$ may be independently and identically distributed (i.i.d.) across timeslots.

At each timeslot $t$, every node makes a transmission and processing decision on all of its output interfaces. One may use $\mu_{ij}^{(d,\phi,m)}(t)$ to denote the assigned flow rate at link $(i,j)$ for commodity $(d, \phi, m)$ at time $t$, $\mu_{i,\mathrm{pr}}^{(d,\phi,m)}(t)$ to denote the assigned flow rate from node $i$ to its processing unit for commodity $(d, \phi, m)$ at time $t$, and $\mu_{\mathrm{pr},i}^{(d,\phi,m)}(t)$ to denote the assigned flow rate from node $i$'s processing unit to node $i$ for commodity $(d, \phi, m)$ at time $t$.

During network evolution, internal network queues buffer packets according to their commodities. One may define the queue backlog of commodity $(d, \phi, m)$ at node $i$, $Q_i^{(d,\phi,m)}(t)$, as the amount of commodity $(d, \phi, m)$ in the queue of node $i$ at the beginning of timeslot $t$. The process $Q_i^{(d,\phi,m)}(t)$ evolves according to the following queuing dynamics, where $[x]^+$ may be used to denote $\max\{x, 0\}$:

$$Q_i^{(d,\phi,m)}(t+1) \leq \Big[ Q_i^{(d,\phi,m)}(t) - \sum_{j \in \delta(i)} \mu_{ij}^{(d,\phi,m)}(t) - \mu_{i,\mathrm{pr}}^{(d,\phi,m)}(t) \Big]^+ + \sum_{j \in \delta(i)} \mu_{ji}^{(d,\phi,m)}(t) + \mu_{\mathrm{pr},i}^{(d,\phi,m)}(t) + a_i^{(d,\phi,m)}(t).$$
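A minimal Python sketch of the queuing dynamics above, for a single commodity queue at one node, treating the upper bound as the actual update; function and variable names are illustrative assumptions.

```python
def queue_update(Q, tx_out, proc_in, tx_in, proc_out, arrivals):
    """One-slot backlog update for a single commodity queue at one node.

    Q        : backlog Q_i(t) at the start of the slot
    tx_out   : sum of assigned outgoing link rates, sum_j mu_ij(t)
    proc_in  : assigned rate into the local processing unit, mu_{i,pr}(t)
    tx_in    : sum of assigned incoming link rates, sum_j mu_ji(t)
    proc_out : rate out of the local processing unit, mu_{pr,i}(t)
    arrivals : exogenous arrivals a_i(t)
    """
    served = max(Q - tx_out - proc_in, 0.0)   # the [x]^+ operation
    return served + tx_in + proc_out + arrivals

# Example: backlog 10, send 4 on links, process 3, receive 2, 1 new arrival -> 6.0
print(queue_update(10.0, tx_out=4.0, proc_in=3.0, tx_in=2.0, proc_out=0.0, arrivals=1.0))
```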

In addition, at each timeslot $t$, cloud network nodes may also make resource allocation decisions. One may denote by $y_{ij,k}(t)$ the binary variable indicating the allocation of $k$ transmission resource units at link $(i,j)$ in timeslot $t$, and by $y_{i,k}(t)$ the binary variable indicating the allocation of $k$ processing resource units at node $i$ in timeslot $t$.

Model and Problem Formulation - Problem Formulation

The problem formulation goal may include designing a control algorithm that, given exogenous arrival rates $a_i^{(d,\phi,m)}(t)$ with average input rate matrix $\boldsymbol{\lambda} = \{\lambda_i^{(d,\phi,m)}\}$, may support all service demands while minimizing the average cloud network cost. Specifically, one may require the cloud network to be rate stable, i.e.,

$$\lim_{t \to \infty} \frac{Q_i^{(d,\phi,m)}(t)}{t} = 0 \quad \text{with probability 1}, \quad \forall i, (d, \phi, m).$$

The dynamic service distribution problem (DSDP) may then be formulated as follows:

$$\min \ \lim_{t \to \infty} \frac{1}{t} \sum_{\tau=0}^{t-1} h(\tau) \quad (3)$$

such that the cloud network may be rate stable with input rate $\boldsymbol{\lambda}$, (4)

$$\mu_{\mathrm{pr},i}^{(d,\phi,m)}(t) = \xi^{(\phi,m)}\, \mu_{i,\mathrm{pr}}^{(d,\phi,m-1)}(t), \quad \forall i, t, (d, \phi, m), \quad (5)$$

$$\sum_{(d,\phi,m)} \mu_{ij}^{(d,\phi,m)}(t) \leq \sum_{k \in \mathcal{K}_{ij}} C_{ij,k}\, y_{ij,k}(t), \quad \forall (i,j), t, \quad (6)$$

$$\sum_{(d,\phi,m)} r^{(\phi,m+1)}\, \mu_{i,\mathrm{pr}}^{(d,\phi,m)}(t) \leq \sum_{k \in \mathcal{K}_i} C_{i,k}\, y_{i,k}(t), \quad \forall i, t, \quad (7)$$

where (5) describes the instantaneous commodity-chain constraints, (6) and (7) are instantaneous transmission and processing capacity constraints, and the cost function may be given by

$$h(t) = \sum_{i \in V} \Big[ \sum_{k \in \mathcal{K}_i} w_{i,k}\, y_{i,k}(t) + e_i \sum_{(d,\phi,m)} r^{(\phi,m+1)}\, \mu_{i,\mathrm{pr}}^{(d,\phi,m)}(t) \Big] + \sum_{(i,j) \in E} \Big[ \sum_{k \in \mathcal{K}_{ij}} w_{ij,k}\, y_{ij,k}(t) + e_{ij} \sum_{(d,\phi,m)} \mu_{ij}^{(d,\phi,m)}(t) \Big].$$

In the following section, one may present a dynamic control algorithm that obtains arbitrarily close-to-optimal solutions to the above problem formulation in a fully distributed fashion.

Dynamic Cloud Network Control

In some embodiments, one may first describe a distributed dynamic cloud network control (DCNC) strategy that extends the Lyapunov drift-plus-penalty algorithm to account for both transmission and processing flow scheduling and resource allocation decisions. One may then show that DCNC provides arbitrarily close-to-optimal solutions with probability 1. Finally, one may present E-DCNC, an enhanced dynamic cloud network control algorithm that introduces a shortest transmission-plus-processing distance bias to reduce network delay without compromising throughput or average cloud network cost.

DCNC algorithm

Local transmission decisions: At the beginning of each timeslot $t$, each node $i$ observes the queue backlogs of all its neighbors and performs the following operations for each of its outgoing links $(i,j) \in E$:

1. For each commodity $(d, \phi, m)$, compute the transmission utility weight

$$W_{ij}^{(d,\phi,m)}(t) = Q_i^{(d,\phi,m)}(t) - Q_j^{(d,\phi,m)}(t) - V e_{ij},$$

where $V \geq 0$ may be a non-negative control parameter that determines the degree to which cost minimization is emphasized.

2. Compute the optimal commodity $(d, \phi, m)^* = \arg\max_{(d,\phi,m)} W_{ij}^{(d,\phi,m)}(t)$.

3. If $W_{ij}^{(d,\phi,m)^*}(t) \leq 0$, then $k^* = 0$. Otherwise, $k^* = \arg\max_{k \in \mathcal{K}_{ij}} \{ C_{ij,k}\, W_{ij}^{(d,\phi,m)^*}(t) - V w_{ij,k} \}$.

4. Take the following resource allocation and flow rate assignment decisions: $y_{ij,k^*}(t) = 1$; $\mu_{ij}^{(d,\phi,m)^*}(t) = C_{ij,k^*}$; and $\mu_{ij}^{(d,\phi,m)}(t) = 0$ for all $(d, \phi, m) \neq (d, \phi, m)^*$.
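The following Python sketch illustrates one slot of the local transmission decision above, under the assumption that capacities and setup costs are given as lists indexed by $k$ (with `capacities[0] == 0`); all names are illustrative.

```python
def local_transmission_decision(Q_i, Q_j, e_ij, capacities, setup_costs, V):
    """One slot of the DCNC local transmission decision at link (i, j); a sketch
    of the steps above, not production code.

    Q_i, Q_j    : non-empty dicts mapping commodity -> local / neighbor backlog
    e_ij        : cost per transmission flow unit on (i, j)
    capacities  : C_{ij,k} for k = 0..K_ij (capacities[0] == 0)
    setup_costs : w_{ij,k} for k = 0..K_ij
    V           : non-negative control parameter (cost emphasis)
    Returns (optimal commodity, k*, assigned flow rate for that commodity).
    """
    # Step 1: transmission utility weight per commodity.
    W = {c: Q_i[c] - Q_j.get(c, 0.0) - V * e_ij for c in Q_i}
    # Step 2: optimal commodity = highest utility weight.
    c_star = max(W, key=W.get)
    # Step 3: allocate nothing if the best weight is non-positive; otherwise
    # pick k maximizing capacity-weighted utility minus V-weighted setup cost.
    if W[c_star] <= 0:
        return c_star, 0, 0.0
    k_star = max(range(len(capacities)),
                 key=lambda k: capacities[k] * W[c_star] - V * setup_costs[k])
    # Step 4: the optimal commodity gets the full allocated capacity.
    return c_star, k_star, capacities[k_star]
```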

Local processing decisions: At the beginning of each timeslot $t$, each node $i$ observes its local queue backlogs and performs the following operations:

1. For each commodity $(d, \phi, m)$, compute the processing utility weight

$$W_{i,\mathrm{pr}}^{(d,\phi,m)}(t) = \frac{Q_i^{(d,\phi,m)}(t) - \xi^{(\phi,m+1)} Q_i^{(d,\phi,m+1)}(t)}{r^{(\phi,m+1)}} - V e_i.$$

The processing utility weight indicates the benefit of executing function $(\phi, m+1)$ to process commodity $(d, \phi, m)$ into commodity $(d, \phi, m+1)$ at time $t$, in terms of the local backlog reduction per processing unit cost.

2. Compute the optimal commodity $(d, \phi, m)^* = \arg\max_{(d,\phi,m)} W_{i,\mathrm{pr}}^{(d,\phi,m)}(t)$.

3. If $W_{i,\mathrm{pr}}^{(d,\phi,m)^*}(t) \leq 0$, then $k^* = 0$. Otherwise, $k^* = \arg\max_{k \in \mathcal{K}_i} \{ C_{i,k}\, W_{i,\mathrm{pr}}^{(d,\phi,m)^*}(t) - V w_{i,k} \}$.

4. Take the following resource allocation and flow rate assignment decisions: $y_{i,k^*}(t) = 1$; $\mu_{i,\mathrm{pr}}^{(d,\phi,m)^*}(t) = C_{i,k^*} / r^{(\phi,m^*+1)}$; and $\mu_{i,\mathrm{pr}}^{(d,\phi,m)}(t) = 0$ for all $(d, \phi, m) \neq (d, \phi, m)^*$.
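A companion Python sketch of the local processing decision, under the same assumptions as the transmission sketch; the dict-based data structures and names are illustrative, not the embodiments' own.

```python
def local_processing_decision(Q, r, xi, e_i, capacities, setup_costs, V):
    """One slot of the DCNC local processing decision at node i.

    Q           : dict mapping commodity (d, phi, m) -> local backlog
    r, xi       : dicts mapping function (phi, m) -> flow ratio / scaling factor
    e_i         : cost per processing flow unit at node i
    capacities  : C_{i,k} in processing flow units, k = 0..K_i (capacities[0] == 0)
    setup_costs : w_{i,k}
    V           : non-negative control parameter
    Returns (optimal commodity, k*, input flow rate to the processing unit).
    """
    # Step 1: processing utility weight = local backlog reduction per processing
    # unit (processing (d,phi,m) drains Q^{(m)} and feeds xi * flow into Q^{(m+1)}),
    # minus the V-weighted processing cost.
    W = {}
    for (d, phi, m) in Q:
        nxt = (phi, m + 1)
        if nxt not in r:                 # final commodity: no further processing
            continue
        Q_next = Q.get((d, phi, m + 1), 0.0)
        W[(d, phi, m)] = (Q[(d, phi, m)] - xi[nxt] * Q_next) / r[nxt] - V * e_i
    if not W:
        return None, 0, 0.0
    # Step 2: optimal commodity.
    c_star = max(W, key=W.get)
    # Step 3: allocate zero resources if the best weight is non-positive.
    if W[c_star] <= 0:
        return c_star, 0, 0.0
    k_star = max(range(len(capacities)),
                 key=lambda k: capacities[k] * W[c_star] - V * setup_costs[k])
    # Step 4: input flow rate = allocated processing capacity divided by the
    # processing-transmission flow ratio of the executed function.
    d, phi, m = c_star
    return c_star, k_star, capacities[k_star] / r[(phi, m + 1)]
```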

Observe from the above algorithm description that the finite processing delay $t_{\mathrm{pr}}$ may not be involved in the implementation of DCNC. The reason is that omitting $t_{\mathrm{pr}}$ in the scheduling decisions of DCNC does not affect the throughput optimality or the average cost convergence.

Delay Improvement via Enhanced Dynamic Cloud Network Control (E-DCNC)

The DCNC algorithm determines the routes and the service distribution according to the evolving backlog accumulation in the network. However, queue backlogs have to build up in the appropriate direction before yielding efficient routes and service distribution, which may result in degraded delay performance.

The delay performance of multi-hop queuing networks may be improved by introducing a bias term into the weight of the dynamic control algorithm, where the bias term represents the number of hops or the geometric distance to the destination. Control decisions are then made based on the joint observation of the backlog state and the bias term. In order to leverage this technique to reduce end-to-end delay in cloud networks, the bias term needs to capture the effect of both transmission and processing delay. Accordingly, one may propose E-DCNC, an enhanced DCNC algorithm designed to reduce cloud network delay. Specifically, for each queue backlog $Q_i^{(d,\phi,m)}(t)$, one may define a modified queue backlog

$$\tilde{Q}_i^{(d,\phi,m)}(t) = Q_i^{(d,\phi,m)}(t) + \eta\, Y_i^{(d,\phi,m)},$$

where $Y_i^{(d,\phi,m)}$ denotes the shortest transmission-plus-processing distance bias term, and $\eta$ may be a control parameter representing the degree to which one may emphasize the bias with respect to the backlog. Furthermore, one may define

$$Y_i^{(d,\phi,m)} = \begin{cases} \min_{j \in V} \left\{ H(i,j) + Y_j^{(d,\phi,m+1)} \right\}, & m < M_\phi, \\ H(i,d), & m = M_\phi, \end{cases} \quad (14)$$

where $H(i,j)$ represents the number of hops from node $i$ to node $j$ along the shortest path.

E-DCNC may be formed by using $\tilde{Q}_i^{(d,\phi,m)}(t)$ in place of $Q_i^{(d,\phi,m)}(t)$ in the DCNC algorithm, and modifying the condition for choosing $k^* = 0$ to be as follows (see step 3 of the DCNC algorithm description):

• For local transmission decisions: set $k^* = 0$ when $W_{ij}^{(d,\phi,m)^*}(t) \leq 0$ or $Q_i^{(d,\phi,m)^*}(t) = 0$.

• For local processing decisions: set $k^* = 0$ when $W_{i,\mathrm{pr}}^{(d,\phi,m)^*}(t) \leq 0$ or $Q_i^{(d,\phi,m)^*}(t) = 0$.

The motivation for changing the condition for setting $k^* = 0$ may be to avoid unnecessary resource consumption when the utility weights are positive, but the queue of the optimal commodity is empty.
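A short Python sketch of the E-DCNC modifications, assuming the bias terms $Y$ have been precomputed from shortest-path hop counts along the lines of (14); function names are illustrative.

```python
def biased_backlog(Q, Y, eta):
    """Modified queue backlogs used by E-DCNC: Q~ = Q + eta * Y, where Y maps
    each commodity to its shortest transmission-plus-processing distance bias
    and eta controls how strongly the bias is emphasized over the backlog."""
    return {c: Q[c] + eta * Y.get(c, 0.0) for c in Q}

def edcnc_zero_allocation(W_star, Q_star):
    """E-DCNC condition for k* = 0: the best utility weight is non-positive, or
    the (unbiased) queue of the optimal commodity is empty, in which case
    allocating resources would incur cost without moving any real data."""
    return W_star <= 0 or Q_star == 0
```

Because the bias is a fixed additive offset, the decision rules of DCNC run unchanged on the modified backlogs; only the zero-allocation test needs the extra empty-queue check.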

It may be shown that the throughput optimality and average resource cost efficiency of E-DCNC may be guaranteed. Moreover, as shown by simulation experiments, a significantly lower congestion level may be achieved under E-DCNC, demonstrating enhanced delay performance, particularly when the network is lightly loaded, without compromising overall throughput or cloud network cost. In fact, note that with the bias term defined in (14) and under light traffic load, for commodities that require further processing, flows tend to be routed along the path with the smallest combined transmission-processing delay, while for final commodities $(d, \phi, M_\phi)$, data flows follow the shortest path to their corresponding destination node $d$.

FIG. 3 depicts a high-level block diagram of a computer suitable for use in performing functions described herein.

The computer 300 includes a processor 302 (e.g., a central processing unit (CPU) and/or other suitable processor(s)) and a memory 304 (e.g., random access memory (RAM), read only memory (ROM), and the like). The computer 300 also may include a cooperating module/process 305. The cooperating process 305 may be loaded into memory 304 and executed by the processor 302 to implement functions as discussed herein and, thus, cooperating process 305 (including associated data structures) may be stored on a computer readable storage medium, e.g., RAM memory, magnetic or optical drive or diskette, and the like.

The computer 300 also may include one or more input/output devices 306 (e.g., a user input device (such as a keyboard, a keypad, a mouse, and the like), a user output device (such as a display, a speaker, and the like), an input port, an output port, a receiver, a transmitter, one or more storage devices (e.g., a tape drive, a floppy drive, a hard disk drive, a compact disk drive, and the like), or the like, as well as various combinations thereof).

It will be appreciated that computer 300 depicted in FIG. 3 provides a general architecture and functionality suitable for implementing functional elements described herein and/or portions of functional elements described herein. For example, the computer 300 provides a general architecture and functionality suitable for implementing one or more of an element of a DDC 110, a portion of an element of a DDC 110, an element of CN 120, a portion of an element of CN 120, CD 130, a portion of CD 130, CSMS 140, a portion of CSMS 140, or the like.

It will be appreciated that the functions depicted and described herein may be implemented in software (e.g., via implementation of software on one or more processors, for executing on a general purpose computer (e.g., via execution by one or more processors) so as to implement a special purpose computer, and the like) and/or may be implemented in hardware (e.g., using a general purpose computer, one or more application specific integrated circuits (ASIC), and/or any other hardware equivalents). It will be appreciated that at least some of the steps discussed herein as software methods may be implemented within hardware, for example, as circuitry that cooperates with the processor to perform various method steps. Portions of the functions/elements described herein may be implemented as a computer program product wherein computer instructions, when processed by a computer, adapt the operation of the computer such that the methods and/or techniques described herein are invoked or otherwise provided. Instructions for invoking the inventive methods may be stored in fixed or removable media (e.g., non-transitory computer-readable media), transmitted via a data stream in a broadcast or other signal bearing medium, and/or stored within a memory within a computing device operating according to the instructions. It should be apparent from the foregoing description that various exemplary embodiments of the invention may be implemented in hardware and/or firmware. Furthermore, various exemplary embodiments may be implemented as instructions stored on a machine-readable storage medium, which may be read and executed by at least one processor to perform the operations described in detail herein. A machine-readable storage medium may include any mechanism for storing information in a form readable by a machine, such as a personal or laptop computer, a server, or other computing device. Thus, a machine-readable storage medium may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and similar storage media.

It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in machine readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

Although the various exemplary embodiments have been described in detail with particular reference to certain exemplary aspects thereof, it should be understood that the invention is capable of other embodiments and its details are capable of modifications in various obvious respects. As is readily apparent to those skilled in the art, variations and modifications can be effected while remaining within the spirit and scope of the invention. Accordingly, the foregoing disclosure, description, and figures are for illustrative purposes only and do not in any way limit the invention, which is defined only by the claims.