

Title:
BIDIRECTIONAL DATA TRAFFIC CONTROL
Document Type and Number:
WIPO Patent Application WO/2017/119950
Kind Code:
A1
Abstract:
A system includes an egress apparatus communicatively coupled with an ingress apparatus via at least one bi-directional network connection established for a given site. Each of the ingress and egress apparatuses includes a packet categorizer to categorize each of the egress data packets based on packet evaluation thereof with respect to prioritization rules. Packet routing control places each outgoing data packet (from the ingress or egress apparatus) in one of multiple queues according to the categorization of each respective packet to control sending the packets according to the priority of the respective queue into which each packet is placed.

Inventors:
JORDAN MICHAEL R (US)
BASART EDWIN (US)
FRITZ KENT (US)
TOVINO MICHAEL (US)
Application Number:
PCT/US2016/061838
Publication Date:
July 13, 2017
Filing Date:
November 14, 2016
Assignee:
INSPEED NETWORKS INC (US)
International Classes:
H04L12/54; H04L47/6275
Foreign References:
US20090028141A12009-01-29
US20120020246A12012-01-26
US20070140128A12007-06-21
US20090303880A12009-12-10
US20030076838A12003-04-24
Other References:
See also references of EP 3400687A4
Attorney, Agent or Firm:
PITZER, Gary J. (US)
Claims:
CLAIMS

What is claimed is:

1. A method, comprising:

storing, in non-transitory memory, prioritization rules that establish a priority preference for ingress and egress of data traffic for a given site, the given site including a site apparatus to control egress of data traffic and a remote apparatus to control ingress of data traffic with respect to the given site, the site apparatus being coupled with the remote apparatus via at least one bidirectional network connection;

measuring capacity of the at least one network connection for each of egress and ingress of data traffic with respect to the given site;

at the site apparatus, the method comprising:

categorizing each packet in egress data traffic from the given site based on an evaluation thereof with respect to the prioritization rules;

placing each packet in one of a plurality of egress queues at the site apparatus according to the categorization of each respective packet and the measured capacity for egress of data traffic to thereby control sending the packets from the site apparatus to the remote apparatus via a respective network connection according to a priority of the respective egress queue into which each packet is placed; and

at the remote apparatus, the method comprising:

categorizing each packet in ingress data traffic for the given site based on an evaluation thereof with respect to the prioritization rules; and

placing each of the packets in one of a plurality of ingress queues at the remote apparatus according to the categorization of each respective packet and the measured capacity for ingress of data traffic to thereby control sending the packets from the remote apparatus to the site apparatus via a respective network connection according to a priority of the respective ingress queue into which each packet is placed.

2. The method of claim 1, wherein each of the categorizing and the placing is performed at each of the site apparatus and the remote apparatus via machine readable instructions implemented by an operating system kernel of the respective site apparatus and the remote apparatus.

3. The method of claim 2, wherein the method at each of the site apparatus and the remote apparatus further comprises:

evaluating each data packet within the operating system kernel to determine a type of the data traffic based on at least one of an internet protocol, port number or differentiated services code; and

marking the packet within the operating system kernel to specify the type of the data traffic and a respective network interface,

wherein the categorizing at each of the site apparatus and the remote apparatus is performed based on the marking.

4. The method of claim 1, wherein each of the site apparatus and the remote apparatus includes a respective network interface for communicating the data traffic via each of a plurality of network connections, each of the respective network interfaces of the site apparatus and the remote apparatus including a respective set of different priority network queues,

at the site apparatus, the method further comprising:

assigning each session of data traffic to one of the plurality of network interfaces; storing network assignment data to specify which of the plurality of network connections each session of data traffic is assigned;

identifying a session of data traffic for each of the data packets; and selectively routing each of the data packets to its assigned network interface via the respective set of network queues thereof according to the network assignment data for the identified session of data traffic.

5. The method of claim 4, wherein the method at each of the site apparatus and the remote apparatus further comprises:

evaluating each packet within an operating system kernel to determine the session of data traffic to which each respective packet is assigned based on at least four of a source internet protocol (IP) address, a source port, a destination IP address, a destination port and a network protocol of the respective packet.

6. The method of claim 4, wherein, for each of the site apparatus and the remote apparatus, assigning each session of data traffic further comprises:

calculating capacity of each of the plurality of network connections available for outgoing data traffic, each session of data traffic being assigned to one of the plurality of network interfaces based on the calculated capacity.

7. The method of claim 6, wherein the calculated capacity is determined based on at least one of an active capacity measurement or passive capacity measurement for each of the plurality of network connections at each of the site apparatus and the remote apparatus.

8. The method of claim 4, wherein the method at each of the site apparatus and the remote apparatus further comprises:

determining priority of a corresponding network session that is assigned to a given one of the plurality of network connections;

measuring network performance for the given one of the plurality of network connections;

in response to determining that quality of the given one of the plurality of network connections is not within expected operating parameters based on the measured network performance thereof, reassigning the data packets of the corresponding network session to another one of the plurality of network interfaces, and updating the stored network assignment data accordingly, and

in response to determining the quality of the given one of the plurality of network connections is within expected operating parameters, the corresponding network session remaining at its assigned one of the plurality of network interfaces.

9. The method of claim 1, wherein the at least one network connection comprises at least one respective tunnel to encapsulate bidirectional communication of the data traffic between the site apparatus and the remote apparatus via the at least one network connection.

10. The method of claim 9, wherein the at least one network connection is a plurality of network connections between the site apparatus and the remote apparatus, each of the plurality of network connections comprises an associated tunnel to encapsulate the bidirectional communication of the data traffic between the site apparatus and the remote apparatus.

11. The method of claim 10, wherein the site apparatus comprises a respective network interface for each of its network connections, each respective network interface to provide outgoing data packets from the site apparatus to the remote apparatus via the associated tunnel thereof, and

wherein the remote apparatus comprises a respective network interface for each of its plurality of network connections, each respective network interface to provide outgoing data packets from the remote apparatus to the site apparatus via the associated tunnel thereof.

12. The method of claim 11, wherein each of the network interfaces accesses a different network under the control of a respective service provider.

13. The method of claim 11, wherein each of the site apparatus and the remote apparatus includes a set of network queues associated with each of its network interfaces,

at the site apparatus, the method further comprising:

assigning each session of data traffic to one of the plurality of network interfaces, each having a respective set of the egress queues;

storing network assignment data to specify which of the plurality of network interfaces each session of data traffic is assigned;

identifying a respective session of data traffic for each data packet; and selectively routing each data packet to its assigned network interface according to the network assignment data for the identified session of data traffic.

14. The method of claim 13, wherein, at the site apparatus, each of the plurality of network interfaces includes a set of egress queues, such that each outgoing data packet from an application operating in the given site is prioritized by placing it in one of the set of egress queues of its assigned network interface depending on the categorization of each respective outgoing data packet, and

wherein, at the remote apparatus, each of the plurality of network interfaces includes a set of ingress queues, such that each incoming data packet being sent to the application operating in the given site from the remote apparatus is prioritized by placing it in one of the set of ingress queues of its assigned network interface depending on the categorization of each respective incoming data packet.

15. The method of claim 10, wherein each respective tunnel is configured to provide an encrypted channel for the bidirectional communication of the data traffic between the site apparatus and the remote apparatus.

16. The method of claim 1, wherein the categorizing at each of the site apparatus and the remote apparatus further comprises a stateful process to determine the categorization of at least one given session of the data traffic.

17. The method of claim 16, wherein the method at each of the site apparatus and the remote apparatus further comprises:

evaluating each data packet within the operating system kernel to determine a preliminary type for the data traffic;

passing information from the operating system kernel to a user-level application depending on the preliminary type of traffic, the user-level application operating in parallel with and offloading the operating system kernel;

determining, by the user-level application, at least one of the categorization and priority of the at least one given session of the data traffic; and

notifying, by the user-level application, the operating system kernel of the determined categorization and priority to control the placing of each respective packet.

18. The method of claim 1, wherein the method at one or more of the site apparatus and the remote apparatus further comprises: receiving, at a recipient, a set of packets from a sender, the recipient being one of the site apparatus and the remote apparatus, and the sender being the other of the site apparatus and the remote apparatus;

determining a quality score of the received packets; and

providing the sender with feedback based on the quality score.

19. The method of claim 18, further comprising at least one of:

modifying, at the sender, a dynamic capacity for a given network connection based on the feedback,

wherein the method at the sender further comprises:

reassigning one or more sessions of the data traffic to a different network interface based on the dynamic capacity,

changing the priority of one or more sessions of the data traffic based on the dynamic capacity, and/or

adjusting one or more queues based on the dynamic capacity.

20. The method of claim 1, wherein the site apparatus and the remote apparatus define a first egress/ingress pair, the method further comprising:

communicating data traffic between the first egress/ingress pair and another remote apparatus of at least a second egress/ingress pair, wherein the second egress/ingress pair includes a respective site apparatus and the other remote apparatus, the respective site apparatus and the other remote apparatus operating according to the method of claim 1 to control the data traffic therebetween.

21. The method of claim 1, wherein the site apparatus and the remote apparatus define a first egress/ingress pair associated with a first site, the method further comprising:

communicating the data traffic from the remote apparatus to a corresponding remote apparatus of a second egress/ingress pair that is associated with a second site to provide a site-to- site path between the first and second egress/ingress pairs.

22. A system comprising:

an egress apparatus that is communicatively coupled with a remote ingress apparatus via at least one bi-directional network connection established for a given site, the egress apparatus comprising:

memory to store data, the data including machine readable instructions and prioritization rules that establish a priority preference for sending egress data packets from the given site;

one or more processors to access the memory and execute the instructions, the instructions comprising:

a packet evaluator to evaluate the egress data packets in outbound data traffic from the egress apparatus;

a packet categorizer to categorize each of the egress data packets based on the packet evaluation thereof with respect to the prioritization rules; and

packet routing control to place each of the egress data packets in one of a plurality of egress queues at the egress apparatus according to the categorization of each respective packet to thereby control sending the packets from the egress apparatus to the ingress apparatus via a respective network connection of the at least one network connection according to the priority of the respective egress queue into which each packet is placed;

the ingress apparatus comprising:

memory to store data, the data including machine readable instructions and prioritization rules that establish a priority preference for sending ingress data packets to the given site;

one or more processors to access the memory and execute the instructions, the instructions comprising:

a packet evaluator to evaluate the ingress data packets in data traffic being sent from the ingress apparatus to the egress apparatus of the given site;

a packet categorizer to categorize each of the ingress data packets based on the packet evaluation thereof with respect to the prioritization rules; and

packet routing control to place each of the ingress data packets in one of a plurality of ingress queues at the ingress apparatus according to the categorization of each respective packet to thereby control sending the packets from the ingress apparatus to the egress apparatus via the respective network connection according to the priority of the respective ingress queue into which each packet is placed.

23. The system of claim 22, wherein the at least one network connection comprises a plurality of network connections for at least one of the egress apparatus or the ingress apparatus, the instructions for each of the egress apparatus and the ingress apparatus further comprise: session network assignment control to determine a session to which each outgoing data packet belongs and to assign each session to one of the plurality of network connections based on analysis of the plurality of network connections.

24. The system of claim 23, wherein session network assignment control further comprises: a capacity calculator to compute network capacity for each of the plurality of network connections available for outgoing data traffic, the session network assignment control assigning each session of data traffic to one of the plurality of network connections based on the calculated capacity, wherein the capacity calculator computes the capacity based on at least one of an active capacity measurement or passive capacity measurement for each of the plurality of network connections for the at least one of the egress apparatus or the ingress apparatus.

25. The system of claim 23, wherein session network assignment control further comprises a packet evaluator to evaluate each outgoing data packet and determine a session to which each respective outgoing data packet has been assigned and to store network assignment data specifying which of the plurality of network connections each session of data traffic is assigned, the session network assignment control sending outgoing data packets from the at least one of the egress apparatus or the ingress apparatus to the other ingress or egress apparatus via a given one of the plurality of network connections identified by the packet evaluator based on the network assignment data, such that all outgoing data packets associated with each respective session are sent via a common network connection to which each session is assigned.

26. The system of claim 25, wherein, at the egress apparatus, a set of multiple egress queues having different priorities are associated with each of the plurality of network connections, the packet routing control of the egress apparatus placing each of the outgoing data packets in one of the set of multiple egress queues associated with a respective one of the plurality of network connections, which is identified by the packet evaluator, according to the categorization of each respective outgoing data packet.

27. The system of claim 25, wherein the packet evaluator identifies a corresponding network session having high-priority packets, the session network assignment control further comprising: a capacity calculator to measure network performance for at least a given one of the plurality of network connections to which the corresponding network session is assigned; and session link assignment function to reassign the corresponding network session from the given network connection to another one of the plurality of network connections in response to the capacity calculator determining that quality of the given one of the plurality of network connections is not within expected operating parameters, the session link assignment updating the network assignment data to associate the other one of the plurality of network connections with the corresponding network session.

28. The system of claim 23, wherein each session comprises data traffic between an application running within the given site and another application or service external to the given site, the packet evaluator at each of the egress apparatus and the ingress apparatus identifies the session to which each respective outgoing data packet is assigned based on at least five of a source internet protocol (IP) address, a source port, a destination IP address, a destination port and a network protocol of the respective data packet.

29. The system of claim 22, wherein the ingress apparatus resides in one of a cloud or a last mile of a service provider network to access a public wide area network, whereby the ingress apparatus and the egress apparatus cooperate to control bi-directional communication of the data traffic for the given site via the at least one connection.

30. The system of claim 22, wherein the packet evaluator, the packet categorizer, and the packet routing control are executable instructions implemented in an operating system kernel of each of the respective egress apparatus and the ingress apparatus.

31. The system of claim 22, wherein the at least one network connection comprises a plurality of network connections between the egress apparatus and the ingress apparatus, each of the plurality of network connections comprises an associated tunnel to encapsulate bidirectional communication of the data traffic between the egress apparatus and the ingress apparatus.

32. The system of claim 31, wherein each network connection of the egress apparatus comprises a respective network interface to access its associated tunnel, the packet routing control of the egress apparatus sending outgoing data packets from the egress apparatus to the ingress apparatus via one of the tunnels thereof according to which network interface the outgoing data packet is sent, and

wherein each network connection of the ingress apparatus comprises a respective network interface to access its associated tunnel, the packet routing control of the ingress apparatus sending outgoing data packets from the ingress apparatus to the egress apparatus via one of the tunnels thereof according to which network interface the outgoing data packet is sent.

33. The system of claim 32, wherein each of the egress apparatus and the ingress apparatus includes a set of network queues associated with each of its network interfaces, each queue having a different priority used by the network interface for sending the outgoing data packets over an associated network, the instructions for each of the egress apparatus and the ingress apparatus further comprise:

session network assignment control to determine a session to which each outgoing data packet is assigned and to identify one of the network interfaces for sending each respective outgoing data packet based on network assignment data for the determined session, the packet routing control to place each of the outgoing data packets in one of the set of multiple egress queues associated with the selected one of the network interfaces according to the categorization of each respective outgoing data packet and the prioritization rules.

34. The system of claim 22, wherein the packet categorizer for each ingress and egress apparatus further comprises a stateful process to categorize packets of at least one given session of the data traffic.

35. The system of claim 34, wherein for each ingress and egress apparatus,

the packet evaluator, executing in an operating system kernel, performs a preliminary evaluation of at least one data packet for the data traffic and passes information from the operating system kernel to a user-level application based on the preliminary evaluation, the user-level application comprising the categorizer operating in parallel with and offloading the operating system kernel;

the categorizer, executing as the user-level application, determines the categorization and priority of the at least one given session of the data traffic and notifies the operating system kernel of the determined categorization and priority for packets belonging to the given session.

36. The system of claim 22, further comprising a network analyzer at one of the ingress apparatus or the egress apparatus, the network analyzer configured to:

receive a set of packets from a sender apparatus, the sender apparatus being one of the site apparatus and the remote apparatus that is different from the apparatus at which the network analyzer resides;

determine a quality score for the received packets; and

provide the sender apparatus with feedback based on the quality score.

37. The system of claim 31, further comprising a session network link assignment control at the sender apparatus, the session network link assignment control being configured to:

modify a dynamic capacity for a given network connection based on the feedback, the sender apparatus further configured to at least one of:

reassign one or more sessions of the data traffic to a different network interface based on the dynamic capacity,

change the priority of one or more sessions of the data traffic based on the dynamic capacity, or

adjust one or more queues based on the dynamic capacity.

38. The system of claim 22, wherein the egress apparatus and the ingress apparatus define a first egress/ingress pair associated with a first site, the ingress apparatus of the first egress/ingress pair communicating the data traffic with a corresponding remote apparatus of a second egress/ingress pair that is associated with a second site to provide a site-to-site path between the first and second egress/ingress pairs.

39. A method, comprising:

receiving, at a recipient, incoming traffic from a sender, the recipient being one of a site apparatus or a remote apparatus, the site apparatus and the remote apparatus defining an egress-ingress pair of apparatuses that communicate via at least one bi-directional network connection for a given site between the egress-ingress pair in which the site apparatus controls egress of data traffic with respect to the given site and the remote apparatus controls ingress of data traffic with respect to the given site, the sender being one of (i) the other of the site apparatus or the remote apparatus, (ii) an internal apparatus within the given site or (iii) an external apparatus outside the given site;

analyzing the incoming traffic from the sender to identify a quality issue associated with the incoming traffic;

determining that the identified quality issue pertains to at least one network connection for traffic that is external to the at least one bi-directional network connection for the given site between the egress-ingress pair.

40. The method of claim 39, further comprising:

in response to determining that the identified quality issue pertains to resources external to the at least one bi-directional network connection for the given site between the egress-ingress pair, sending a notification to a predetermined entity associated with the given site to identify a location for the identified quality issue.

41. The method of claim 40, further comprising determining the location for the identified quality issue is at least one of within the given site, within a last mile connection, within a network backbone, or in a first mile, the notification specifying the determined location.

42. The method of claim 40, wherein the sender is the external apparatus or application outside the given site and the recipient has multiple connections to the external apparatus or application, one of which being used as a path to communicate at least one session of traffic from the recipient to the sender, the method further comprising, in response to determining that the identified quality issue pertains to the traffic external to the egress-ingress pair, changing a path for the at least one session of traffic being communicated from the recipient to the sender.

43. The method of claim 39, wherein analyzing the incoming traffic further comprises:

analyzing at least one packet in the incoming traffic to identify a type of the incoming traffic; and

if the type of the incoming traffic to the recipient is user datagram protocol (UDP) traffic, calculating at least one of latency, jitter or loss for the UDP traffic;

wherein the method further comprises analyzing the outgoing traffic from the recipient and, if the type of the outgoing traffic is transmission control protocol (TCP) traffic, monitoring re-transmissions in the TCP traffic.

44. The method of claim 39, wherein the ingress apparatus is located at a service provider network hub associated with a data center that provides a service for the given site, the method further comprises determining that the identified quality issue pertains to at least one of the service provided by the data center or a communication link between the network hub and the service provided by the data center.

45. The method of claim 44, further comprising:

in response to determining that the identified quality issue pertains to at least one of the service provided by the data center or the communication link between the network hub and the service, sending a notification to at least one predetermined entity; and

confirming health status of the communication link in response to the notification to ascertain whether the identified quality issue pertains to either an application in the data center or the communication link.

46. The method of claim 39, wherein the sender is the other of the site apparatus or the remote apparatus of the egress-ingress pair and the at least one bi-directional network connection for the given site includes a plurality of network connections, the method further comprises, in response to determining that the identified quality issue pertains to at least one session of traffic being sent over a given one of the plurality of network connections between the egress-ingress pair, changing a path for the at least one session of traffic being communicated between the recipient and the sender to another one of the plurality of network connections.

Description:
TITLE: BIDIRECTIONAL DATA TRAFFIC CONTROL

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims priority from U.S. Patent Application No. 15/148,469, filed May 6, 2016, and entitled BIDIRECTIONAL TRAFFIC CONTROL OF DATA PACKETS, which claims the benefit of U.S. provisional application No. 62/276607, filed 8 January 2016, and entitled BIDIRECTIONAL TRAFFIC CONTROL OF DATA PACKETS, each of which applications is incorporated herein by reference.

TECHNICAL FIELD

[0002] This disclosure relates generally to systems and methods to provide bidirectional traffic control for data packets.

BACKGROUND

[0003] The last mile of the telecommunications network chain, which physically reaches the end-user's premises, is often the speed bottleneck in communication networks. That is, its bandwidth effectively limits the bandwidth of data that can be delivered to the end user. The type of physical medium that delivers the signals can vary according to the service provider. Examples of some physical media that can form the last mile connection for users can include copper wire (e.g., twisted pair) lines, coaxial cable lines, fiber cable and cell towers linking local cell phones to a cellular network. In a given communication network, the last mile links are the most difficult to upgrade since they are the most numerous and thus most expensive part of the system. As a result, there are abundant issues involved with attempting to improve communication services over the last mile.

[0004] Connectionless communication networks employ stateless protocols to individually address and route data packets. Examples of connectionless protocols include user datagram protocol (UDP) and internet protocol (IP). While these and other connectionless protocols have an advantage of low overhead compared to connection-oriented protocols, they include no protocol-defined way to remember where they are in a "conversation" of message exchanges. Additionally, service providers implementing such protocols generally cannot guarantee that there will be no loss, error insertion, misdelivery, duplication, or out-of-sequence delivery of packets. These properties of connectionless protocols further complicate optimizations for communication sessions established between parties.

SUMMARY

[0005] This disclosure relates to systems and methods to control bidirectional data traffic for a site.

[0006] As one example, a method includes storing, in non-transitory memory, prioritization rules that establish a priority preference for ingress and egress of data traffic for a given site. The given site includes a site apparatus to control egress of data traffic and a remote apparatus to control ingress of data traffic with respect to the given site. The site apparatus is coupled with the remote apparatus via at least one bi-directional network connection. The method includes measuring throughput of the at least one network connection for each of egress and ingress of data traffic with respect to the given site. At the site apparatus, the method includes: categorizing each packet in egress data traffic from the given site based on an evaluation thereof with respect to the prioritization rules; and placing each packet in one of a plurality of egress queues at the site apparatus according to the categorization of each respective packet and the measured throughput for egress of data traffic to thereby control sending the packets from the site apparatus to the remote apparatus via a respective network connection according to a priority of the respective egress queue into which each packet is placed. At the remote apparatus, the method includes: categorizing each packet in ingress data traffic for the given site based on an evaluation thereof with respect to the prioritization rules; and placing each of the packets in one of a plurality of ingress queues at the remote apparatus according to the categorization of each respective packet and the measured throughput for ingress of data traffic to thereby control sending the packets from the remote apparatus to the site apparatus via a respective network connection according to a priority of the respective ingress queue into which each packet is placed.

[0007] Another example provides a system that includes an egress apparatus that is communicatively coupled with a remote ingress apparatus via at least one bi-directional network connection established for a given site. The egress apparatus includes memory and a processor. The processor executes instructions that include a packet evaluator to evaluate the egress data packets in outbound data traffic from the egress apparatus and a packet categorizer to categorize each of the egress data packets based on the packet evaluation thereof with respect to the prioritization rules. Packet routing control places each of the egress data packets in one of a plurality of egress queues at the egress apparatus according to the categorization of each respective packet to thereby control sending the packets from the egress apparatus to the ingress apparatus according to the priority of the respective egress queue into which each packet is placed. The ingress apparatus includes memory and a processor, which executes instructions including a packet evaluator to evaluate the ingress data packets in data traffic being sent from the ingress apparatus to the egress apparatus of the given site. A packet categorizer categorizes each of the ingress data packets based on the packet evaluation. Packet routing control places each of the ingress data packets in one of a plurality of ingress queues at the ingress apparatus according to the categorization of each respective packet to thereby control sending the packets from the ingress apparatus to the egress apparatus.

[0008] As yet another example, a method includes receiving, at a recipient, incoming traffic from a sender. The recipient is one of a site apparatus or a remote apparatus. The site apparatus and the remote apparatus define an egress-ingress pair of apparatuses for a given site that communicate via at least one bi-directional network link between the egress-ingress pair in which the site apparatus controls egress of data traffic with respect to the given site and the remote apparatus controls ingress of data traffic with respect to the given site. The method also includes analyzing the incoming traffic from the sender to identify a quality issue associated with the incoming traffic. The method also includes determining that the identified quality issue pertains to the at least one bi-directional network connection for the given site between the egress-ingress pair or the quality issue pertains to resources external to the at least one bidirectional network connection for the given site between the egress-ingress pair.
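
By way of a non-limiting illustration of the quality analysis described above, the following Python sketch computes latency, jitter and loss for a set of received packets and derives a quality score that could be fed back to the sender. The field names, thresholds and score weighting are illustrative assumptions rather than part of the disclosed method.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class PacketSample:
    seq: int          # sequence number carried with the packet
    sent_ms: float    # sender timestamp (milliseconds)
    recv_ms: float    # receiver timestamp (milliseconds)

def quality_feedback(samples, expected_count):
    """Compute latency, jitter and loss for received packets and return a
    quality score to feed back to the sender (weights are illustrative)."""
    if not samples:
        return {"latency_ms": None, "jitter_ms": None, "loss": 1.0, "score": 0.0}
    latencies = [s.recv_ms - s.sent_ms for s in samples]
    avg_latency = mean(latencies)
    # Jitter as mean absolute difference between consecutive latencies.
    jitter = mean(abs(a - b) for a, b in zip(latencies, latencies[1:])) if len(latencies) > 1 else 0.0
    loss = 1.0 - len(samples) / expected_count
    score = max(0.0, 100.0 - 0.2 * avg_latency - 1.0 * jitter - 200.0 * loss)
    return {"latency_ms": avg_latency, "jitter_ms": jitter, "loss": loss, "score": score}
```

Feedback of this kind could be used by the sending apparatus to adjust a dynamic capacity estimate, reassign sessions or adjust queues, consistent with the quality-issue determination described above.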

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] FIG. 1 is a block diagram of a communication system implementing bi-directional traffic control.

[0010] FIG. 2 is a block diagram of an example of a link quality manager that can be utilized to control data traffic.

[0011] FIG. 3 is a block diagram illustrating an example of a session network assignment control.

[0012] FIG. 4 is an example of packet prioritization and routing that can be utilized to implement link quality management.

[0013] FIG. 5 is a block diagram illustrating an example of a communication system and some physical network connections to enable bi-directional traffic control for a site.

[0014] FIG. 6 is a block diagram of a communication system illustrating examples of tunneling connections for bidirectional communication.

[0015] FIG. 7 is a block diagram illustrating examples of data paths that can be implemented via the tunneling connections in the communication system of FIG. 6.

[0016] FIG. 8 is a block diagram illustrating an example of a communication system that includes multiple egress/ingress pairs to provide multiple stages of bi-directional traffic control between a site and a cloud data center.

[0017] FIG. 9 is a block diagram illustrating an example of an enterprise communication system with multiple egress/ingress pairs connected between different sites of the enterprise.

[0018] FIG. 10 is an example of controls for link quality management that can be implemented.

[0019] FIG. 11 is a flow diagram illustrating an example of a method to assign a network connection for a given session.

[0020] FIG. 12 is a flow diagram illustrating an example of a method of reassigning network connections for outbound traffic.

[0021] FIG. 13 is a flow diagram illustrating an example method of localizing a quality issue relating to incoming traffic.

DETAILED DESCRIPTION

[0022] This disclosure relates to systems and methods to control bidirectional data traffic for a site. As disclosed herein, this is achieved by controlling ingress and egress of data with respect to the site through a pair of distributed control apparatuses. For example, an egress control apparatus can be located at the site to control data egress from the site and a corresponding ingress control apparatus can be spaced apart from the site at a predetermined location to control ingress of data traffic to the site. The ingress control apparatus can be located in a cloud or other remote location from the site having at least one network connection to one or more high-bandwidth networks (e.g., the Internet). Each of the egress control apparatus and ingress control apparatus is configured to prioritize data packets that have been categorized as time sensitive and/or high-priority over other data packets. For example, the high-priority data packets can include interactive data traffic, such as voice data, interactive video, interactive gaming or time-sensitive applications. The egress control apparatus and ingress control apparatus for a given site cooperate with each other to provide bidirectional data traffic control for the given site in which higher priority data packets can be sent before other lower priority data packets, thereby maintaining a quality of service for predetermined types of traffic, such as including interactive media and other time-sensitive (e.g., real-time) traffic.

[0023] By way of example, each of the egress and ingress control apparatuses includes a link quality manager, which can be implemented at the operating system kernel level, to categorize and, in turn, determine a corresponding priority for each outbound data packet. Two or more queues can be configured to feed packets to each respective network connection, and there can be any number of one or more network connections. One of the queues is a high priority queue for sending traffic that the link quality manager categorizes as high priority traffic. Lower priority data packets can be placed in the other queue(s). The link quality manager prioritizes each of the data packets by placing it in a corresponding priority queue for sending the outbound packet to the other of the ingress/egress control apparatus. In this way, data packets categorized as high priority are placed in the high priority queue and thus are sent before lower priority traffic, which is placed in other queues.
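
A minimal Python sketch of the per-connection queueing behavior described above follows; the queue depth and the policy of dropping the oldest low-priority packets when the low-priority queue is full are assumptions for illustration only.

```python
from collections import deque

class PriorityQueues:
    """Two queues feeding one network connection: packets in the high-priority
    queue are always sent before packets in the low-priority queue, and the
    oldest low-priority packets are dropped when their queue is full (assumed policy)."""
    def __init__(self, low_capacity=1000):
        self.high = deque()
        self.low = deque(maxlen=low_capacity)  # append drops the oldest entry when full

    def enqueue(self, packet, high_priority):
        (self.high if high_priority else self.low).append(packet)

    def dequeue(self):
        """Called by the network interface driver to get the next packet to send."""
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None
```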

[0024] For example, in response to the user input to configure the priority of traffic, a plurality of packet categories can be established and the link quality manager can utilize the categories to categorize and prioritize traffic. By splitting the routing functionality into separate egress/ingress control apparatuses that exist at the site and in the cloud, the prioritization can be implemented in a bidirectional manner. Thus, by designating interactive media (e.g., voice, video conferencing or other user-defined applications) as high-priority types of packets and by fixing the assignment of each respective session to a given communication link, the interactive or other user-defined high-priority types of traffic can be communicated bi-directionally at high quality relative to other approaches. Packets identified as time sensitive and requiring high priority are thus placed in high-priority queues for faster communication than other types of traffic.

[0025] As mentioned, in some examples, an ingress or egress control apparatus can include more than one network connection for sending outbound data packets. To mitigate out of order and lost packets, the link quality manager implements session network assignment control to assign each session to a given one of the network connections. Packet prioritization and routing of data packets can be implemented for placing data packets in the appropriate priority queues implemented for each respective network connection. At each egress/ingress control apparatus for a given site, each outbound packet can be evaluated to determine if it matches an existing session. If no existing session is found, a new session can be created, such as by storing the session information in a corresponding session table.
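
As a rough illustration of the session matching just described, the following sketch keys a session table on the packet's source/destination addresses, ports and protocol and creates a new entry when no existing session matches; the field names and the assignment callback are hypothetical.

```python
sessions = {}  # (src_ip, src_port, dst_ip, dst_port, proto) -> assigned connection

def route_packet(pkt, assign_connection):
    """Look up the packet's session; create and assign a new session if needed."""
    key = (pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
    if key not in sessions:
        # New session: pick a network connection (e.g., by weighted round robin).
        sessions[key] = assign_connection(key)
    return sessions[key]  # all packets of the session use the same connection
```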

[0026] In addition to the initial assignment for each respective session, the link quality manager can reassign an ongoing session under predetermined circumstances. For instance, in response to determining that capacity of a network has changed sufficiently to adversely affect transmission of high priority data packets (e.g., based on passive and/or active network quality measurements), the corresponding session can be reassigned from a current network connection to another network connection. A failure of a network connection can result in all sessions assigned to such failed network being reassigned. The reassignment can be implemented according to the same or a different assignment method than is implemented for the original assignment. For each network connection that is operational, the corresponding packet prioritization and routing can be implemented to ensure high priority outbound packets are effectively sent ahead of lower priority packets. Since the prioritization and routing is performed at each of the egress control apparatus and the ingress control apparatus, a high quality of the time-sensitive bi-directional data traffic can be maintained for the site.
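
The reassignment behavior described above might be sketched as follows, assuming a per-connection quality measurement is available; the specific loss and jitter thresholds are illustrative and not prescribed by the disclosure.

```python
def maybe_reassign(session_key, sessions, connections, measure_quality,
                   loss_threshold=0.02, jitter_threshold_ms=30.0):
    """Reassign a session when its current connection has failed or its measured
    quality falls outside expected operating parameters (illustrative thresholds)."""
    current = sessions[session_key]
    q = measure_quality(current)            # passive and/or active measurement
    healthy = (q["up"] and q["loss"] < loss_threshold
               and q["jitter_ms"] < jitter_threshold_ms)
    if healthy:
        return current                      # session stays on its assigned connection
    # Pick another operational connection; the original assignment method could be reused.
    candidates = [c for c in connections if c != current and measure_quality(c)["up"]]
    if candidates:
        sessions[session_key] = min(candidates, key=lambda c: measure_quality(c)["loss"])
    return sessions[session_key]
```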

[0027] FIG. 1 depicts an example of a communication system 10 that includes an egress control apparatus (also referred to as a site apparatus) 12 and an ingress control apparatus (also referred to as a cloud apparatus) 14 that are configured to cooperate for providing bi-directional traffic control for a corresponding site 16. As used herein, a site can refer to a location and/or equipment that can access one or more wide area network (WAN) connections 18. For example, a site can be an enterprise, such as an office, business or home that includes a set of computing devices associated with one or more users. As another example, a site can be an individual user, such as may have access to one or more data networks (e.g., WiFi network and/or cellular data network) via one or more devices, such as a smart phone, desktop computer, tablet computer, notebook computer or the like. When a user has access to the WAN via more than one device, each respective device can itself be considered a site within the scope of this disclosure. Thus, as used herein, the site can be an endpoint or an intermediate location of the network that is spaced apart from the ingress apparatus (i.e., the egress control apparatus 12 and the ingress control apparatus 14 define an egress/ingress pair that can be distributed across any portion of the network). As a practical matter, the egress/ingress pair tends to realize performance improvements when located in the network across transitions from high to low capacity or other locations that present quality and/or capacity issues.

[0028] The connections 18 can provide internet or other wide area connections according to a data plan associated with the site (e.g., via contract or subscription to one or more internet service providers). As an example, the connection 18 can provide data communications via a wired physical link (e.g., coaxial cable, digital subscriber line (DSL) over twisted pair, optical fiber, Ethernet WAN) or a wireless physical link (e.g., wireless metropolitan network (WIMAN), cellular network) for providing bi-directional data communications with respect to the site 16. Each such physical link can employ one or more physical layer technologies to provide for corresponding transmission of data traffic via each of the respective connections 18. The egress control apparatus 12 thus is located at the site and communicates with the ingress control apparatus 14 via its one or more connections 18. For the example where the site is implemented as a smart phone or other mobile computing device, such smart phone device can include the site apparatus 12 implemented as hardware and/or software to control egress of traffic with respect to the site (e.g., smart phone), and the site apparatus cooperates with a corresponding ingress apparatus 14 that is configured to control ingress of traffic with respect to such site, as disclosed herein. Since the smart phone is portable, its physical connections 18 can change according to the available set of one or more connections (e.g., one or more cellular and/or one or more local wireless LAN) at a given location where the smart phone resides. In some examples, the same logical connections can be maintained between the ingress and egress apparatuses 12 and 14 as the portable device moves from one location to another.

[0029] In some examples, such as where the site provides data communication for a plurality of users and/or user devices, the site can also include a local site network 20. For example, one or more applications 22 can be implemented at the site 16 for communicating data with one or more other external applications (e.g., an end user application or service) via the site network 20 through the egress control apparatus 12. Such external application can be implemented within a corresponding computing cloud (e.g., a high speed private and/or public wide area network, such as including the internet). The corresponding computing cloud may be private or public, or at a private data center or on servers within another enterprise site. Each of the respective applications 22 can be implemented as machine executable instructions executed by a processor or computing device (e.g., the IP phone, tablet computer, laptop computer, desktop computer or the like).

[0030] As disclosed herein, the egress control apparatus 12 is communicatively coupled with the ingress control apparatus 14 via a tunnel on one or more network connections 18 of a network infrastructure. The tunnel encapsulates an application's egress packets with a new header that specifies the destination address of the ingress control apparatus 14, allowing the packet to be routed to the ingress control apparatus before going to its ultimate destination included in each of the egress packets. The egress control apparatus 12 operates to control outbound data packets that are sent from the site 16, such as from the applications 22, the network 20 or the apparatus itself to another resource (e.g., an application executing on a device, such as a computing device). Specifically, the egress control apparatus 12 controls sending data packets via one or more egress links 26 of the network connection 18. Similarly, the ingress control apparatus 14, which is located in the cloud or other remote network location, controls and manages ingress of data packets to the site 16 via one or more ingress links 28 of the network connection 18.

[0031] For example, each of the egress link 26 and the ingress link 28 for the site 16 can include one or more network connections hosted by a contracted network service provider (e.g., an internet service provider (ISP)). Thus, when each of the links 26 and 28 include multiple different network connections, each link can provide an aggregate network connection having a corresponding aggregate bandwidth allocation that is made available from the set of service providers according to each respective service plan and provider capabilities, much of which is outside the control of the site. For example, a service plan for a given service provider can be designed to provide the site (e.g., a customer) an estimated amount of upload bandwidth and another amount of download bandwidth. The upload and download bandwidth (e.g., a static available bandwidth) thus constrains the amount of data traffic via the portion of the egress connection 26 and ingress connection 28 attributable to the service plan from the given service provider. When the egress and ingress connections involve multiple connections, the constraints on data traffic are summed across each of the connections. While a service provider may provide the static bandwidth in terms of maximum or "guaranteed" bandwidth, in practice, each of the connections 26 and 28 can become saturated with data, which can result in interactive data, such as video or other media streams, developing excessive jitter and losing packets, resulting in poor quality.

[0032] In some examples, the portion of the network 18 between the egress control apparatus 12 and the ingress control apparatus 14 can include the 'last mile' of the telecommunications network for customers, such as corresponding to or including the physical connection from the site 16 to the provider's main network high capacity infrastructure. It is understood that a particular length of a connection 18 between the egress control apparatus and ingress control apparatus is not necessarily literally a mile but corresponds to a physical or wireless connection between the subscriber's site 16 and the service provider network. For instance, a portion of the network 18 thus can correspond to copper wire subscriber lines, coaxial service drops, and/or cell towers linking to cellular network connections (including the backhaul to the cellular provider's backbone network). Portions of the service provider's network beyond the last mile connection 18 are demonstrated in the cloud at 28, corresponding to the high-speed, high-bandwidth portion of the cloud 24. For example, the egress control apparatus 12 is located at the site 16 generally at an end of the last mile link and the ingress control apparatus 14 is located on the other side of the last mile link, such as in the cloud 24 connected with one or more networks' high capacity infrastructure, corresponding to link 28.

[0033] While the foregoing example describes the egress apparatus at an enterprise site and the ingress apparatus at the other side of a last mile link, in other examples, the egress/ingress pair can be distributed at other locations of the network. For example, an egress/ingress pair 12, 14 can be provided across a peering point where separate WANs (e.g., internet networks) exchange bidirectional traffic between users of the network. As another example, an egress/ingress pair 12, 14 can be provided across a portion of a backbone network that is known to exhibit congestion or failure. Thus, as used herein, a given egress/ingress pair can be provided across any portion of the network or across the entire network.

[0034] Each of the egress control apparatus 12 and the ingress control apparatus 14 can include hardware and machine-executable instructions (e.g., software and/or firmware) to implement a link quality manager 30 and 32, respectively. Each of the link quality managers 30 and 32 operate in the communication system to dynamically control outbound data traffic via each of the respective egress and ingress connections 26 and 28 by prioritizing how outbound data packets are sent across the link 18. As a result, the link quality managers 30 and 32 cooperate to provide bidirectional traffic control that realizes increased quality for interactive as well as other types of data that may be identified as being important to the user. The link quality manager 30 can provide traffic control for both egress and ingress data packets, which can be programmable in response to a user input. For example, a user can specify one or more categories of data packets that are designated high priority data packets to be sent out over the link 18 before other lower priority data packets. In a simple example, there may be two categories of data: high-priority data and low priority data. For example, interactive and other time-sensitive data can be categorized as having priority over other traffic that can be categorized as low priority traffic. The low priority data can correspond to data that is either explicitly determined to be low priority or correspond to traffic having no priority. There can be any number of priority levels for different categorizations of data packets. In some examples where lower priority traffic is sent after high priority traffic (e.g., traffic categorized as interactive or time-sensitive), if the low priority queue becomes full (e.g., due to continually sending out high priority traffic via the network connection), the low priority traffic may be dropped (e.g., discarded) from its queue.

[0035] As mentioned, in some communications systems, the network connection 18 includes a plurality of different, separate physical connections from one or more service providers. Given multiple distinct network connections, each link quality manager 30 and 32 is further programmed to assign each data flow to a corresponding session, and each session can be assigned to a respective one of the network connections, such as by specifying its network interface in the control apparatus 12 or 14. As used herein, a session refers to a persistent logical linking of two software application processes (e.g., running as machine executable instructions on a device), which enables them to exchange data over a time period. A given session can be determined strictly by examining source and destination IP addresses, source and destination port number, and protocol. For example, transmission control protocol (TCP) sessions are torn down using protocol requests, and the link quality manager closes a session when it sees the TCP close packet. As another example, user datagram protocol (UDP) packets are part of one-way or two-way sessions. A two-way session is identified in response to the link quality manager 30 detecting a return packet from a UDP session, and is closed when there is no activity for some prescribed number of seconds. Sessions that have no return packets are one-way and are closed after, typically, a shorter number of seconds. The operating system kernel for each apparatus 12, 14 thus can open and close sessions.
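
The session open/close behavior described above could be approximated as in the following sketch; the specific idle timeout values are assumptions, since the disclosure only refers to a prescribed number of seconds, and the packet field names are illustrative.

```python
import time

TWO_WAY_IDLE_S = 60   # assumed idle timeout for two-way UDP sessions
ONE_WAY_IDLE_S = 15   # assumed (shorter) timeout for one-way UDP sessions

def update_session(session, pkt, now=None):
    """Track session state: close TCP sessions on a close packet, mark UDP
    sessions as two-way when a return packet is seen, and record activity time."""
    now = now or time.time()
    session["last_activity"] = now
    if pkt["proto"] == "tcp" and pkt.get("fin_or_rst"):
        session["closed"] = True
    elif pkt["proto"] == "udp" and pkt["direction"] == "return":
        session["two_way"] = True

def expire_sessions(sessions, now=None):
    """Close sessions that are marked closed or idle longer than their timeout."""
    now = now or time.time()
    for key, s in list(sessions.items()):
        idle = now - s["last_activity"]
        limit = TWO_WAY_IDLE_S if s.get("two_way") else ONE_WAY_IDLE_S
        if s.get("closed") or idle > limit:
            del sessions[key]
```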

[0036] In some examples where a plurality of different network connections form the egress connection (e.g., an aggregate egress connection) 26 in the network 18, the link quality manager 30 can assign each session to a given network connection when it is created. Similarly, where a plurality of different network connections form the ingress connection (e.g., an aggregate ingress connection) 28 in the network 18, the link quality manager 32 assigns each new session to a given network connection. Typically, each respective session uses the same network connection for outbound and inbound traffic at each control apparatus 12, 14. The assignment of sessions to a network can be stored (e.g., as a sessions table or other data structure) in memory. The network assignment for each session remains fixed for all data packets in that session until, if circumstances warrant, the session is reassigned to another of the plurality of available networks in the aggregate connection. Examples of some approaches that the link quality manager can use to assign sessions to one of the network connections can include a round robin assignment, a capacity and/or load-based assignment (i.e., "weighted round robin"), a static performance determination or dynamic capacity determination (see, e.g., FIG. 3).
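
One possible capacity-based assignment (a variant of the weighted round robin mentioned above) is sketched below, assuming each network connection reports a calculated outgoing capacity and the load already assigned to it; this is only one of the assignment methods the paragraph lists, and the figures are hypothetical.

```python
def assign_session(connections):
    """Assign a new session to the connection with the most spare calculated
    capacity (calculated capacity minus load already assigned to it)."""
    best = max(connections, key=lambda c: c["capacity_kbps"] - c["assigned_kbps"])
    return best["name"]

# Example with hypothetical figures:
links = [
    {"name": "dsl",   "capacity_kbps": 20000, "assigned_kbps": 15000},
    {"name": "cable", "capacity_kbps": 50000, "assigned_kbps": 20000},
]
print(assign_session(links))  # -> "cable"
```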

[0037] As a further example, there can be a plurality of queues implemented for each network connection 26 to enable categorization and prioritization of the outbound data packets to be sent from the site (e.g., by one of the applications 22) via a selected connection. As used herein, each queue can be used by a network interface driver to retrieve the data packets for sending out via a respective network connection according to the established priority for its queues. The queues for each network connection can be configured by and operate within the operating system kernel to facilitate processing of the data packets (e.g., in real time). The queues can include a data structure in physical or virtual memory. For instance, each queue can store data packets in a first-in-first-out (FIFO) data structure. The actual data packets from the IP stack can be stored in the queue or, in other examples, the queue can contain pointers to the data packets in the IP stack. For instance, each queue can consist of descriptors that point to other data structures, such as socket kernel buffers that hold the packet data. Such other data structures can be used throughout the kernel for processing such packets. The network interface driver for each network connection prioritizes all data packets in the high priority queue by sending them via the network before packets in each lower priority queue.
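
As a simplified illustration (with hypothetical names, and not necessarily how any particular kernel implements its queues), the following Python sketch shows per-connection FIFO queues that hold descriptors referring to packet data stored elsewhere, with the driver servicing the high priority queue before the lower priority queue.

    from collections import deque

    # Each queue holds descriptors (identifiers) into a packet store rather than the
    # packet data itself. One such set of queues exists per network connection.
    packet_store = {}                            # stands in for socket kernel buffers
    queues = {"high": deque(), "low": deque()}   # FIFO queues of descriptors

    def enqueue_descriptor(packet_id, packet_bytes, priority):
        packet_store[packet_id] = packet_bytes   # packet data stays in its buffer
        queues[priority].append(packet_id)       # only a descriptor enters the FIFO

    def driver_next_packet():
        # The network interface driver services the high priority FIFO first.
        for priority in ("high", "low"):
            if queues[priority]:
                packet_id = queues[priority].popleft()
                return packet_store.pop(packet_id)
        return None

    enqueue_descriptor(1, b"bulk transfer", "low")
    enqueue_descriptor(2, b"voice frame", "high")
    print(driver_next_packet())   # b'voice frame' is retrieved before the bulk packet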

[0038] In order to enable placement of data packets in the appropriate priority queues, the link quality manager 30, in kernel space (for efficiency purposes), categorizes each of the outbound data packets, such as provided from one or more of the applications 22. The categorization can be based upon predefined rules that are programmed (e.g., via a corresponding interface in user space) into the link quality manager 30 in response to a user input. In some examples, the user input can correspond to a set of default categorization rules, such as to prioritize interactive types of communication (e.g., voice communication, video communication and/or other interactive forms of communication). Examples of information from data packets that can be analyzed by the link quality manager 30 for data categorization and resulting prioritization can include IP address (e.g., source and/or destination), port numbers, transport protocols, quality of service information (e.g., Differentiated Services Code Point (DSCP)), and packet content. The link quality manager 30 can apply the rules to the analyzed information to ascertain a categorization for each data packet and, in turn, specify a corresponding priority level and queue into which the data packet is placed for sending out via the assigned network connection from the egress control apparatus 12 to the ingress control apparatus 14. In some examples, there may not be enough information within the packet itself, and the link quality manager may require additional packet analysis to determine whether or not the packet is part of a high priority application's traffic and, based on such additional analysis, prioritize such packet properly. In some examples, the additional analysis is implemented by handing off the packet and associated kernel-level data to a user-level application (e.g., via a corresponding application program interface (API)). For example, the link quality manager 30 can interpret a SIP call setup request to determine the port number for a voice call, which preliminary information determined at the kernel level can be utilized by the user-level application along with other information obtained over one or a series of data packets for such session to categorize the session as a voice (e.g., high-priority) session. The user-level application may also determine a priority for such session, and then return the categorization and/or priority information to the kernel via the API for further processing (e.g., by the kernel). For the example of transmitting UDP packets, heuristics can be utilized by the link quality manager 30 to determine if the packet is voice (or another high-priority category). By interpreting other protocols, particular traffic can be correctly identified. For the example of SIP, the link quality manager 30 can identify a SIP packet and then, in a subsequent SIP packet, the link quality manager 30 can ascertain a port number for the UDP traffic, which can be used to categorize the session with particularity.
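
For illustration only, the following Python sketch shows rule-driven categorization using the kinds of fields named above (protocol, port numbers, DSCP); the specific rules and values are assumptions for the example and are not rules required by this disclosure.

    # Each rule maps a set of header fields to a category and priority. Values here
    # (e.g., DSCP 46 for voice, port 5060 for SIP) are common conventions used only
    # as illustrative assumptions.
    PRIORITIZATION_RULES = [
        {"protocol": "udp", "dscp": 46, "category": "voice", "priority": "high"},
        {"protocol": "udp", "dst_port": 5060, "category": "sip", "priority": "high"},
        {"protocol": "tcp", "dst_port": 443, "category": "web", "priority": "low"},
    ]

    def categorize(packet):
        """Return (category, priority) for a packet described as a dict of header fields."""
        for rule in PRIORITIZATION_RULES:
            if all(packet.get(field) == value
                   for field, value in rule.items()
                   if field not in ("category", "priority")):
                return rule["category"], rule["priority"]
        return "uncategorized", "low"   # traffic matching no rule is treated as having no priority

    print(categorize({"protocol": "udp", "dscp": 46, "dst_port": 4000}))  # ('voice', 'high')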

[0039] In response to data packets received from the egress control apparatus 12, the ingress control apparatus (in the cloud 24) removes packets from the tunnel, strips off the tunnel IP header, and performs a routing function to send the packet to the destination application according to the original packet header information (e.g., based on the defined destination address). This packet readdressing mechanism allows traffic to flow from the site application to its destination via the remote ingress apparatus 14. To enable incoming traffic originating from an external application to be received at the cloud ingress control apparatus, instead of directly at the application at the site, the DNS record for the site's application can be modified (e.g., by the ingress or egress control apparatus, or the site administrator) to point to the IP address of the remote ingress control apparatus 14. Thus, the site will receive incoming connections via the ingress control apparatus.

[0040] While the foregoing traffic flow was explained from the perspective of the link quality manager 30 at the egress control apparatus 12, the link quality manager 32 at the ingress control apparatus operates in a substantially similar manner with respect to the ingress data packets sent to the site (e.g., site applications 22). That is, the link quality manager 32 performs categorization and prioritization to send data packets to the egress control apparatus 12 via one or more network connections 28.

[0041] FIG. 2 depicts an example of a link quality manager 50 that can be utilized to control outbound data traffic over one or more network connections 52, demonstrated as networks 1 through N, where N is a positive integer. Thus, there can be one or more networks. The outbound traffic is provided as outbound data packets (e.g., IP packets or other data units) 54, which can be provided to the link quality manager 50 from an application executing on a computing device (e.g., corresponding to applications 22). Thus, the type of data traffic will depend on the application that provides the data. The link quality manager 50 can correspond to the link quality manager 30 or 32 that is utilized in the egress/ingress control apparatus 12 or 14, respectively, disclosed with respect to FIG. 1. Thus, reference can be made back to FIG. 1 for additional context associated with how the link quality manager may be used in a communication system. In some examples, the functions of the link quality manager relating to moving a session from one network to another, as disclosed herein, can be implemented only at the site egress apparatus to facilitate session tracking and management, and the associated ingress control apparatus differs in that it is not required to move sessions to other networks. However, in situations where the ingress control apparatus also operates as an egress control apparatus, as part of another egress-ingress pair, such functionality can be included as part of its corresponding egress control apparatus.

[0042] The link quality manager 50 processes each outbound data packet 54, analyzes the packet and sends the packet out on a selected network 52 connection corresponding to a physical layer. In the example of FIG. 2, the link quality manager 50 includes a session network assignment control block 56. The session network assignment control block 56 can be implemented in hardware, software or firmware (e.g., corresponding to machine readable and executable instructions). Each outbound packet 54 either belongs to an existing session or is assigned to a new session. The session network assignment control 56 analyzes each packet to determine to which session the packet belongs or creates a new session if the packet is not part of an existing session. The network assignment control 56 selects which of the networks 52 to use for sending each data packet based on the network to which the corresponding session has been assigned (e.g., as described in network assignment data). If only one network 52 is available, all sessions would be assigned to such network for sending outbound packets.

[0043] As disclosed herein (see, e.g., FIG. 3), information in each packet can be evaluated to ascertain whether the packet has already been assigned to a session. If the analysis of a given packet indicates that it belongs to an existing session, the network assignment control 56 identifies a network (e.g., by specifying a network interface) for the given packet, and the identified network is used in subsequent processing by packet prioritization and routing function 58. If the outbound packet 54 does not match an existing session, the session network assignment control block 56 creates a new session for that packet and other packets that may follow and belong to the same session. The network assignment control 56 can also tear down (e.g., close) an existing session after it has been completed (e.g., remove the session from a session table). For example, a given session comes into existence when a packet arrives from the site network and is not found in the session table. Sessions can be removed (e.g., closed) either by observing a TCP packet closing a connection that corresponds to an open session or by timing out. As mentioned previously, UDP packets are connectionless, and UDP sessions are timed out and removed after a prescribed time interval with no additional packets in the session. If a UDP packet arrives that belongs to a session that has already timed out, another session can be created.

[0044] In addition to assigning a given session to a respective network to which it will remain assigned for all subsequent packets in the given session, the network assignment control block 56 can also reassign a session. As disclosed herein, for example, packets in certain sessions can be determined (e.g., by packet prioritization/routing block 58) to be high priority packets. In certain conditions, as defined by a set of rules, network assignment control block 56 can reassign a session to a different network connection 52. For example, when it has been determined that quality over the currently assigned network cannot be maintained for timely delivery of outbound packets of high-priority, time-sensitive sessions, the session network assignment control block 56 can reassign the session to a different available network. For example, the determination to reassign can be based on active and/or passive measurements for the outbound traffic that is provided via each of the respective networks 52. While the priority, which can be determined by the packet prioritization/routing block 58, can be utilized to reassign ongoing sessions based upon the active/passive measurements over the network 52, all sessions over a given network can be reassigned if it is determined that the given network connection is lost or if its quality drops below a predetermined threshold. Under normal operating conditions where multiple network connections remain available, however, only those packets determined to be high priority packets (e.g., any packet determined to have sufficient priority - other than low priority traffic or traffic having no priority), as disclosed herein, are analyzed for reassignment to another network.

[0045] Where multiple available networks exist, the packet prioritization/routing block 58 utilizes the network assignment from session network assignment control block 56 to control which network (e.g., selected from Network 1 through Network N) is to be utilized for sending the packets. The packet prioritization/routing block 58 categorizes each of the packets and determines a corresponding priority for sending each data packet via its assigned network. In order to facilitate the prioritization of the outbound packets over the corresponding networks 52, the link quality manager 50 can instantiate a plurality of queues 60 for each of the respective networks 1 through N, such as at least a high priority queue and a low priority queue. The low priority queue(s) can receive traffic that is categorized, explicitly or implicitly, as something other than high priority. As disclosed herein, the packet prioritization/routing block 58 thus can place data packets that are determined to be high priority, time-sensitive data in the high priority queue 60 and other data in one or more available low priority queues.

[0046] As mentioned, the packet prioritization/routing block 58 and the respective queues 60 can be implemented within the operating system kernel based on user instructions. Additionally or alternatively, some functions outside of the kernel can be used. For example, a corresponding service can provide a user interface for implementing and configuring traffic control in the communication system. In response to user instructions via the user interface, the service can establish a set of rules to define data and metadata to categorize and queue the outbound data packets 54 for each network connection. For example, the link quality manager 50 can implement a kernel-level interface to program operating system kernel functions with rules and user-level applications that establish methods for analyzing and categorizing data packets. The packet prioritization/routing block 58 is further programmed to place the data packets in the appropriate queue for each respective network 52 based on the determined categorization.

[0047] For the example of IP data packets, the packet prioritization/routing 58 can employ rules that evaluate IP headers, and depending upon certain categories of traffic derived from the IP headers, the packet prioritization/routing can evaluate additional information that may be provided in the payload or other parts of the respective packets. For instance, a UDP packet can be evaluated to determine its port number, and the port number used to categorize the packet. As another example, identification of a TCP packet can trigger inspection of the payload to determine that it is web traffic for a predetermined web application (e.g., a customer relationship management (CRM) software service like Salesforce or Microsoft Office365), which is considered time-sensitive, high-priority traffic to users at the site. As yet another example, the packet prioritization/routing block 58 can analyze packet headers to categorize interactive media data packets, such as voice and/or video, as time-sensitive, high-priority traffic and, in turn, place such interactive media data packets into the high priority queue of the assigned network. As a further example, the packet prioritization/routing block 58 examines DNS names and well-known IP addresses, which can be preprogrammed or learned during operation, to help identify the application and, in turn, categorize the packets to determine an appropriate priority of such packets.

[0048] As still another example, certain application DNS names or IP addresses can be determined to correspond to interactive or time-sensitive traffic so as to require prioritization. These names and IP addresses can be programmed in response to a user input and/or be learned a priori in response to application of the prioritization/routing control to other traffic. Regardless, such DNS names and IP addresses can be stored as part of the prioritization rules and utilized as part of the packet prioritization/routing block 58 to facilitate sending traffic with priority to and/or from corresponding locations.

[0049] Since priority and non-priority applications use the same protocols, the packet prioritization/routing block 58 can identify traffic that is sent to or received from well-known domain names. As another example, the packet prioritization/routing block 58 can identify traffic based on its resource location, such as can be specified as a uniform resource identifier, such as a uniform resource locator (URL) and/or uniform resource name (URN). For example, a given service provider (e.g., Facebook) uses a variety of applications, including messaging, live two-way voice and live video communication. Business video may be high priority, but Facebook videos may be considered non-business. Accordingly, the packet prioritization/routing block 58 can monitor the traffic to Facebook to identify that the UDP video belongs to Facebook and is not the company's video conferencing application.

[0050] In these and other types of packets, there may be no information in the header indicating whether it is voice or video, especially real-time, interactive video compared to a recording or a one-way broadcast. As a result, to identify a phone call or interactive video, for example, the packet prioritization/routing 58 has to evaluate the SIP traffic used to set up the call and then do deep packet inspection in order to see the IP address and port number for categorizing such voice or video session. That is, the packet prioritization/routing 58 performs a stateful method of categorization, which is not normally done for IP traffic. However, in some examples, since the egress/ingress pair is located near the edge of the network (e.g., last mile or first mile), network connection speeds tend to be lower than at the network core. Consequently, processor computing speeds are sufficient to enable the stateful method of packet categorization at each of the egress and ingress control apparatuses to provide a scalable solution to categorize packets and their respective sessions.
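
As a non-limiting illustration of learning a media port from call setup traffic, the following Python sketch scans the SDP body of a SIP message for an audio media line and then treats subsequent UDP traffic on the learned port as high priority; the parsing shown is a simplified assumption rather than a complete SIP/SDP implementation.

    import re

    # Learn the UDP port for a voice call by inspecting the SDP body of a SIP message,
    # then mark later UDP packets on that port as high priority. Illustrative only.
    learned_voice_ports = set()

    def inspect_sip_payload(payload: str):
        # SDP media lines look like "m=audio 49170 RTP/AVP 0"; capture the port number.
        match = re.search(r"^m=audio\s+(\d+)\s", payload, flags=re.MULTILINE)
        if match:
            learned_voice_ports.add(int(match.group(1)))

    def categorize_udp(dst_port: int) -> str:
        return "high" if dst_port in learned_voice_ports else "low"

    sip_invite = "INVITE sip:bob@example.com SIP/2.0\r\n...\r\n\r\nv=0\r\nm=audio 49170 RTP/AVP 0\r\n"
    inspect_sip_payload(sip_invite)
    print(categorize_udp(49170))   # 'high' -- subsequent RTP on this port is treated as voice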

[0051] By way of further example, the packet prioritization/routing 58 implements stateful packet inspection (e.g., deep packet inspection - DPI). As disclosed herein, the stateful method of packet inspection is facilitated since a significant portion of it can be performed when the session starts, and if it can be marked as low priority on the first packet, then a state for the session can be set at low cost. For other more complicated types of traffic (e.g., a Facebook session that results in UDP traffic) that are to be marked with an associated priority, the packet prioritization/routing 58 implements a method to track such types of traffic (e.g., Facebook sessions) in parallel, which can involve multiple sessions due to the possibility of many protocols for a given type of traffic.

[0052] As an example, the packet prioritization/routing 58, operating at the kernel level, signals an event to a categorizer operating outside the kernel (e.g., a user-level application), which can run in the same or another processor. In some cases, a substantial amount of traffic (e.g., a plurality of packets or predetermined information extracted from packets from one or more sessions) can be sent to the user-level categorizer in real time to enable the categorizer to identify a session's priority according to established priority rules. In response to identifying the session's priority, the user-level categorizer notifies the kernel-level packet prioritization/routing to set its priority accordingly. Thus, in some examples, the categorization for a given session can implement a stateful process that is performed as a user-level process operating in parallel with and offloading kernel level functions to identify an application associated with a respective session and mark its priority accordingly. Thus, by offloading the categorization and/or deeper packet inspection from the OS kernel to such user-level application(s), stateful packet inspection is facilitated.

[0053] The categorization can be used within the operating system kernel, such as by adding metadata (e.g., marking the packet to specify a categorization or type) associated with each data packet, for placing the data packets from the IP stack into corresponding queues. A network interface identifier can also be added to the data packet as part of the metadata used in the operating system kernel to enable routing of each data packet to its assigned network 52. The metadata can remain with the packet in the queues 60 or it may be stripped off as the packets are placed into the appropriate queues. The packet prioritization/routing block 58 thus places the packets in the appropriate priority queue associated with the network to which the session has been assigned based on the prioritization of each respective session, which prioritization may be set by user-level functions, kernel level functions or a combination thereof.

[0054] As one example, the information in the queues 60 includes pointers to address locations in the IP stack to enable the network 52 to employ its drivers to access the queued outbound packets from the IP stack according to the prioritization queue into which each packet has been placed. In this way, the marking and categorization of each of the packets, which results in each packet being placed in a respective queue, is not implemented in the IP stack itself but only within the operating system kernel, which facilitates and enables the network to retrieve packets in the appropriate priority order. Thus, the network assignment control 56 can specify to which network interface a given packet is to be provided based upon its session assignment information. The packet prioritization/routing block 58 can in turn place the packet in the appropriate queue for the identified network interface according to the categorization implemented by the packet prioritization/routing block.

[0055] As mentioned, the corresponding link quality manager 50 is implemented in each of the egress control apparatus 12 and ingress control apparatus 14 such that the categorization, prioritization and routing of packets occurs in both egress and ingress directions with respect to the site. A network driver or other interface for each network 52 can retrieve data packets from its high-priority queue before packets from any lower priority queue. As a result, lower priority traffic is sent later and, depending on overall network capacity, may be dropped. The packets can be sent out via the network 52 to the other ingress or egress control apparatus, as disclosed herein.

[0056] FIG. 3 depicts an example of session network assignment control 56 such as disclosed with respect to FIG. 2. Thus, the session network assignment control 56 can be implemented within the egress control apparatus 12 as well as the ingress control apparatus 14. As disclosed herein, the session network assignment control 56 implements initial session assignment and subsequent reassignment of sessions to available networks. As mentioned, the functionality of the session network assignment control 56 can be implemented in the operating system kernel space.

[0057] The session network assignment control 56 includes a packet evaluator 70 to inspect predetermined information in respective data packets to determine to which existing session the packet belongs or whether the packet corresponds to a new session. For example, the packet evaluator 70 can define each session from IP header data as a session tuple, including a source IP address, a source port, a destination IP address, a destination port and a network protocol. The session network assignment control 56 can utilize the packet information (e.g., the session tuple) to determine if the outbound packet matches an existing session. For example, the packet evaluator 70 can compare the session tuple for each outbound data packet with stored session data 72 to determine whether or not an existing session exists for each respective outbound packet. If no session exists, a session generator 74 can generate a new session based upon the determined session information mentioned above. The session generator 74 can store the session tuple for each new session in the session data 72. For example, the session generator 74 can store the session data 72 in a data structure, such as a table, a list or the like, to indicate a current state for each existing session. A session terminator 75 can be provided to close an open session. For example, in response to terminating a session, such as in response to a command to close a session or the session timing out, the session terminator can remove session information or otherwise modify the session table to indicate the session is no longer open.

[0058] As disclosed herein, each of the outbound packets is assigned to a respective network connection (e.g., communication link) that is determined for each session. The assigned network can be stored in network assignment data 76. In some examples, the network assignment data 76 can be stored as part of the session table in the session data 72. In other examples, the network assignment information for a given session can be stored separately. For instance, the session information operates as an index to look up the network assignment for each session. To control network assignment for each session, the control 56 can include a session link assignment function 78.

[0059] The session link assignment function 78 includes an initial assignment block 79 programmed to control initial session assignment and a session reassignment block 81 to control reassignment of each respective session that has already been opened. The initial assignment block 79 can implement various functions and methods according to the particular networks that might exist as well as the number of networks available for sending outbound traffic. As one example, the initial assignment block 79 employs a simple round-robin algorithm to assign each session to a respective network in a rotating order. Each session can be assigned in a listed order of available networks and, upon reaching the end of the list, the session link assignment can begin again at the beginning.
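
By way of illustration only, the following Python sketch (with hypothetical network names) assigns each new session to the next network in a repeating list, as in the round-robin approach described above.

    import itertools

    # Round-robin initial assignment: each new session takes the next network in turn.
    networks = ["network_a", "network_b", "network_c"]
    next_network = itertools.cycle(networks)
    session_assignments = {}

    def assign_session(session_id):
        session_assignments[session_id] = next(next_network)
        return session_assignments[session_id]

    for s in ("s1", "s2", "s3", "s4"):
        print(s, "->", assign_session(s))   # s4 wraps back around to network_a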

[0060] In other examples, the assignment functions 79, 81 can employ a different selection algorithm for sessions that have been categorized as high-priority sessions (e.g., high-priority, time-sensitive traffic) compared to lower priority sessions or sessions having no priority. As an example, if the initial assignment function 79 is assigning a priority session, the assignment function can evaluate a plurality of available links for a suitable link, such as a given link that has the best track record or current score meeting the required quality level for this session category. The categorization of the session can be determined dynamically for each packet (e.g., by packet evaluator 70) or for an existing session, defined by session data 72 (e.g., as determined by packet evaluator 70). To implement such selective assignment for different session categories, the initial assignment function 79 thus can utilize network analysis 80 to ascertain network characteristics and use those characteristics in the assignment of each session. For the example of UDP media sessions, network analysis can determine quality based on measurements of latency, jitter, and/or loss of data packets. For the example of a TCP session, quality can be measured by observing session latency, throughput and/or packet re-transmissions from one end of the session.

[0061] For example, the network analysis 80 can include a capacity calculator 82 to compute an indication of capacity (e.g., in terms of bandwidth, such as bytes per second, or a normalized index value) for each respective network. The network analysis 80 can also include a load evaluator 84 to evaluate and determine an indication of the network load that is being sent over each available network. The network analysis 80 can utilize the determined capacity and load of each respective network to statically or dynamically estimate network capacity for each network. The estimated network capacity can be utilized by the initial assignment function 79 of the session link assignment 78 to assign a given media session to a corresponding network, such that each new session is assigned to an available network with a larger capacity (e.g., a capacity meeting a threshold capacity for the respective session category being assigned).

[0062] By way of further example, the initial assignment function 79 can assign new sessions to networks based on a static performance ratio of each network. For instance, each service provider oftentimes specifies a maximum bandwidth for a given user's connection. This may be specified in a contract (service level agreement) or published online or elsewhere. The maximum available bandwidth thus can be provided as input data to capacity calculator 82 of network analysis 80, to compute a corresponding static ratio of relative performance for each of the available network connections. The network analysis 80 can compute respective static performance ratios for each network according to its fractional part of the aggregate bandwidth. As an example, assume that network A has 10 Mbps rated performance and network B has 3 Mbps rated performance. In this case, the session link assignment 78 would choose network A 10/13ths of the time and network B 3/13ths of the time for new session assignments. Such static ratios can be computed and utilized for session assignments in each of the ingress and egress control apparatuses for sending traffic for the given session.
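
The following Python sketch illustrates the static-ratio example above (network A rated at 10 Mbps and network B at 3 Mbps, so new sessions are assigned to A roughly 10/13ths of the time and to B 3/13ths of the time); the names and the use of weighted random selection are illustrative assumptions.

    import random

    # Static performance ratio: weight each network by its fraction of the aggregate
    # rated bandwidth (10/13 for network A, 3/13 for network B in this example).
    rated_bandwidth_mbps = {"network_a": 10, "network_b": 3}

    def assign_by_static_ratio():
        networks = list(rated_bandwidth_mbps)
        weights = [rated_bandwidth_mbps[n] for n in networks]
        return random.choices(networks, weights=weights, k=1)[0]

    counts = {"network_a": 0, "network_b": 0}
    for _ in range(13000):
        counts[assign_by_static_ratio()] += 1
    print(counts)   # roughly 10000 assignments to network_a and 3000 to network_b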

[0063] As another example, the capacity calculator 82 can determine a dynamic capacity estimate for each of the network connections. The dynamic capacity estimate provides an indication of available capacity for each network corresponding to unused bandwidth. For instance, the capacity calculator is programmed to compute an estimate of capacity by measuring the rate at which data (e.g., bytes per second) has recently been transmitted over a given network during a time period, and subtracting that rate from the statically provided bandwidth (e.g., as specified by the service provider). The session link assignment 78 thus can compare the estimated capacity and assign each new session to the network having the most available bytes per second.
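
For illustration only, the following Python sketch computes the unused-bandwidth estimate described above by subtracting a recently observed transmit rate from each link's statically provided bandwidth and assigning a new session to the link with the most headroom; the numbers are hypothetical.

    # Dynamic capacity estimate: statically provided bandwidth minus the recently
    # measured transmit rate. Values are illustrative.
    rated_bps = {"network_a": 10_000_000, "network_b": 3_000_000}
    recent_tx_bps = {"network_a": 9_500_000, "network_b": 1_000_000}

    def estimated_headroom(network):
        return max(rated_bps[network] - recent_tx_bps[network], 0)

    def assign_new_session():
        return max(rated_bps, key=estimated_headroom)

    print({n: estimated_headroom(n) for n in rated_bps})   # a: 500000, b: 2000000
    print(assign_new_session())                            # network_b has the most unused capacity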

[0064] The parameters used to determine network capacity can be fixed or they can be variable. Thus, the set capacity for a given network link (e.g., one of multiple network links available for egress traffic) can be decreased, such as in response to detecting that the dynamic capacity is insufficient to meet current demands for the given network link or if a session is assigned (or reassigned) to the given network. In other cases, the set capacity can periodically be increased, such as in response to the capacity not having been decreased within a time window. In this way, the capacity calculator 82 can more effectively identify times of increased or reduced capacity for each network, which enables the session link assignment 78 to effectively and efficiently assign sessions to the available network connections.

[0065] The network analysis function 80 can also include a failure detector 86 to detect whether one or more networks have experienced a failure, which may be temporary or permanent. If the failure detector 86 detects that a given network has failed, it can be marked as down such that the initial assignment function 79 assigns no new sessions to the down network. The computations used by the network analysis 80, such as the capacity and load calculators mentioned above, can also be adjusted to reflect the down network. As an example, the failure detector 86 can ascertain if a network is down by periodically sending a ping request to a well-known host (e.g., google.com or other service) via each network connection. If there is no response when the request is sent over a given network, the given network can be marked as down. This can be repeated by the failure detector 86 at a desired testing interval or at another programmable time period. Once the testing is successful, the status of the given network can be changed from down back to an operational status. The network status thus can be used to enable the link quality manager of the respective ingress or egress control apparatus to send outbound traffic via the given network that has been assigned for each session.
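
As a simplified illustration (a TCP connection to a well-known host stands in for the ping described above, and the addresses and interval are assumptions), the following Python sketch periodically tests reachability via each connection and marks a connection down when no response is received.

    import socket
    import time

    # Reachability check per network connection; the local source address selects the
    # network interface used for the test. Addresses and interval are illustrative.
    connections = {"network_a": "192.0.2.10", "network_b": "198.51.100.20"}
    status = {name: "up" for name in connections}

    def check_connection(name, source_ip, host="www.google.com", port=443, timeout=2.0):
        try:
            sock = socket.create_connection((host, port), timeout=timeout,
                                            source_address=(source_ip, 0))
            sock.close()
            status[name] = "up"        # testing succeeded; restore operational status
        except OSError:
            status[name] = "down"      # no response; mark the network as down

    def monitor(interval_seconds=10):
        # Repeat the test at the desired testing interval (not invoked in this sketch).
        while True:
            for name, source_ip in connections.items():
                check_connection(name, source_ip)
            time.sleep(interval_seconds)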

[0066] In addition to the initial or original assignment of a session to a given network (e.g., implemented by initial assignment function 79), the session reassignment block 81 is programmed to reassign a session from a currently assigned network to another network based upon the network analysis function 80 applied with respect to an open session. Since a communication system implementing the bi-directional traffic control disclosed herein includes an apparatus at the site as well as in the cloud (e.g., a last mile connection or other remote location), systems and methods disclosed herein have the ability to determine and understand a measure of network performance in both directions for each session. Thus, the network analysis function 80 may cooperate with information that is received from the remote apparatus. For instance, the network analysis function 80 can monitor traffic that is sent out from its location via a given network, as mentioned with respect to the initial session assignment. The network analysis 80 can perform passive measurements, active measurements or a combination of passive and active measurements for each of the available networks. As used herein, active measurements involve creating additional traffic that is sent in the outbound traffic via one or more of the networks for the purpose of making such measurements, whereas passive measurements evaluate measurements made on existing traffic being sent out of the egress or ingress control apparatus that is implementing the session network assignment control block 56. Examples of some types of measurements that can be utilized by the network analysis function 80 to determine whether network link connection reassignment is necessary for high priority or time-sensitive data sessions can include network failure, local path sojourn time and jitter. In one example, the network analysis 80 can perform an active measurement of network capacity by sending test data of a predetermined size (e.g., one MB) to its associated control apparatus and determining the travel time for the test data to arrive. The size of the test data can be divided by the travel time to determine capacity.

[0067] By way of further example, the failure detector (or another function) 86 can be programmed to send a ping from its egress or ingress control apparatus to a predetermined recipient. For instance, the predetermined recipient for an egress or ingress control apparatus can be the associated ingress or egress control apparatus. The ping can be a simple request for an acknowledgement response, for example, that the sender uses to ascertain whether or not a given network connection is up or down. Where a given egress or ingress control apparatus includes multiple connections, the ping can be provided via each connection periodically. As one example, to ensure connections are maintained for interactive and/or real-time media traffic, such as voice and/or video conferencing, the ping can be sent via each network connection at an interval. The ping interval can be set to a default level or be user programmable to a desired time (e.g., about every 300 milliseconds, every second or at another interval). Since the ping requires a response from a recipient (e.g., ingress or egress control apparatus), it corresponds to an example of an active measurement.

[0068] As another example, the session network assignment control 56 can include a path sojourn time calculator 88 to measure queue sojourn time. The queue sojourn time is an example of a passive measurement that can be used for session reassignment. The path sojourn time calculator 88 can measure the time that it takes for a given outbound packet to travel along the path (or at least a portion of the path) through the link quality manager (e.g., link quality manager 50) to the network (e.g., network 52). As one example, the path sojourn time calculator 88 can include a clock and determine the sojourn time measurement as the difference in clock values from when a given packet is input into a respective queue until when the given packet is output from the respective queue. In some examples, the path sojourn time calculator 88 can measure the sojourn time with respect to packets that pass through the high-priority queue for each network. In other examples, the path sojourn time calculator 88 can measure the sojourn time with respect to packets that pass through the lower priority queues. The network analysis function 80 can be programmed to determine the quality of the media traffic being measured from the measured sojourn times for data traffic sent through the respective queues.
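
By way of illustration only, the following Python sketch records a clock value when a packet enters a queue and computes the sojourn time as the difference when the packet is removed, consistent with the passive measurement described above; the names are hypothetical.

    import time
    from collections import deque

    # Queue sojourn time: timestamp at input, elapsed time at output.
    queue = deque()

    def enqueue(packet):
        queue.append((packet, time.monotonic()))   # clock value captured when the packet is queued

    def dequeue():
        packet, enqueued_at = queue.popleft()
        sojourn_time = time.monotonic() - enqueued_at   # time spent waiting in the queue
        return packet, sojourn_time

    enqueue("voice frame")
    time.sleep(0.01)                         # stand-in for queuing delay
    packet, sojourn = dequeue()
    print(f"{packet}: sojourn {sojourn * 1000:.1f} ms")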

[0069] For example, the network analysis function 80 can compare the sojourn time with respect to one or more thresholds. The sojourn time threshold can be set as a function of the bandwidth of the particular network link to which the queue is coupled to output data packets. So long as the network analysis function 80 determines that the sojourn time is sufficiently short (e.g., less than a predetermined threshold), then it indicates that the high-priority traffic may have good quality. That is, a short sojourn time means a time that is only somewhat longer than the packet transmission time. A sojourn time threshold can be set based on expected link speeds, which can be determined by the capacity calculator 82. For instance, when congestion occurs in the path between the ingress and egress apparatuses, the rate at which packets are sent out via a given network will slow down, resulting in a corresponding increase in sojourn time. Thus, the network analysis 80 can monitor the progress of data packets through the queues and determine whether to increase or decrease the load for each network link.

[0070] Traffic may be bursty (e.g., exhibiting intermittent times of increased data traffic), so sojourn time may need to be measured for multiple packets over a several second time period (e.g., a moving measurement time window). For example, assuming a sojourn time threshold of about 100 ms, an outlier time of about 200 ms during the measurement time window exceeds the threshold and thus can indicate poor quality to trigger the session link assignment function 78 to reassign the session to another network link. To mitigate the frequency of session reassignment, the session link assignment function 78 can be programmed to require multiple outliers during a prescribed time period. For instance, the session link assignment function 78 can be programmed to reassign a given session if Q packets (e.g., where Q is a positive integer) exceed the sojourn time threshold during a prescribed time period (e.g., about 1 second).
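
For illustration only, the following Python sketch applies the outlier rule described above: a session is flagged for reassignment only when Q sojourn-time samples exceed the threshold within the measurement window. The threshold, Q, and window length are assumed values.

    import time
    from collections import deque

    SOJOURN_THRESHOLD_S = 0.100   # e.g., about 100 ms (assumed)
    WINDOW_S = 1.0                # prescribed time period (assumed)
    Q = 3                         # required number of outliers in the window (assumed)

    outlier_times = deque()

    def record_sojourn(sojourn_s, now=None):
        """Return True when the session should be reassigned to another network."""
        now = time.monotonic() if now is None else now
        if sojourn_s > SOJOURN_THRESHOLD_S:
            outlier_times.append(now)
        while outlier_times and now - outlier_times[0] > WINDOW_S:
            outlier_times.popleft()          # drop outliers that fell out of the window
        return len(outlier_times) >= Q

    samples = [0.02, 0.21, 0.05, 0.18, 0.25]   # seconds; bursty traffic with three outliers
    print(any(record_sojourn(s, now=i * 0.1) for i, s in enumerate(samples)))   # True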

[0071] As another example, the session network assignment control 56 can include a jitter calculator 90 to quantify jitter for each of the plurality of network connections. The jitter calculator can measure far end jitter and/or near end jitter. Jitter refers to a variation in the delay of received packets, such as can be determined when a sender (e.g., the source that sends media data from one of the egress or ingress control apparatuses) transmits a steady stream of packets to the recipient (e.g., the other of the ingress or egress control apparatuses). The jitter calculator 90 can calculate jitter continuously as each data packet is received from its source via one of the network connections.

[0072] For example, the jitter calculator 90 can compute jitter as an average of the deviation from the network mean packet latency. As a further example, the jitter calculation performed by jitter calculator 90 can be implemented according to the approach disclosed in the real-time transport control protocol (RTCP). For instance, jitter calculator 90 can compute an inter-arrival jitter (at the recipient apparatus) to be the mean deviation (e.g., smoothed absolute value) of the difference in packet spacing at the receiver compared to the sender for a pair of packets. Other forms of jitter calculations may be used. An active jitter measurement can be implemented by having the far end (e.g., recipient) compute jitter for each packet in a high-priority, time critical session. The recipient can transmit an indication of the computed jitter back to the sender. Alternatively, the timing data used to determine jitter itself can be sent back to the sender, which can be programmed to compute the corresponding jitter. In other examples, the packet sent from the egress control apparatus can be sent to the ingress control apparatus and returned to the egress apparatus to compute an indication of jitter.
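
As a non-limiting illustration of the inter-arrival jitter estimate described above, the following Python sketch applies the smoothed mean-deviation update commonly associated with RTCP (RFC 3550); the timestamps are hypothetical.

    # Inter-arrival jitter: running mean deviation of the difference in packet spacing
    # at the receiver compared to the sender, smoothed with a 1/16 gain.
    def update_jitter(jitter, send_prev, recv_prev, send_curr, recv_curr):
        d = (recv_curr - recv_prev) - (send_curr - send_prev)   # spacing difference for a packet pair
        return jitter + (abs(d) - jitter) / 16.0

    jitter = 0.0
    packets = [   # (sender timestamp, receiver timestamp) in seconds; illustrative values
        (0.00, 0.050), (0.02, 0.072), (0.04, 0.091), (0.06, 0.115),
    ]
    for prev, curr in zip(packets, packets[1:]):
        jitter = update_jitter(jitter, prev[0], prev[1], curr[0], curr[1])
    print(f"estimated jitter: {jitter * 1000:.2f} ms")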

[0073] In response to determining back at the sender that the computed far end jitter for a given session exceeds a predetermined jitter threshold, the session link assignment function 78 can reassign the given session to another network link. Additionally, for the example of RTP encoded data, the RTP packets have a sequence number. In response to one or more packets in the sequence being omitted from the received media, the recipient control apparatus can determine that there are missing packets and return a count indicating the number of missed packets to the sender, which count can be used by the network analysis function 80 to trigger reassignment if the number of dropped packets within a time interval exceeds a threshold number.

[0074] The computed jitter can provide both a quality measure and a network down indicator. If no jitter measurement packets are received via a given network, for example, the failure detector 86 can determine that the given network is down. When a network link is determined to be down (e.g., by failure detector 86), all sessions currently assigned to such link (e.g., as stored in network assignment data 76) are reassigned to one of the available networks according to session assignment methods disclosed herein.

[0075] Additionally or alternatively, the jitter calculator 90 at a given egress or ingress control apparatus can compute near end jitter on arriving traffic for a given session via each of the network connections. Similar computations can thus be performed at the recipient of the media traffic to compute the near end jitter. The network analysis can employ the near end jitter that is computed locally to determine whether jitter for a given network connection exceeds a prescribed threshold or whether the connection is down. In response, the session link assignment function 78 can reassign the session to a different available network for use in communicating media traffic for such session between ingress and egress control apparatuses. In situations where the properties analyzed by the network analysis 80 (e.g., capacity, load, failure, loss and/or jitter) relate to traffic received via a link that is not between the egress apparatus and ingress apparatus, additional network analysis can be performed to localize the problem associated with the network traffic, such as disclosed herein (see, e.g., FIG. 13). Thus, a notification can be sent to administrators within and/or external to the site to help triage the problems so that appropriate action can be taken to mitigate the issue.

[0076] In the example of FIG. 4, the OS kernel 100 implements the packet prioritization/routing function 58 to control prioritizing of outbound data packets 102 that reside in the IP stack 104. For example, the packets in the IP stack 104 are provided from local applications 106 that provide outbound data traffic to one or more remote endpoints (e.g., remote applications executing on respective processing devices). For example, the application 106 can be executed within a site where the packet prioritization/routing function 58 is implemented in the egress control apparatus or the application 106 can be implemented in another computing device remote from the site where the corresponding packet is received over a corresponding network, such as a wide area network (e.g., the internet). An input interface (not shown) can provide the outbound packets from the stack to the OS kernel 100 for processing by the packet prioritization/routing function 58. The packets 102 in the IP stack 104 thus are outbound packets to be sent via a corresponding network connection 108 and according to the prioritization of the packets implemented by the packet prioritization/routing function 58.

[0077] The packet prioritization/routing function 58 includes a packet evaluator 110, a packet categorizer 112 and a priority queuing control 114. Each of the prioritization/routing functions 110, 112, and 114 can be implemented as kernel level executable instructions in the OS kernel 100 to enable real-time processing and prioritization of the packets 102. The packet prioritization/routing function 58 also utilizes session network assignment data 116 such as can be determined by the session network assignment control 56 (FIG. 3). As mentioned, the session network assignment control 56 can specify a network interface for each session to which each one of the packets 102 will be sent. For example, the packet evaluator 110 evaluates each outbound packet 102 relative to the session network assignment data 116 to ascertain whether the outbound packet belongs to an existing session. If a packet does not belong to an existing session, a new session will be created and that session will be assigned to a given network interface, such as described with respect to FIG. 3. If only a single network interface exists (e.g., N=1), each session is assigned to the common network interface.

[0078] The packet evaluator 110 executes instructions (e.g., kernel level packet inspection) to evaluate certain packet information for each packet 102 in the stack 104, which information may be different for different types of packets and depending on the prioritization rules 118. The packet categorizer 112 uses the packet information from the packet evaluator 110 to categorize the packet according to the type of traffic to which the packet belongs. The packet evaluator 110 can evaluate IP headers for each of the outbound packets upon receipt via the corresponding input interface. As one example, the packet evaluator 110 can evaluate IP headers in the packet 102 to determine the protocol (e.g., TCP or UDP), and the determined type of protocol further can be utilized by the packet evaluator to trigger further packet evaluation (e.g., deeper inspection) that is specific for the determined type of protocol. For instance, in response to detecting a UDP packet, the packet evaluator 110 can further inspect contents of the packet to identify the port number, and the packet categorizer 112 can categorize the UDP packet with a particular packet categorization based upon its identified port number. In other examples, the packet categorizer can determine a category or classification to be utilized for a UDP packet based upon evaluation of the packet's DSCP value.

[0079] As another example, in response to the packet evaluator 110 detecting that the outbound packet is TCP data, the packet evaluator 110 can look at the payload to determine if it is web traffic and, if so, which particular application may have sent it or to which application it is being sent. For example, certain applications can be specified as high priority data in the corresponding prioritization rules 118. As mentioned, for example, the prioritization rules 118 can be programmed in response to a user input entered via a graphical user interface 120 (e.g., implemented as part of a control service). The prioritization rules 118 thus can be programmed in response to the user input, which rules can be translated to corresponding kernel level instructions executed by the packet prioritization/routing function 58 to control prioritized routing of each of the outbound packets. Based on the evaluation of each data packet, the packet categorizer 112 determines corresponding categorizations that are to be assigned to each of the packets to enable prioritized routing.

[0080] By way of example, the packet categorizer tags each of the packets, such as by adding priority metadata to each packet, specifying the categorization for each respective packet. The priority queuing control 114 thus can employ the priority metadata, describing one or more categorizations of the packet, to control into which of the plurality of queues 122 the outbound packet is placed to be sent over its assigned corresponding network. As an example, within the OS kernel 100, each data packet can be processed as kernel data consisting of pointers to actual packet data that may reside outside the kernel. The packet categorizer 112 can add kernel-level header information to the kernel data (pointer), corresponding to the metadata describing the classification of the respective data packet to enable further kernel processing.

[0081] As an example, assuming there are a plurality of networks (e.g., N greater than or equal to 2), the network interface 124 associated with each of the network connections 108 thus can be fed data packets from a plurality of queues 122, including one high priority queue and one or more other lower priority queues. The particular network on which the outbound packet is ultimately placed is determined based upon the session network assignment data 116 (e.g., determined by packet evaluator 110). For instance, the session network assignment data 116 can specify a network interface card (NIC) or other network ID used by the packet prioritization/routing block to route the data packet to the specified network. The network identifier can be added as part of the packet metadata to the packet information based on the packet evaluator 110 and used by the packet prioritization/routing function 58 to control the routing. Alternatively, each session identifier (e.g., session multi-tuple) can map directly to a network interface, which can be used by the packet prioritization/routing to route each packet to a selected network without adding metadata.

[0082] While the packet inspection and processing can be implemented in the OS kernel-level functions 110, as mentioned above, in other examples, such processing can be passed via an API to a user-level application (e.g., one of the applications 106 or another application - not shown), offloading the OS kernel, to categorize and/or determine a priority for the session. The user-level application may execute within the same processor as the OS kernel. In other examples, the application may be executed by a different processor, including one residing within the network interface 124.

[0083] In some cases, the queuing control can be implemented to address quality issues that can be determined in addition to or as an alternative to quality measures computed for an established session. For example, packet prioritization/routing 58 can examine latency, jitter, and loss on the packets arriving from the IP Stack 104, such as to enable packet prioritization/routing to identify and address quality issues before (or separately from) inspecting a given packet that is assigned to a given session. For the example of an egress apparatus, the quality issue may pertain to an issue within an enterprise site or site device. For the example of an ingress apparatus, the issue can relate to traffic flow in a WAN backbone or within a cloud data center. Thus, the measurements and evaluation of quality for each of the network connections 108 and corrective action, such as reassigning sessions to different links, can extend beyond (e.g., be broader than) the quality of traffic flowing between an established pair of egress and ingress control apparatuses (i.e., an egress/ingress pair).

[0084] The packet categorizer 112 employs the prioritization rules 118, which can be programmed in response to user input or can correspond to default rules, to categorize the type of traffic for each outbound data packet. For example, the packet categorizer 112 can add kernel-level metadata that specifies the type of traffic based on the packet evaluator 110. The priority queuing control 114 operates to send the outbound data packet to the appropriate one of the queues 122 for the network interface 124 that has been specified in response to the network assignment data 116. For instance, the queuing control 114 can utilize the classification header (e.g., kernel level metadata) for each network to place the packet data in the queue having the appropriate priority according to the categorization associated with each data packet. The network driver accesses the outbound data packet from the high priority queue for sending over its assigned network 108, and then sends lower priority data from the one or more lower priority queues over its assigned network. Since all outbound packets for a given session are sent over the same network connection, out-of-order packets can be mitigated.

[0085] The set of priority queues 122 associated with each respective network interface 124 can establish the same or different priority for queuing the outbound packets to each respective network connection. As disclosed herein, the categorization that specifies the type of packet can include any information utilized by the queuing control 114 sufficient to ascertain into which of the plurality of queues 122 the outbound packet is queued for sending over its corresponding network. In some examples, the packet prioritization/routing function 58 can place the data packet from the IP stack 104 into its respective queues 122 as prioritized based upon the categorization and session determined for each respective packet.

[0086] In other examples, each of the queues 122 can be populated with pointers (e.g., to physical memory addresses) to the data packet within the IP stack 104 to enable each NIC 124 to retrieve and send out each of the respective data packets from the IP stack based on the pointers stored into the queues identifying the priority of the outbound data packets. For example, the pointers can identify the headers, payload and other portions of each respective data packet to enable appropriate processing of each respective data packet by the NIC 124. As a further example, each NIC 124 can also employ corresponding network drivers to retrieve the data from the respective queues and to send the outbound packets over the corresponding network connections 108. The drivers can further be configured to first send out all data packets from the high priority queue prior to accessing data packets that are in the one or more lower priority queues. In this way, time-sensitive high priority packets will be sent over each network before low priority data packets are sent over each network.

[0087] In some examples, the categorization for certain high priority data packets can be inserted into the data packet itself (e.g., as metadata) to enable downstream analysis of network quality and/or capacity for a respective network connection. For example, since high priority packets may be moved from one network to another network in response to detecting insufficient capacity or performance, outbound high priority data packets can be tagged or marked to enable their identification as high-priority packets at the receiving egress or ingress control apparatus to which the packets are sent via each network connection 108. Such tagging or marking can enable further analysis thereof by a corresponding network analysis function (network analysis 80 of FIG. 3). In this way, the network connection for high priority data packets can be managed dynamically to help improve and maintain quality of service for time-sensitive network traffic that is transmitted between each egress/ingress pair (e.g., between control apparatuses 12 and 14 associated with a respective site). As disclosed herein, examples of high priority packets can include interactive voice, interactive video applications or other data traffic deemed by a user to be time-sensitive compared to other data traffic.

[0088] FIG. 5 depicts an example of another communication system 150 that includes an ingress control apparatus 152 and an egress control apparatus 154 associated with a given site. As mentioned, the given site may be an office, home, or business that supports one or more users or an individual user. In this example, the ingress and egress control apparatuses 152 and 154 are connected to each other via a corresponding network 156. The network 156 can correspond to a WAN, such as the public internet or other WAN that is at least partially outside control of the site. In the example of FIG. 5, the egress control apparatus 154 is located at a site having a plurality of network connections via corresponding network interface cards demonstrated as NIC_1 through NIC_N, where N is a positive integer greater than or equal to 2. The ingress control apparatus 152 controls ingress of data packets to the site and is connected to the egress control apparatus via corresponding network interface cards demonstrated as NIC_1 through NIC_P, where P is a positive integer greater than or equal to 2. In some examples, N and P are the same, or N and P may be different, such that each of the ingress and egress control apparatuses may have the same or a different number of network connections. Additionally or alternatively, each of the NICs can communicate via the same or different types of physical layers, depending upon the available network connections for each apparatus 152 and 154. In some of the following examples, the NICs 158 implemented at the egress control apparatus 154 may be referred to as site NICs and the NICs 160 implemented at the ingress control apparatus 152 may be referred to as cloud NICs.

[0089] Regardless of the implementation of the NICs 158 or 160, each of the egress NICs 158 is logically connected (e.g., via a corresponding IP address) with the ingress control apparatus 152 via one or more of the ingress NICs 160. Similarly, for outbound traffic from the ingress control apparatus 152 to the site, each of the NICs 160 is communicatively coupled to the egress apparatus via one or more of the NICs 158. The ingress apparatus 152 includes a link quality manager 162 and the egress control apparatus 154 also includes a link quality manager 164, each of which operates to control packet prioritization/routing of outbound traffic as disclosed herein.

[0090] The transmission of outbound packets from each of the ingress and egress control apparatuses 152, 154 can be facilitated by creating communication tunnels through the network 156. For example, a tunnel can be established from the egress control apparatus 154 via each of the N networks to the ingress control apparatus 152. Similarly, a tunnel can be established from the ingress control apparatus 152 via each of the P networks to the egress control apparatus 154. That is, the ingress and egress control apparatuses 152 and 154 operate as endpoints for each respective tunnel. As a further example, OS kernel code (e.g., corresponding to packet prioritization/routing and/or session network assignment control) can treat each tunnel as a respective interface 158, 160 via which packets for a given session can be communicated. Thus, the link quality managers 162, 164 can evaluate and mark packets within the operating system kernel to specify the type of the data traffic and a respective network interface. As a result, the categorization and prioritized routing can be performed efficiently based on the marking (e.g., kernel level metadata) at each of the respective apparatuses 152 and 154. As mentioned, in some examples, the processing of the data packets to determine categorization and/or priority thereof can be executed by a user-level application operating in parallel with, and offloading work from, the operating system kernel.
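As a rough, Linux-oriented sketch (not the claimed kernel implementation), one way a user-level process could pin a session's outbound traffic to a particular tunnel interface is to bind its socket to that interface. The interface name `tun0` is hypothetical, and the SO_BINDTODEVICE option is Linux-specific and typically requires elevated privileges.

```python
import socket

def bind_session_to_tunnel(interface_name: str) -> socket.socket:
    """Create a UDP socket whose outbound packets leave via the named
    tunnel interface (e.g., one tunnel per network connection)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # SO_BINDTODEVICE is Linux-specific and usually needs elevated privileges.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BINDTODEVICE,
                    interface_name.encode() + b"\0")
    return sock

# Usage: assign a new session to the tunnel chosen by the link quality
# manager, so all packets for that session traverse the same connection.
session_sock = bind_session_to_tunnel("tun0")  # hypothetical tunnel name
session_sock.sendto(b"payload", ("198.51.100.7", 1194))  # example endpoint
```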

[0091] As an example, the OpenVPN protocol acts as a wrapper to encapsulate a communications channel using various other network protocols (e.g., OpenVPN uses TCP or UDP) for communicating data packets between the ingress and egress control apparatuses 152, 154. The tunnel thus provides a virtual point-to-point link layer connection between the ingress and egress control apparatuses 152, 154. In some examples, the tunnels can be implemented as secure (e.g., OpenVPN or IPsec) tunnels to provide for encrypted communication of the data packets. In other examples, the tunnels can communicate data without encryption and, in some examples, the communicating applications can implement encryption for the packets that are communicated via the tunnels. As yet another example, encryption can be selectively activated and deactivated across respective tunnels in response to a user input. In any case, the performance of the traffic communicated via a tunnel depends on the network link(s) between the tunnel endpoints.

[0092] By way of further example, a tunnel can be created for outbound traffic from each of the site's NICs 158 (1 - N) to a corresponding one of the ingress NICs 160 (1 - P). Similarly, for outbound traffic from the ingress control apparatus 152, each NIC can be communicatively coupled via the network 156 through a tunnel created from each respective NIC 160 to a corresponding NIC 158. As mentioned, since N and P are not necessarily the same, it is possible that outbound traffic from multiple NICs at one of the site or cloud can be received at an endpoint corresponding to a common NIC at the other cloud or site. Additionally, each path through the network 156 remains under control of one or more service providers that implement the network 156, which further can involve network peering (e.g., at peering points) to enable inter-network routing among such service providers. From the perspective of each ingress and egress control apparatus 152, 154, however, a logical tunnel is established for each network connection to facilitate the transport of the outbound data packets. Thus, other than using a given NIC for sending/receiving data packets, the actual data path for packets through the network 156 is outside of the control of each ingress and egress control apparatus 152, 154.

[0093] FIGS. 6 and 7 illustrate examples of tunneling that can be implemented between the ingress control apparatus 152 and egress control apparatus 154 of FIG. 5. In the example of FIGS. 6 and 7, it is presumed that the ingress control apparatus implements NICs to access networks maintained by a plurality of service providers (e.g., ISPs), demonstrated as SP_A, SP_D and SP_B. The egress control apparatus 154 implements NICs to access another set of networks demonstrated as SP_1 and SP_3. The combination of networks SP_A, SP_D, SP_B, SP_1 and SP_3 collectively defines at least a portion of the network 156 of FIG. 5 (the portion exposed directly to each of the ingress and egress control apparatuses 152, 154). In these examples, various different connections can exist between respective service provider networks as demonstrated herein, such as can vary according to network peering. Thus, depending upon the network connections, data can travel over various paths from the ingress control apparatus to the egress control apparatus as well as from the egress control apparatus to the ingress control apparatus.

[0094] As demonstrated in the example of FIG. 7, for the example of three network connections at the ingress control apparatus 152 and two network connections at the egress control apparatus 154, there exist numerous combinations of possible paths between each of the respective service providers (i.e., between each of SP_A, SP_D and SP_B and each of SP_1 and SP_3) to route data traffic communicated between the ingress and egress control apparatuses. While each ingress and egress control apparatus 152, 154 can determine to which network each outbound packet is sent, according to the network assignment methods disclosed herein, the egress and ingress control apparatuses cannot control the paths between service provider networks. For example, one or more additional networks (not shown) could exist between any of the service provider networks SP_A, SP_D, SP_B, SP_1 and SP_3 illustrated in FIGS. 6 and 7, which can add one or more layers of unknown routing paths for data communicated between the respective control apparatuses 152, 154. The particular routing paths through both known and unknown networks collectively affect the quality of service for each data packet that is communicated.

[0095] Referring back to FIG. 5, the system 150 includes quality management services 170 that can include global analytics 172. Global analytics 172 can include one or more services programmed to perform network analysis for data packets transmitted between each pair of ingress and egress control apparatuses 152, 154. For instance, the analytics 172 can be utilized to compute quality of service with respect to data traffic communicated between the ingress and egress control apparatuses 152, 154. As a result, the global analytics can determine which network connection can afford improved network link quality for different types or categorizations of data packets. The network analytics 172 can be similar to the network analysis 80 disclosed with respect to FIG. 3. However, in addition to performing such analytics with respect to high priority traffic sent over any of the network connections between a single set of ingress and egress control apparatuses 152 and 154, the analytics 172 can perform such analysis globally based on traffic communicated across a plurality of different sites, each of which includes at least one ingress-egress control apparatus pair. The global analytics can also perform such analytics on other parts of the network 156, such as the WAN backbone, which can affect traffic quality between the ingress and egress apparatuses 152 and 154.

[0096] Based on the global analytics 172 operating on a global scale, the quality management services 170 can ascertain actual metrics regarding network speed spanning multiple different service providers, thereby enabling more intelligent usage of network bandwidth for a given network site depending on the particular service provider networks that are implicated for traffic sent through the network 156. For example, the analytics 172 can compute global network metrics for each of the respective service providers. The metrics can be provided to the respective link quality managers 162 and 164 of each ingress and egress control apparatus, and can be utilized to enable intelligent network assignment of high priority traffic sessions to those NICs providing network connections determined in advance (e.g., by analytics 172) to provide improved network quality and speed. As mentioned, the aggregate network quality data determined by the global analytics, whether determined for a single site having a plurality of network connections or more globally for a plurality of sites, affords significant advantages since such information is not available to individual service providers. This is generally because different network providers do not tend to share actual network quality and speed information with their competitors.
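A minimal sketch of how per-connection quality samples collected from many sites might be rolled up into per-service-provider metrics follows; the record format, field names and function name are assumptions for illustration, not the claimed analytics.

```python
from collections import defaultdict
from statistics import mean

def aggregate_provider_metrics(samples):
    """samples: iterable of dicts like
    {"site": "site-12", "provider": "SP_A", "latency_ms": 31.5, "loss": 0.002}.
    Returns per-provider aggregates usable for network assignment decisions."""
    by_provider = defaultdict(list)
    for s in samples:
        by_provider[s["provider"]].append(s)
    metrics = {}
    for provider, rows in by_provider.items():
        metrics[provider] = {
            "sites": len({r["site"] for r in rows}),
            "avg_latency_ms": mean(r["latency_ms"] for r in rows),
            "avg_loss": mean(r["loss"] for r in rows),
        }
    return metrics

# Usage: the resulting per-provider view could be pushed to link quality
# managers to bias session assignment toward better-performing providers.
print(aggregate_provider_metrics([
    {"site": "a", "provider": "SP_A", "latency_ms": 30.0, "loss": 0.001},
    {"site": "b", "provider": "SP_A", "latency_ms": 42.0, "loss": 0.004},
    {"site": "a", "provider": "SP_B", "latency_ms": 25.0, "loss": 0.000},
]))
```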

[0097] In addition to creating tunnels for each of the outbound network connections for each ingress and egress control apparatus 152, 154, a separate tunnel can be created as a control channel between the respective control apparatuses, such as a connection between a selected pair of NICs 158 and 160. The control channel can be utilized to send information to facilitate dynamic reassignment and prioritization of outbound data packets for each respective ingress and egress control apparatus 152, 154. In some examples, the control channel (e.g., implemented as a tunnel between the respective egress and ingress control apparatuses associated with a given site) can be an ultra-high priority channel that takes precedence over other data traffic including, in some examples, the high-priority time-sensitive data that is provided to the high priority queues. For instance, a control channel queue could be implemented (e.g., as one of the queues 122 of FIG. 4) as the highest priority type of queue. Making the control channel the highest priority facilitates the determination and dynamic (e.g., real-time) reassignment of sessions to different network connections based on the shared metrics relating to network performance. As a result, the available performance, speed and bandwidth provided by the network connections available at each ingress and egress control apparatus 152, 154 can be dynamically utilized more effectively and efficiently to optimize quality of service for higher priority, time-sensitive data traffic.

[0098] FIG. 8 is a block diagram illustrating an example of a communication system 200 that includes multiple egress/ingress pairs that provide multiple stages of bidirectional traffic control between a site 202 and a cloud data center 204. The site 202 includes an egress control apparatus 206, which implements a link quality manager for controlling egress of data traffic with respect to the site, as disclosed herein. As mentioned, the site 202 can correspond to an enterprise, such as a business, office or home, or to an individual device (e.g., a smart phone). The egress control apparatus 206 is connected with an ingress/egress control apparatus 210 via one or more network connections. The ingress/egress control apparatus 210 can be located apart from the site 202, such as in a "last mile" connection or within the WAN backbone. From the perspective of the site 202, the ingress/egress control apparatus 210 includes a link quality manager 212 to control ingress of data traffic to the site. Thus, the egress control apparatus 206 and the ingress/egress control apparatus 210 define an egress/ingress pair that provides bidirectional control of traffic therebetween. Various examples of session assignment, session reassignment and prioritization and routing that can be implemented by the egress control apparatus 206 and the ingress/egress control apparatus 210 are disclosed herein (see, e.g., FIGS. 1-7 and the associated descriptions).

[0099] The ingress/egress control apparatus 210 is coupled to the cloud data center 204 via one or more network connections. In the example of FIG. 8, the cloud data center includes an ingress control apparatus 214. The ingress control apparatus 214 may reside in the WAN backbone, within the cloud data center or at another location near the data center, for example. The ingress control apparatus 214 at the data center 204 includes a link quality manager 216 to control ingress of data traffic to the ingress/egress control apparatus 210, and the ingress/egress control apparatus 210 further is configured to control egress of traffic from the ingress/egress control apparatus 210 to the cloud data center via the network connection(s) therebetween. That is, the ingress/egress control apparatus 210 operates as a site apparatus to control egress of data packets from the ingress/egress control apparatus 210. Thus, the ingress/egress control apparatus 210 and the ingress control apparatus 214 define another egress/ingress pair that provides bidirectional control of traffic therebetween. Similar to the egress/ingress pair 206, 210, the egress/ingress pair 210, 214 controls bidirectional traffic, including any of the examples of session assignment, session reassignment and prioritization and routing disclosed herein (see, e.g., FIGS. 1-7 and the associated descriptions). While the example of FIG. 8 demonstrates two egress/ingress pairs for traffic control between the site 202 and the data center 204, there can be any number of two or more such egress/ingress pairs in the traffic path.

[00100] By way of example, one or more applications running within the site 202 can subscribe to and implement one or more services 218 provided by the cloud data center 204. As an example, the services 218 implemented in the cloud data center 204 can be considered sufficiently high-priority and time-sensitive in nature as to be afforded priority over many other categories of data. Thus, the link quality managers 208, 212 and 216 at each stage of the traffic path between the site application and the cloud service 218 can be programmed to prioritize packets communicated to and from the cloud service 218. Each egress/ingress pair can also prioritize other time-sensitive, high-priority packets over lower priority traffic or traffic having no priority, as disclosed herein.

[00101] As a further example, where multiple network connections exist between respective egress/ingress pairs, tunneling can be utilized to provide each respective connection, as disclosed with respect to FIGS. 5-7. Since multiple tunnels exist between the site and the cloud data center (e.g., one set between the egress control apparatus 206 and the ingress/egress control apparatus 210 and another set between the ingress/egress control apparatus 210 and the ingress control apparatus 214), the number of different combinations of potential tunnel paths increases exponentially. Each tunnel can correspond to a respective logical network interface used by kernel level functions for routing each data packet to an assigned tunnel. As a result, further efficiencies can be achieved by selecting various combinations of tunnels across the egress/ingress pairs for each respective session. Each tunnel thus can be independently assigned and reassigned for routing data packets for a given session according to capacity and quality measures determined for each respective tunnel, as disclosed herein.

[00102] As another example of multiple egress/ingress pairs, FIG. 9 is a block diagram illustrating an example of an enterprise communication system 220. The enterprise system includes multiple egress/ingress pairs connected between different sites 222 and 224 of the enterprise, demonstrated as enterprise site A and enterprise site B. Each site 222, 224 can be part of the enterprise system 220, such as corresponding to an office, a home, or an individual device (e.g., a smart phone). While two such sites 222 and 224 are illustrated in the example of FIG. 9, there can be any number of two or more sites that collectively form the enterprise system (or at least a portion thereof). The sites can be distributed across a geographic region, which may include multiple states or even different countries. Each site 222, 224 can utilize an egress/ingress pair to control bidirectional traffic with respect to the respective site, as disclosed herein. There can be additional egress/ingress pairs to control traffic at other parts of a path, such as to a data center as in the example of FIG. 8.

[00103] In the example of FIG. 9, the site 222 includes a site apparatus 228 that implements a link quality manager 232 for controlling egress of data traffic with respect to the site 222. The site apparatus 228 is connected with a cloud apparatus 230 via one or more network connections (e.g., wired and/or wireless). The site apparatus 228 and cloud apparatus 230 thus define an egress/ingress pair to control bidirectional traffic, such as according to any combination of the examples of session assignment, session reassignment and prioritization and routing disclosed herein (see, e.g., FIGS. 1-7 and the associated descriptions). The cloud apparatus 230 thus can be connected to or implemented within the cloud to send and receive traffic via the network 226 on behalf of the site 222. The cloud apparatus 230 can be located apart from the site 222, such as in a "last mile" connection or within a WAN backbone of an associated network 226.

[00104] The other site 224 is similarly configured to operate in the enterprise system 220. The site 224 includes a site apparatus 236 that implements a link quality manager 240 for controlling egress of data traffic with respect to the site 224. The site apparatus 236 is connected with an associated cloud apparatus 238 via one or more network connections (e.g., wired and/or wireless). The site apparatus 236 and cloud apparatus 238 define another egress/ingress pair to control bidirectional traffic with respect to the site 224. As mentioned, each of the site apparatus 236 and the cloud apparatus 238 can control sending out data packets to the other over their available network connections according to any combination of the examples of session assignment, session reassignment and prioritization and routing disclosed herein (see, e.g., FIGS. 1-7 and the associated descriptions). The cloud apparatus 238 can be located apart from the site 224, such as in a "last mile" connection or within a WAN backbone of an associated network 226, to send and receive traffic via the network 226 on behalf of the site 224.

[00105] For the example of inter-site communications between sites 222 and 224, such communication can thus involve traffic flowing from one egress/ingress pair to the other egress/ingress pair. In some examples, the bidirectional control between the site and cloud apparatuses can be managed as disclosed herein. For communication over the connections between the site apparatus 228 and the cloud apparatus 230, the cloud apparatus 230 operates as an ingress control apparatus to control traffic sent to the site 222. At the other site, for communication over the connections between the site apparatus 236 and the cloud apparatus 238, the cloud apparatus 238 operates as an ingress control apparatus to control ingress traffic being sent to the site 224 and the site apparatus 236 controls egress traffic being sent from the site 224.

[00106] By implementing egress/ingress pairs for each site operating in the enterprise system 220, inter-site communication of data traffic can be maintained at a high level of quality. That is, the benefits resulting from session assignment, session reassignment and prioritization and routing disclosed herein can be duplicated across multiple connections to increase overall quality of service. Additionally, where multiple network connections exist between respective egress/ingress pairs (between site apparatus 228 and cloud apparatus 230 and between site apparatus 236 and cloud apparatus 238), tunneling can be utilized to provide a selected connection for each session, such as disclosed with respect to FIGS. 5-7. Since multiple tunnels exist between each site apparatus and cloud apparatus, a greater number of tunnel combinations is available for a given inter-site communication session. Each tunnel thus can be independently assigned and reassigned for prioritized routing of data packets for a given session according to capacity and quality measures determined for each respective tunnel, as disclosed herein.

[00107] As a further example, each site 222 and 224 can include a respective site network 244 and 246. Each site network 244, 246 can implement services or other resources that can be accessed by an application within the same site as such network or within a different site. For example, an application running in the site 222 can employ an inter-site communication session to access services or other resources implemented by the site network 246. The bidirectional traffic control implemented by each egress/ingress pair affords an increased quality of service. An alternative configuration to a cloud apparatus per enterprise site is to share a single cloud apparatus among a number of sites, or to use a mix of paired sites with associated sharing sites. In addition, a given cloud apparatus can be "multi-tenant" and shared among a number of unrelated enterprise sites or other types of sites.

[00108] FIG. 10 depicts part of a communication system 250 that includes an example of quality management services 252 for managing bi-directional traffic for one or more sites as disclosed herein. The quality management services 252 can correspond to the services 170 described with respect to the example of FIG. 5. The system 250 includes an egress control apparatus 254 at a site (e.g., a customer site) that is connected to a site network 256 (e.g., a local network). A plurality of devices (e.g., desktop computers, tablet computers, laptop computers, phones, conferencing systems and the like - not shown) can be connected to the network 256 and run any number and type of applications. Such applications can access resources (e.g., other applications or services) external to the site, such as disclosed herein. The egress control apparatus 254 is further connected to an ingress control apparatus 258, such as can be located in the cloud or another remote location (e.g., a last mile connection).

[00109] The egress and ingress apparatuses 254, 258 are connected to each other via a plurality of network connections, demonstrated at 260. The physical links that form the set of network connections 260 can be wired connections (e.g., electrically conductive or optical fiber) as well as wireless connections. For example, the network connections 260 can include any combination of physical layer links such as T1, DSL, 4G cellular, or the like. As mentioned above, tunneling can be provided via each link for communicating data packets between each of the control apparatuses 254 and 258. In addition to tunneling to provide logical connections 260 for data traffic, a separate control channel tunnel can be established between the respective apparatuses 254 and 258 via one of the links. Each tunnel can be implemented as a secured communication link or an unsecured communication link. An unsecured communication link can be utilized when sufficient security is implemented by the respective networks and systems in which the ingress control apparatus and egress control apparatus are implemented. Each of the ingress and egress control apparatuses can include link quality managers to control network traffic dynamically, such as disclosed herein.

[00110] While the connections 260 between each of the ingress and egress apparatuses 254 and 258 are demonstrated as corresponding to data tunnels, which can involve network peering for exchanging traffic between separate internet networks, it is to be understood that each of the respective tunnels can include respective "last mile" network connections provided by respective service providers to the end-user (e.g., customer site) to provide connections to a WAN (e.g., the internet) according to a service plan. Additionally or alternatively, the connections 260 can include "first mile" network connections near a data center (cloud or enterprise) and/or connections within the "backbone" providing long distance network connections. For instance, each of the service plans can provide a minimum or maximum bandwidth designated by each respective service provider according to service plan specification requirements. The amount of bandwidth may be fixed or variable depending upon network operating parameters and contract requirements. In many cases, bandwidth is variable within a range even though some minimum bandwidth may be specified for each end-user's service plan.

[00111] In the example of FIG. 10, the ingress control apparatus 258 in the cloud (e.g., public and/or private cloud) is demonstrated as being connected to a plurality of service providers demonstrated as SP1, SP2 and SP3 (e.g., via corresponding network interfaces, such as in FIG. 5). While three service providers are demonstrated in this example, there can be any number of one or more, as determined according to the service contracts of the site. The quality management services 252 can further monitor each of the connections to which each of the ingress control apparatus 258 and the egress control apparatus 254 is connected.

[00112] For example, the quality management services 252 can include a service monitor 262. The service monitor 262 can monitor aspects of performance for each respective connection via the corresponding service providers SP1, SP2 and SP3. The physical monitoring, for example, can be performed via the egress control apparatus 254 for each site (e.g., any number of one or more sites) implemented in the system 250. Thus, the service monitor 262 can be implemented in each network interface to provide performance information associated with each network connection (e.g., including bandwidth, network capacity and the like).

[00113] Additional performance information for each customer site can be collected at a connection control 264. The connection control 264, for example, can provide performance information to the service monitor 262 based upon control and network usage information received from each ingress control apparatus 258 and egress control apparatus 254. For instance, the connection control 264 can operate as a cloud service that communicates with each of the egress and ingress control apparatuses 254, 258 via a corresponding control channel (e.g., via secure or unsecure tunneling). As mentioned, the control channel can correspond to a highest priority channel implemented via tunneling between the egress and ingress control apparatuses 254, 258 to ensure that the control information is continuously fed to the connection control 264. In some examples, a separate connection can be made between the egress control apparatus 254 and the connection control 264, such as a dedicated secure tunnel. The performance information for the egress and ingress control apparatuses 254 and 258 operating for the site and the performance information collected by the service monitor 262 for each network can be stored in a database 268.

[00114] An analytics service 270 is programmed to compute various performance metrics, including global metrics for each service provider's network and/or local metrics associated with each respective site. The analytics service 270 thus can correspond to a cloud implementation of the analytics 172 described with respect to FIG. 5. The performance metrics can include current and historical global performance data for each network SP1, SP2 and SP3 that is utilized by the egress and ingress control apparatuses 254 and 258 for each site implemented in the system 250. As mentioned, there can be any number of sites, each having an egress/ingress pair, as well as other egress/ingress pairs at other network locations. Additionally, the analytics service 270 can compile and compute performance metrics for data traffic communicated between the egress and ingress control apparatuses 254 and 258 for each respective site. For example, an authorized user can employ the GUI 274 running at a user input device (e.g., a computer or other device) 276 to access the analytics service 270 to select a set of metrics associated with a particular site or portion of the site. In response to the user query via the GUI 274, the analytics 270 can access the database 268, compute one or more selected performance metrics and display the requested user information (e.g., a performance dashboard) at the GUI.

[00115] The performance metrics, for example, can provide an indication of actual network bandwidth utilized in a time interval and/or for one or more network connections. The performance metrics can also be computed for one or more types of traffic identified by the user (e.g., in response to a user input), such as corresponding to high-priority traffic, to provide an indication of network performance related to the specific type of traffic selected. In some cases, the GUI can be utilized to ascertain information for each service provider's network (e.g., statistical performance information) based on the aggregated performance information collected for each of the plurality of sites. Such global network information can enable users to understand capacity and performance metrics among a plurality of different service providers.

[00116] Additionally, the configuration and corresponding functions implemented by each egress and ingress control apparatus 254 and 258 can be set by the quality management services 252. For example, a rule manager service 222 can define the rules and configuration information for the egress and ingress control apparatuses 254 and 258 at each site in response to user input (e.g., entered by an authorized site administrator via the GUI 274). The rules and configuration data for each site can be stored in the database 268. The rules and performance configuration data stored in the database 268 for each site can be updated dynamically during system operation, such as in response to user input modifying rules or adding new network connections. The connection control service 264 in turn can provide configuration information to program each respective apparatus 254, 258, which can include specifying what network analysis information is shared between the ingress and egress control apparatuses via the logical control channel. The connection control 264 can also perform path changes for one or more sessions based on the analytics 270 (e.g., jitter, latency and/or packet loss).

[00117] By way of example, a user can employ the GUI 274 to identify and define which types of information and data traffic are considered to be high priority; different levels of priority may also be established by the administrator in response to user input. As a further example, the configuration information can include IP addresses for each of the ingress and egress control apparatuses as well as specific resource location identifiers (e.g., URLs) to enable tunneling to be established and maintained between the egress and ingress control apparatuses 254, 258. The rules manager 222 can in turn update and modify the rules in the rule and configuration data in the database 268. If rules and/or configuration information changes for a given site, updated rules and configuration information can be provided to a given egress and/or ingress control apparatus consistent with the updates.
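The following is a minimal sketch of how administrator-defined prioritization rules might be represented and applied; the field names, port values and two-level priority scheme are assumptions for illustration rather than the claimed data model.

```python
# Hypothetical rule set an administrator might define via the GUI: each rule
# matches on simple packet attributes and assigns a priority level.
PRIORITIZATION_RULES = [
    {"match": {"protocol": "udp", "dst_port": 5060}, "priority": "high"},  # e.g., SIP signaling
    {"match": {"protocol": "udp", "dst_port_range": (16384, 32767)}, "priority": "high"},  # e.g., RTP media
    {"match": {"protocol": "tcp", "dst_port": 443}, "priority": "low"},
]

def categorize(packet, rules=PRIORITIZATION_RULES, default="low"):
    """Return the priority level for a packet dict like
    {"protocol": "udp", "dst_port": 5060}."""
    for rule in rules:
        m = rule["match"]
        if m.get("protocol") and m["protocol"] != packet["protocol"]:
            continue
        if "dst_port" in m and m["dst_port"] != packet["dst_port"]:
            continue
        if "dst_port_range" in m:
            lo, hi = m["dst_port_range"]
            if not (lo <= packet["dst_port"] <= hi):
                continue
        return rule["priority"]
    return default

print(categorize({"protocol": "udp", "dst_port": 5060}))  # -> "high"
```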

[00118] During operation, the quality management services 252 can further employ the analytics service 270 to monitor the rules, performance and configuration data in the database 268 to determine an indication of performance for the aggregate set of connections 260 between the ingress and egress control apparatuses 258 and 254, respectively. For instance, the indication of performance can indicate performance metrics with respect to the outbound traffic that is sent from each control apparatus 254, 258 to the other via the aggregate tunnel provided by the network connections 260. The analytics service 270 thus can monitor the performance and configuration information acquired over time to determine whether any changes may be needed to the rules and configuration information stored in the database 268. Any changes to the rules and configuration data can be provided to the connection control 264 for updating the ingress control apparatus 258 and egress control apparatus 254, such as via a corresponding control channel. Additionally, far end quality analysis for one or more sites can be provided to the analytics service 270, which can help determine whether path changes may be needed for any sessions. The analytics service 270 can also determine an indication of capacity and/or quality of service for one or more network connections, which can be sent to the ingress and egress control apparatuses via the control channel (or other connection) and utilized to control initial session assignment as well as reassignment.

[00119] In view of the structural and functional features described above, certain methods will be better appreciated with reference to FIGS. 11, 12 and 13. It is to be understood and appreciated that the illustrated actions, in other embodiments, may occur in different orders or concurrently with other actions. Moreover, not all features illustrated in FIGS. 11, 12 and 13 may be required to implement a method. It is to be further understood that the following methods can be implemented in hardware (e.g., one or more processors, such as in a computer or computers), software (e.g., stored in a computer readable medium or as executable instructions running on one or more processors), or as a combination of hardware and software.

[00120] FIG. 11 is a flow diagram illustrating an example method 300 for network transport and session link assignment, such as can be implemented by the session network assignment control 56 (e.g., see FIGS. 2 and 3). The method begins at 302, in which an outgoing data packet is received. The packet is received via a corresponding interface to kernel level transport functions (e.g., placed in the IP stack via an API), such as disclosed herein.

[00121] At 304, the received outgoing packet is evaluated. The evaluation can be based upon header information in the packet, such as information sufficient to describe a session (e.g., source IP address, source port, destination address, destination port, and protocol). Based on the evaluation at 304, a determination is made at 306 as to whether a session already exists for the received and evaluated packet. If no session already exists at 306, the method proceeds to 308. At 308 a new session is created. Creating a new session can include creating an entry in a session table (or other data structure stored in memory) that specifies a session according to the session-identifying data evaluated at 304.
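A minimal sketch of the session lookup and creation described at 304-308 follows, keyed on the five-tuple named above; the SessionTable name and structure are assumptions, not the claimed data structure.

```python
from typing import Dict, Optional, Tuple

FiveTuple = Tuple[str, int, str, int, str]  # (src_ip, src_port, dst_ip, dst_port, proto)

class SessionTable:
    """Maps a five-tuple to its assigned network connection (e.g., a tunnel name)."""

    def __init__(self):
        self._sessions: Dict[FiveTuple, str] = {}

    def key_for(self, pkt: dict) -> FiveTuple:
        return (pkt["src_ip"], pkt["src_port"],
                pkt["dst_ip"], pkt["dst_port"], pkt["proto"])

    def lookup(self, pkt: dict) -> Optional[str]:
        return self._sessions.get(self.key_for(pkt))

    def create(self, pkt: dict, network: str) -> str:
        # New session: record the network assignment so all later packets of
        # this session follow the same connection (see 310-312).
        self._sessions[self.key_for(pkt)] = network
        return network

# Usage sketch for 304-310:
table = SessionTable()
pkt = {"src_ip": "10.0.0.5", "src_port": 40000,
       "dst_ip": "198.51.100.7", "dst_port": 5060, "proto": "udp"}
network = table.lookup(pkt) or table.create(pkt, network="tun0")  # hypothetical assignment
```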

[00122] At 310, the new session is assigned to a network. The network assignment for a given session can be made (e.g., by the session network assignment control 56) according to various methods as disclosed herein. For example, the session assignment can be based on a simplified round-robin approach in which the session is assigned to one of a plurality of available networks. In other examples, the assignment can be based on network capacity or other network analysis (e.g., network analysis 80), as disclosed herein. In some examples, available network capacity for each of the available network connections for the ingress and egress control apparatus can be calculated by determining network saturation, and a capacity calculator (e.g., capacity calculator 82) can determine the remaining capacity for each network connection. As another example, a passive measurement of capacity can be determined by calculating a queue sojourn time, such as to ascertain which network has the most unused capacity. For instance, the network having the shortest sojourn time in a given queue (e.g., one of the high-priority queues) can be indicated as having the most unused network capacity. The queue sojourn time that data takes to travel through a path within a given control apparatus may be determined differently for different types of packets and protocols. As mentioned, the categorization of packets may be determined based on the packet evaluation at 304 or other methods disclosed herein, which may be implemented by kernel-level code and/or by user-level code via an interface. As another example, the assignment at 310 can be based upon a weighted round robin in which the weight is adjusted according to the available network capacity, such as according to the approaches disclosed herein.
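As a rough sketch of the assignment options just listed, the code below shows a capacity-weighted pick (approximating weighted round robin over time) and an alternative shortest-sojourn pick; the function names, network names and numeric values are assumptions for illustration.

```python
import random

def assign_session(networks):
    """Pick a network for a new session using a capacity-weighted choice.
    networks: dict mapping network name to estimated unused capacity (e.g., Mbps)."""
    total = sum(networks.values())
    if total <= 0:
        # No capacity estimates available; fall back to a plain choice.
        return random.choice(list(networks))
    return random.choices(list(networks), weights=list(networks.values()), k=1)[0]

def assign_by_sojourn(sojourn_ms):
    """Alternative passive approach: pick the network whose high-priority
    queue currently has the shortest sojourn time."""
    return min(sojourn_ms, key=sojourn_ms.get)

# Usage sketch (capacity and sojourn values are illustrative):
print(assign_session({"tun0": 40.0, "tun1": 10.0}))
print(assign_by_sojourn({"tun0": 3.2, "tun1": 1.1}))  # -> "tun1"
```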

[00123] As an example, the capacity for a given network connection can be a variable parameter. For example, the capacity can be set to a default level in each direction with respect to the egress and ingress control apparatuses. The capacity can be decreased in response to one or more quality measures, as disclosed herein, indicating that quality is below a threshold level. The capacity thus can be decreased until quality issues no longer exist. The capacity can also be adjusted upward (e.g., increased) if there are no capacity decreases made during a prescribed time interval. The session assignment at 310 for a new session, as well as subsequent session reassignment (see FIG. 12), thus can evaluate the variable capacity in each upstream and downstream direction for respective network connections in determining to which network connection the session will be assigned.

[00124] If a session already exists at 306, and subsequent to assigning the session for the received packet to a network (at 310), the method proceeds to 312, in which the outgoing packet is sent via its assigned network. The network assignment of each session is maintained for the life of the session, which can vary widely depending on the type of traffic. In this way, all subsequent packets for a given session remain over the same network connection, unless the session is reassigned (see, e.g., FIG. 12).
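The variable-capacity behavior described in paragraph [00123] resembles a decrease-on-quality-event, increase-after-quiet-interval scheme; the sketch below is one possible reading under assumed parameter values (step sizes, quiet interval), not the claimed algorithm.

```python
import time

class VariableCapacity:
    """Per-direction capacity estimate that shrinks on quality problems and
    grows back after a quiet interval with no decreases."""

    def __init__(self, default_mbps=50.0, floor_mbps=1.0,
                 decrease_factor=0.8, increase_mbps=2.0, quiet_interval_s=30.0):
        self.capacity = default_mbps
        self.floor = floor_mbps
        self.decrease_factor = decrease_factor
        self.increase = increase_mbps
        self.quiet_interval = quiet_interval_s
        self.last_decrease = 0.0

    def on_quality_below_threshold(self, now=None):
        now = time.monotonic() if now is None else now
        self.capacity = max(self.floor, self.capacity * self.decrease_factor)
        self.last_decrease = now

    def maybe_increase(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_decrease >= self.quiet_interval:
            self.capacity += self.increase
        return self.capacity

# Usage: the estimate feeds session assignment and reassignment decisions.
cap = VariableCapacity()
cap.on_quality_below_threshold()
print(cap.capacity)  # reduced capacity after a quality event
```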

[00125] FIG. 12 is a flow diagram illustrating an example method 350 for reassigning a session from one network to another. The method 350 begins at 352 by determining a priority of packets. Thus, the method 350 can be utilized to reassign network connections for sessions that include a type of data packets determined (e.g., based on rules applied by packet evaluator 70, 110) to be of sufficiently high priority. In some examples, there can be two levels of priority (e.g., high and low) for categorizing outgoing data packets. In other examples, one or more types of outgoing data packets can be categorized as a single high priority level, while other types of packets are categorized into one or more other lower priority levels. In this way, the prioritization of packets for a given session can be used to define the session priority at 352. As disclosed herein, the categorization of the outgoing data packets is used to place each outgoing data packet into an appropriate priority level queue and, in turn, to send the respective packet out via the associated network connection to which the session is currently assigned (see, e.g., FIG. 11).

[00126] At 354, network performance is measured. The measure of network performance can be implemented according to one or more of the various approaches disclosed herein. For example, the measure of network performance can be a passive measurement that does not involve extra transmission of data to perform the measurement. Passive measurement, for example, may involve calculating a sojourn time of data packets for a given session through a path that exists within a given ingress or egress control apparatus. Sojourn time can be computed by counting clock signals from when an outbound packet for a given session enters the IP stack through the time when it is sent out of a given high priority queue over its assigned network. A threshold can be established to provide a range of sojourn time that indicates a sufficiently good quality. In some cases, traffic can be busy such that the sojourn time may need to be measured for a plurality of data packets of the given high priority session over a time interval (e.g., multiple seconds).

[00127] Additionally or alternatively, the measure of network performance for a given session can include one or more active measurements. As mentioned, an active measurement can include monitoring communication across a portion of a network. For example, an active measurement can be implemented by pinging a predetermined resource location (e.g., a server, such as google.com) in the cloud, in which the ping is sent through the assigned network connection for a given session. Another active measurement technique to provide an indication of quality for voice or other high-priority data traffic is to measure jitter. For example, far end jitter can be measured for a critical session (e.g., a session determined at 352 as having a high priority), such as by the ingress control apparatus receiving a data packet that is transmitted as outbound data from the egress control apparatus using a particular protocol and sending it back to the egress control apparatus. In the other direction, the measurement packet(s) are sent from the ingress control apparatus (e.g., apparatus 258) to the egress control apparatus (e.g., apparatus 254) and returned from the egress apparatus back to the ingress control apparatus via a corresponding link. In one example, the egress control apparatus analyzes latency, jitter, and loss for the downstream part of a given session, and a protocol from the ingress control apparatus via any of the service provider networks (e.g., SP1, SP2, SP3) can be utilized to ascertain similar network characteristics on the upstream part of the given session. The jitter thus can be computed with respect to the arriving traffic, which further can be compared to a corresponding jitter threshold to provide a measure of network performance for session traffic. Such analysis to measure performance can be implemented with respect to each ongoing session, for example, that has been determined to be high priority.
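A minimal sketch of the two measurements just described follows: a sojourn-time sample (packet enters the stack, later leaves its queue) and a smoothed interarrival jitter estimate in the style of RFC 3550. The 1/16 smoothing factor follows that RFC; the function and class names are assumptions.

```python
def sojourn_time_ms(enqueue_monotonic: float, dequeue_monotonic: float) -> float:
    """Passive measure: time a packet spent between entering the IP stack
    and leaving its priority queue, in milliseconds."""
    return (dequeue_monotonic - enqueue_monotonic) * 1000.0

class JitterEstimator:
    """Active/far-end measure: RFC 3550-style smoothed interarrival jitter."""

    def __init__(self):
        self.jitter = 0.0
        self._last_transit = None

    def add_packet(self, send_ts: float, recv_ts: float) -> float:
        transit = recv_ts - send_ts
        if self._last_transit is not None:
            d = abs(transit - self._last_transit)
            self.jitter += (d - self.jitter) / 16.0  # RFC 3550 smoothing
        self._last_transit = transit
        return self.jitter

# Usage sketch: feed timestamps from returned measurement packets and compare
# the running jitter against a quality threshold.
est = JitterEstimator()
for send, recv in [(0.000, 0.030), (0.020, 0.052), (0.040, 0.075)]:
    jitter = est.add_packet(send, recv)
print(f"jitter ~ {jitter * 1000:.2f} ms")
```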

[00128] Based upon the measured network performance and applicable thresholds, a determination can be made at 356 whether the quality is being maintained for a respective session. If the quality is determined at 356 not to be maintained, the method continues at 358 to implement a reassignment. As mentioned, the determination of quality for a given network connection can be based on passive and/or active measurements. For the example where the measured network performance includes sojourn time (e.g., a passive measure), if the sojourn time exceeds the established threshold time, poor quality can be identified and utilized to determine (at 356) that sufficient quality is not being maintained for the session, so as to trigger session reassignment.

[00129] At 358, the available networks can be analyzed, such as by analyzing the available network capacity for sending outbound data packets for a given session. Based upon the analysis at 358, the method proceeds to 360, in which the corresponding session is reassigned to a new available network. The assignment can be based on the available capacity, such as can be determined by a capacity calculator (e.g., capacity calculator 82 of FIG. 3; similar to the assignment at 310 of FIG. 11). The session can be reassigned by updating the session assignment data at 362. After completing the reassignment process at 362, the method can return to 352 to monitor data packets and identify the high priority packets. Similarly, if it is determined at 356 that sufficient quality is maintained for a given session, the method can proceed from 356 back to 352. The method 350 can run continuously to update the assignment data dynamically. The method 350 can be implemented with respect to each session to enable reassignment of high-priority sessions from one network connection to another.
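The sketch below ties together 356-362 under the assumptions above: if a quality threshold is exceeded, the session is moved to the available network with the most remaining capacity and the assignment data is updated. The names and numeric values are illustrative, not the claimed implementation.

```python
def reassess_session(session, measured_sojourn_ms, sojourn_threshold_ms,
                     capacities, session_table):
    """Sketch of 356-362: if quality is not maintained for a high-priority
    session, move it to the available network with the most unused capacity.
    session_table maps session keys to their currently assigned network."""
    if measured_sojourn_ms <= sojourn_threshold_ms:
        return session_table[session]  # quality maintained; keep assignment

    current = session_table[session]
    candidates = {net: cap for net, cap in capacities.items() if net != current}
    if not candidates:
        return current  # nowhere else to go
    best = max(candidates, key=candidates.get)
    session_table[session] = best  # update assignment data (362)
    return best

# Usage (illustrative numbers):
table = {"session-1": "tun0"}
new_net = reassess_session("session-1", measured_sojourn_ms=12.0,
                           sojourn_threshold_ms=5.0,
                           capacities={"tun0": 3.0, "tun1": 20.0},
                           session_table=table)
print(new_net)  # -> "tun1"
```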

[00130] FIG. 13 depicts an example of a method 400 for localizing a quality issue associated with incoming traffic. At 402, the method includes receiving, at a recipient, incoming traffic from a sender. In the example of FIG. 13, the recipient is either a site apparatus or a remote apparatus, where the site apparatus and the remote apparatus define an egress-ingress pair of apparatuses for a given site that communicate via at least one bi-directional network link between the egress-ingress pair. The site apparatus controls egress of data traffic with respect to the given site and the remote apparatus controls ingress of data traffic with respect to the given site.

[00131] At 406, the incoming traffic at the recipient (from the sender) or outgoing traffic (to the sender) is analyzed to identify a quality issue associated with the traffic. The analysis (at the recipient) can include various types of analysis of network traffic, such as disclosed with respect to the network analysis 80 of FIG. 3. The analysis can include determining latency, jitter and loss for packets in the incoming traffic from the sender, or retransmissions to the sender. As a further example, the analysis can vary depending on the type of traffic, which can be determined by packet evaluation (e.g., by packet evaluator 70). Thus, by identifying a type of the incoming traffic, different forms of analysis can be performed. For example, if the type of the incoming traffic is UDP traffic, the analysis at 406 can include calculating jitter, latency and/or loss for the UDP traffic. Such calculated quality parameters thus can be used to quantify the quality issue, such as by comparing the calculated value or values with respect to a corresponding threshold. If the result of such comparison indicates that the calculated value(s) exceed a threshold, it can be used to trigger appropriate action (e.g., changing a path for a connection).

[00132] As another example, the analysis at 406 can include analyzing outgoing traffic from the recipient, including to determine a type of the outgoing traffic. For instance, if the type of the outgoing traffic is TCP traffic, the analysis at 406 can include monitoring re-transmissions in the TCP traffic, such as to indicate a quality issue associated with the connection via which the outgoing traffic is being provided. Other approaches for quality analysis, including those disclosed herein, may be employed at 406.
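A minimal sketch of the per-traffic-type checks described at 406 follows: UDP traffic is judged against jitter, latency and loss thresholds, while TCP traffic is judged by its retransmission rate. The dictionary field names and numeric thresholds are assumptions for illustration.

```python
def detect_quality_issue(traffic_type, stats, thresholds):
    """Return True if the measured stats indicate a quality issue at 406.

    UDP: compare jitter/latency/loss against thresholds.
    TCP: compare the retransmission rate against a threshold."""
    if traffic_type == "udp":
        return (stats["jitter_ms"] > thresholds["jitter_ms"]
                or stats["latency_ms"] > thresholds["latency_ms"]
                or stats["loss"] > thresholds["loss"])
    if traffic_type == "tcp":
        retx_rate = stats["retransmits"] / max(stats["segments_sent"], 1)
        return retx_rate > thresholds["retx_rate"]
    return False

# Usage (illustrative numbers): a True result would trigger a path change or
# notification, depending on where the issue is localized (408-414).
print(detect_quality_issue("udp",
                           {"jitter_ms": 9.0, "latency_ms": 80.0, "loss": 0.02},
                           {"jitter_ms": 5.0, "latency_ms": 150.0, "loss": 0.01}))
```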

[00133] At 408, the method also includes determining a location for the quality issue. For instance, the method can determine that the identified quality issue pertains to the at least one bi-directional link between the egress-ingress pair. Alternatively, at 408, it can be determined that the identified quality issue pertains to resources external to the at least one bi-directional link between the egress-ingress pair. In response to determining that the identified quality issue pertains to one or more sessions of traffic being sent over a given link between the egress-ingress pair, at 410, the path for such session of traffic between the recipient and the sender can be changed to another existing connection for the site. For example, the traffic modification can include reassigning a session to a different network link and/or changing a priority of data packets associated with a given session, such as disclosed herein.

[00134] In response to determining that the identified quality issue pertains to resources external to the at least one bi-directional link between the egress-ingress pair, at 412 a notification can be sent to a predetermined entity associated with the given site. The notification, for example, can be sent to one or more network administrators (e.g., as an email, text message or other form of communication). The notification further can identify a location for the identified quality issue with greater specificity, which may be determined based on the identity of the sender. For example, a quality issue that is not part of the link between the egress-ingress pair may reside at one or more of: within the given site, within a last mile connection, within the network backbone, or in a first mile; the notification can specify the determined location.

[00135] In some examples, the sender is an apparatus or application outside the given site, and the recipient implementing the method 400 has multiple connections to the external apparatus or application, one of which is being used as a path to communicate one or more sessions of traffic from the recipient to the external sender. In this example, in response to determining that the identified quality issue pertains to the traffic external to the egress-ingress pair, at 414, a path for the at least one session of traffic that is being communicated from the recipient to the sender can be changed. The change can be implemented by moving the session from its current connection to another of the multiple connections, such as by reassigning the session to a corresponding network interface associated with the other connection. The change can be implemented in combination with or in place of the notification that is sent at 412.

[00136] As a further example, the remote ingress apparatus of the egress-ingress pair may be located at a service provider network hub associated with a data center that provides a service accessed by the given site via the one or more links between the site apparatus and the remote apparatus. In this example, based on the location of the ingress apparatus, the identified quality issue can be determined to pertain to the service being provided by the data center and/or to a communication link between the network hub and the service provided by the data center. Thus, in response to determining that the identified quality issue pertains to at least one of the service provided by the data center or the communication link between the network hub and the service, the notification can be sent at 412 to one or more predetermined entities associated with the data center or service provider. The notification further can trigger an additional inquiry to a known administrator via an external communication mode (e.g., email, telephone call or the like) to confirm the health status of the communication link between the network hub and the service, such as in response to the notification. The additional inquiry thus can help further localize the quality issue by ascertaining whether the identified quality issue pertains to an application in the data center or to the communication link itself.

[00137] As will be appreciated by those skilled in the art, portions of the systems and methods disclosed herein may be embodied as a method, data processing system, or computer program product (e.g., a non-transitory computer readable medium having instructions executable by a processor). Accordingly, these portions of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Furthermore, portions of the invention may be a computer program product on a computer-usable storage medium having computer readable program code on the medium. Any suitable computer-readable medium may be utilized including, but not limited to, static and dynamic storage devices, hard disks, optical storage devices, and magnetic storage devices.

[00138] Certain embodiments are disclosed herein with reference to flowchart illustrations of methods, systems, and computer program products. It will be understood that blocks of the illustrations, and combinations of blocks in the illustrations, can be implemented by computer-executable instructions. These computer-executable instructions may be provided to one or more processors of a general purpose computer, special purpose computer, or other programmable data processing apparatus (or a combination of devices and circuits) to produce a machine, such that the instructions, which execute via the processor, implement the functions specified in the block or blocks.

[00139] These computer-executable instructions may also be stored in a non-transitory computer-readable medium that can direct a computer or other programmable data processing apparatus (e.g., one or more processing cores) to function in a particular manner, such that the instructions stored in the computer-readable medium result in an article of manufacture including instructions which implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks or the associated description.

[00140] What are disclosed herein are examples. It is, of course, not possible to describe every conceivable combination of components or methods, but one of ordinary skill in the art will recognize that many further combinations and permutations are possible. Accordingly, the disclosure is intended to embrace all such alterations, modifications, and variations that fall within the scope of this application, including the appended claims.

[00141] As used herein, the term "includes" means includes but not limited to, and the term "including" means including but not limited to. The term "based on" means based at least in part on. Additionally, where the disclosure or claims recite "a," "an," "a first," or "another" element, or the equivalent thereof, it should be interpreted to include one or more than one such element, neither requiring nor excluding two or more such elements.