Title:
LOAD BALANCING OF DATA PACKET FLOWS
Document Type and Number:
WIPO Patent Application WO/2016/119877
Kind Code:
A1
Abstract:
The invention relates to a method for operating a load balancing entity (100) in a network in which a central control entity (20) controls data packet flows forwarded in the network by forwarding elements (40). The load balancing entity (100) distributes the data packet flows amongst a plurality of service application entities (50a-f) according to a first weight distribution indicating how the data packet flows are distributed amongst the service application entities which each provide an instance of a service to one or more data packet flows. The method comprises the following steps: The load balancing entity detects a change from the first weight distribution to a new weight distribution. Then, the load balancing entity instructs the central control entity to command the forwarding elements (40) to forward new data packet flows to the service application entities in accordance with the new weight distribution. Furthermore, the load balancing entity identifies an affected data packet flow that had been previously forwarded, in accordance with the first weight distribution, to a service application entity different than one where the affected data packet flow would be forwarded in accordance with the new weight distribution. Further, the load balancing entity instructs the central control entity to command the forwarding elements (40) to direct incoming data packets belonging to the affected data packet flow to the service application entity in accordance with the first weight distribution and to direct a copy of at least a part of a first subsequent incoming data packet belonging to the affected data packet flow to the central control entity (20).

Inventors:
MOLINERO FERNANDEZ PABLO (ES)
JOHNSON BRADY ALLEN (ES)
LAGUNA RUBIO OSCAR (ES)
MARTINEZ DE LA CRUZ PABLO (ES)
NORIEGA DE SOTO RICARDO (ES)
TERRILL STEPHEN (ES)
Application Number:
PCT/EP2015/051933
Publication Date:
August 04, 2016
Filing Date:
January 30, 2015
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
H04L29/08
Foreign References:
US20140372616A1 (2014-12-18)
Other References:
GUO ZEHUA ET AL: "Improving the performance of load balancing in software-defined networks through load variance-based synchronization", COMPUTER NETWORKS, ELSEVIER SCIENCE PUBLISHERS B.V., AMSTERDAM, NL, vol. 68, 26 February 2014 (2014-02-26), pages 95 - 109, XP029006248, ISSN: 1389-1286, DOI: 10.1016/J.COMNET.2013.12.004
CHEN YU-JIA ET AL: "Traffic-Aware Load Balancing for M2M Networks Using SDN", 2014 IEEE 6TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING TECHNOLOGY AND SCIENCE, IEEE, 15 December 2014 (2014-12-15), pages 668 - 671, XP032735187, DOI: 10.1109/CLOUDCOM.2014.37
Attorney, Agent or Firm:
BERTSCH, Florian (Thomas-Wimmer-Ring 15, München, DE)
Claims:

1. A method for operating a load balancing entity (100) in a network in which a central control entity (20) controls data packet flows forwarded in the network by forwarding elements (40), wherein the load balancing entity (100) distributes the data packet flows amongst a plurality of service application entities (50a-f) according to a first weight distribution indicating how the data packet flows are distributed amongst the service application entities which each provide an instance of a service to one or more data packet flows, the method comprising the steps of:

- the load balancing entity detecting a change from the first weight distribution to a new weight distribution,

- the load balancing entity instructing the central control entity to command the forwarding elements (40) to forward new data packet flows to the service application entities in accordance with the new weight distribution,

- the load balancing entity identifying an affected data packet flow that had been previously forwarded, in accordance with the first weight distribution, to a service application entity different than one where the affected data packet flow would be forwarded in accordance with the new weight distribution,

- the load balancing entity instructing the central control entity to command the forwarding elements (40) to direct incoming data packets belonging to the affected data packet flow to the service application entity in accordance with the first weight distribution and to direct a copy of at least a part of a first subsequent incoming data packet belonging to the affected data packet flow to the central control entity (20).

2. The method according to claim 1, wherein each forwarding element (40) contains a pipeline with a plurality of tables (41-44) containing rules for forwarding the data packet flows, wherein at least one of the plurality of tables (41-44) is accessed at each of the forwarding elements in order to determine how data packets of an incoming data packet flow are to be forwarded.

3. The method according to claim 1 or 2, wherein the step of instructing the central control entity to command the forwarding elements (40) to direct incoming data packets belonging to the affected data packet flow comprises the load balancing entity instructing the central control entity to command a storing of a learning rule in a learning table (43) of the forwarding elements (40) indicating that data packets belonging to the affected data packet flow are to be forwarded to the service application entity (50a-f) in accordance with the first weight distribution and that a copy of at least a part of the first subsequent incoming data packet belonging to the affected data packet flow is to be directed to the central control entity (20), and, from the latter, to the load balancing entity.

4. The method according to claim 3, wherein the step of the load balancing entity instructing the central control entity to command the forwarding elements (40) the storing of the learning rule comprises setting an idle timer value, upon which expiry, the forwarding elements remove the learning rule from the learning table (43).

5. The method according to claim 3 or 4, further comprising a step of receiving at the load balancing entity from the central control entity (20) the copy of at least part of the first subsequent incoming data packet of the affected data packet flow, a step of generating an exception rule, and a step of the load balancing entity instructing the central control entity to command the forwarding elements (40) a storing of the exception rule in an exception table (41) of the forwarding elements (40) indicating that all further incoming data packets of the affected data packet flow are to be forwarded to the service application entity (50a-f) in accordance with the first weight distribution.

6. The method according to claim 5, wherein the step of the load balancing entity instructing the central control entity to command the forwarding elements (40) the storing of the exception rule comprises setting an inactivity timer value, upon which expiry, the forwarding elements remove the exception rule from the exception table (41).

7. The method according to any one of claims 1 to 6, wherein the step of instructing the central control entity to command the forwarding elements (40) to forward the new data packet flows to the service application entities in accordance with the new weight distribution comprises the load balancing entity instructing the central control entity to command a storing of a weight rule in a weight table (44) of the forwarding elements (40) indicating how the new data packet flows are to be forwarded to the service application entities (50a-f) in accordance with the new weight distribution.

8. The method according to any one of claims 1 to 7, further comprising the step of the load balancing entity determining a function rule that, when applied to one or more data packet fields of a data packet flow, assigns a bucket number to the data packet flow from a range of bucket numbers, which allows the data packet flows to be distributed amongst the plurality of service application entities (50a-f) in dependence on the first or new weight distribution, and the step of the load balancing entity instructing the central control entity to command a storing of the function rule in a function table (42) of the forwarding elements indicating that a bucket number assigned by the function rule is added as metadata to all the data packets of one of the data packet flows and is used to identify a service application entity that provides the service to said one data packet flow.

9. The method according to any one of claims 1 to 8, wherein each forwarding element (40) comprises a pipeline with an exception table (41), a function table (42), a learning table (43) and a weight table (44), wherein the exception table (41) indicates for a data packet flow either forwarding a data packet to a particular service application entity, or exploring the function table (42), wherein the function table (42) indicates an entry to the learning table (43), wherein the learning table (43) indicates for a data packet flow either forwarding a data packet to a particular service application entity and a copy of at least a part of the data packet to the central control entity (20), or exploring the weight table (44), and wherein the weight table (44) indicates for a data packet flow forwarding a data packet to a particular service application entity.

10. The method according to claim 8, wherein the function rule corresponds to a hash function selected such that one bucket number of the range of bucket numbers is assigned to each source address of a range of possible source addresses from which the data packet flows are received.

11. The method according to claim 7, wherein the weight rules in the weight table (44) are generated by indicating to which service application entity the data packet flow should be forwarded in dependence of a bucket number, wherein the number of bucket numbers assigned to each of the service application entities is proportional to a weight of the corresponding service application entity defined by the weight distribution.

12. The method according to claims 3 and 11, wherein the step of identifying the affected data packet flow comprises the step of detecting the bucket numbers for which the service application entity (50a-f), to which the affected data packet flow with the corresponding bucket number should be forwarded, is changed, and the step of instructing the central control entity to command the storing of the learning rule in the learning table (43) comprises the load balancing entity instructing the central control entity to command a storing of the bucket numbers and the assigned service application entities according to the first weight distribution in the learning table (43).

13. A load balancing entity (100) configured to distribute data packet flows in a network amongst a plurality of service application entities (50a-f) according to a first weight distribution indicating how the data packet flows are distributed amongst the service application entities (50a-f) which each provide an instance of a service to one or more data packet flows, wherein a central control entity (20) controls the data packet flows forwarded in the network by forwarding elements (40), the load balancing entity (100) comprising:

- a detector (130) configured to detect a change from the first weight distribution to a new weight distribution,

- a processing unit (120) configured to:

- instruct the central control entity (20) to command the forwarding elements (40) to forward new data packet flows to the service application entities (50a-f) in accordance with the new weight distribution,

- identify an affected data packet flow that had been previously forwarded, in accordance with the first weight distribution, to a service application entity different than one where the affected data packet flow would be forwarded in accordance with the new weight distribution,

- instruct the central control entity (20) to command the forwarding elements (40) to direct incoming data packets belonging to the affected data packet flow to the service application entity in accordance with the first weight distribution and to direct a copy of at least a part of a first subsequent incoming data packet belonging to the affected data packet flow to the central control entity (20).

14. The load balancing entity according to claim 13, wherein each forwarding element contains a pipeline with a plurality of tables containing rules for forwarding the data packet flows, wherein at least one of the plurality of tables (41-44) is accessed at each of the forwarding elements in order to determine how incoming data packets of a data packet flow are to be forwarded.

15. The load balancing entity according to claim 13 or 14, wherein the processing unit (120) is configured to instruct the central control entity to command a storing of a learning rule in a learning table (43) of the forwarding elements indicating that data packets belonging to the affected data packet flow are to be forwarded to the service application entity in accordance with the first weight distribution and a copy of at least a part of the first subsequent incoming data packet belonging to the affected data packet flow is to be directed to the central control entity (20) and, from the latter, to the load balancing entity.

16. The load balancing entity (100) according to claim 15, wherein the processing unit (120) is configured, along with the instructing the central control entity to command the storing of the learning rule, to set an idle timer value, upon which expiry, the forwarding elements remove the learning rule from the learning table (43).

17. The load balancing entity (100) according to claim 15 or 16, further comprising a receiver (112) configured to receive the copy of at least a part of the first subsequent incoming data packet of the affected data packet flow from the central control entity (20), wherein the processing unit (120) is configured, in response to the receiving of the copy, to generate an exception rule and to instruct the central control entity to command a storing of the exception rule in an exception table (41) of each of the forwarding elements indicating that all further incoming data packets of the affected data packet flow are to be forwarded to the service application entity (50a-f) in accordance with the first weight distribution.

18. The load balancing entity according to claim 17, wherein the processing unit (120) is configured, along with the instructing the central control entity to command the storing of the exception rule, to set an inactivity timer value, upon which expiry, the forwarding elements remove the exception rule from the exception table (41).

19. The load balancing entity according to any of claims 13 to 18, wherein the processing unit (120) is configured to instruct the central control entity to command a storing of a weight rule in a weight table (44) of each forwarding element indicating how the new data packet flows are to be forwarded to the service application entities in accordance with the new weight distribution.

20. The load balancing entity according to any of claims 13 to 19, wherein the processing unit (120) is configured to determine a function rule that, when applied to one or more data packet fields of a data packet flow, assigns a bucket number to the data packet flow from a range of bucket numbers, which allows the data packet flows to be distributed amongst the plurality of service application entities (50a-f) in dependence on the first or new weight distribution, and to instruct the central control entity to command a storing of the function rule in a function table (42) of the forwarding elements indicating that a bucket number assigned by the function rule is added as metadata to all the data packets of one of the data packet flows and is used to identify a service application entity that provides the service to said one data packet flow.

21. The load balancing entity (100) according to claim 20, wherein the processing unit (120) is configured to generate the weight rules in the weight table (44) by indicating to which service application entity the data packet flow should be forwarded in dependence of the bucket number, wherein the number of bucket numbers assigned to each of the service application entities is proportional to a weight of the corresponding service application entity defined by the weight distribution.

22. The load balancing entity (100) according to claims 15 and 21, wherein the processing unit (120) is configured to identify the affected data packet flow by detecting the bucket numbers for which the service application entity, to which the affected data packet flow with the corresponding bucket number should be forwarded, is changed, and the processing unit (120) is configured to instruct the central control entity to command the forwarding elements a storing of the bucket numbers and the assigned service application entities according to the first weight distribution in the learning table (43).

23. The load balancing entity (100) according to any of claims 16 to 22, wherein the processing unit (120) is configured to generate the exception rule for the affected data packet flow including a flow identifier identifying the affected data packet flow, the flow identifier including at least a source address and a destination address of the affected data packet flow, and the processing unit (120) is configured to instruct the central control entity to command a storing in the exception table (41) of each flow identifier in connection with the exception rule to directly forward the corresponding data packet flow to the service application entity defined by the first weight distribution.

24. A computer program product comprising program code to be executed by at least one processor of a load balancing entity, wherein execution of the program code causes the at least one processor (120) to perform steps of a method according to any one of claims 1 to 12.

Description:
Load Balancing of Data Packet Flows

Technical Field

The present invention relates to a method for operating a load balancing entity in a network in which a central control entity controls data packet flows forwarded in the network by forwarding elements. The invention furthermore relates to the corresponding load balancing entity configured to distribute the data packet flows in the network and to the computer program product.

Background

Load balancing entities for different types of traffic are widely deployed in the Internet and in the telecommunication domain. The main usage of a load balancing entity is to distribute traffic, typically IP traffic, originating from clients and targeted towards a particular service provided by different service application entities. For the load balancing, the load balancing entity applies different distribution algorithms.

The load balancing entity offers a single point of contact towards the clients, this point often being referred to as virtual IP (VIP). The VIP is either addressed directly by the clients, by making connections to the VIP's IP address for instance, or indirectly by the network transparently forwarding traffic from the clients to the VIP.

The load balancing entity may balance traffic for different services, each service being provided by a pool of service application entities. When a packet reaches the VIP, the load balancing entity may select a service instance, i.e. a service application entity providing the service, from the corresponding pool and forward the packet to the service application entity. Under some circumstances, it may optionally need to route the packet back to the client from the service application entity.

Load balancing entities can be classified based on the networking level they operate on. Layer 2 load balancing entities operate on the Ethernet level; traffic is forwarded to the target service application entity by rewriting the destination MAC (Media Access Control) address of every packet. Layer 3 load balancing entities operate on the IP level; traffic is forwarded to the target entity by rewriting the destination IP address. This process typically also involves routing and rewriting of the Ethernet headers.

Application layer load balancers operate on the application level. These load balancers typically inspect or decode the application protocol within the payload and select the target service instance using the information decoded from the payload. However, this requires the load balancer to know how to detect and decode each application protocol.

Packets belonging to the same client session are said to form a data packet flow. What a client session means depends on the service. It is exemplarily assumed that the service is HTTP. For HTTP connections, a flow may comprise all the packets belonging to a single HTTP request/response interaction. Moreover, for HTTP connections which are tied together by a session cookie, a flow may comprise all the packets of all the HTTP request/response interactions which share the same session cookie.

Load balancers can be stateless or stateful, based on how the load balancer handles packets within the same flow. Stateless load balancers treat each packet within the flow individually, potentially distributing each packet to a different target instance. This requires the target service application entities to be capable of dealing with individual packets of the flow. When using protocols which are not connection-oriented, such as UDP (User Datagram Protocol), this is acceptable. When using connection-oriented protocols such as TCP (Transmission Control Protocol), this does not work, since TCP requires all packets belonging to the same connection to arrive at the same host and in proper sequence. Ultimately, the end application must support dealing with individual packets in the different service application entities.

Stateful load balancers, on the other hand, provide the same treatment to all packets within a flow, distributing all the packets to the same target instance. This is needed for most applications.

Load balancers may furthermore allow setting different weights to the service application entities, making it possible to send more traffic to those service application entities that have a higher capacity. Furthermore, load balancers can be implemented entirely in hardware, entirely in software or in a combination of hardware and software. Load balancers that operate on lower layers may be implemented in hardware and those operating in higher layers, which often imply a higher implementation complexity, may be implemented in a combination of hardware and software. Hardware implementations may be more suitable in those cases where higher performance is required.

Software-defined networking (SDN) is an approach for deploying, configuring and managing networks. As opposed to traditional networking, where forwarding and control logic is tightly coupled, software-defined networking introduces abstractions to decouple the forwarding plane from the control plane. When decoupled in such a way, the control plane determines the logic to be applied to the traffic and the forwarding plane, also referred to as data plane, forwards the packets according to such logic. Control and data planes have to communicate using some protocol. OpenFlow is one of the possible ways to realize SDN. OpenFlow is a communication protocol between the control plane and the forwarding plane of the network. The entities responsible for the control plane management are called OpenFlow Controllers, and the entities responsible for the forwarding plane, typically one or more switches, are called OpenFlow Switches or Forwarding Elements.

The OpenFlow protocol is run between the OpenFlow Controllers and the OpenFlow Forwarding Elements or Switches. It allows an OpenFlow Controller to program the Switch's forwarding tables and to receive events and reports from the OpenFlow Switches. An OpenFlow Switch or Forwarding Element has several flow tables forming a pipeline. Every flow table has several flow entries. Furthermore, every entry contains a flow match filter and a set of instructions indicating what to do with a received packet matching the flow match filter.

When packets arrive at the OpenFlow Switch, they traverse the flow tables across the pipeline. Each flow table is scanned for a flow match filter matching the packet. If a match occurs, the instructions associated to the corresponding flow entry are executed. If no match occurs, processing continues on the next flow table in the pipeline. When a packet has traversed the pipeline, or when the flow instructions dictate so, the OpenFlow Switch may send the packet to one or more ports. Network applications are implemented in an OpenFlow Switch by provisioning the different flow tables in the pipeline according to the application logic. The programming is done by the OpenFlow Controller, which is ultimately driven by the network application.
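
By way of a simplified illustration only, the match-and-instructions principle of such a pipeline may be sketched as follows in Python; the sketch deliberately ignores OpenFlow details such as entry priorities, table-miss entries and goto-table instructions, and the field names used are merely illustrative.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional


@dataclass
class FlowEntry:
    match: Callable[[Dict], bool]         # flow match filter
    instructions: Callable[[Dict], None]  # actions executed on a match


@dataclass
class FlowTable:
    entries: List[FlowEntry] = field(default_factory=list)

    def lookup(self, packet: Dict) -> Optional[FlowEntry]:
        for entry in self.entries:
            if entry.match(packet):
                return entry
        return None


def traverse_pipeline(pipeline: List[FlowTable], packet: Dict) -> None:
    """Walk the packet through the ordered flow tables of the pipeline."""
    for table in pipeline:
        entry = table.lookup(packet)
        if entry is not None:
            entry.instructions(packet)   # e.g. rewrite headers, set metadata, output
            if packet.get("output_port") is not None:
                return                   # the instructions dictated an output; stop here
        # no match, or no output yet: continue with the next table in the pipeline
```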

Load balancing entities known in the art are complex networking components that are difficult to implement. The higher the networking level and the more sophisticated the distribution algorithm, the more complex the implementation becomes. Stateful load balancers, in turn, require additional logic to pin down the active sessions and additional memory to retain the flow state. Furthermore, some load balancers have strict performance requirements. For the above reasons load balancers are often implemented as a combination of hardware and software, the software being generally proprietary. This leads to the following problems: Proprietary solutions normally mean high costs, and as hardware is also involved, these load balancers are rather expensive. Furthermore, when an investment has been made in a combination of proprietary hardware and/or software, it is difficult to move to a different platform or benefit from a multivendor ecosystem. A further problem is the long release cycle: A proprietary solution which is partly based on hardware leads to longer release cycles, where it normally takes a long time to introduce new features or corrections.

Furthermore, a further problem can be seen in the fact that a proprietary system usually has its own operation and maintenance routines which may require specific support and training.

Accordingly, a need exists to alleviate at least some of the above-mentioned problems and to improve the operation of load balancers, especially when a weight change for distributing data packet flows amongst a plurality of service application entities occurs.

Summary

This need is met by the features of the independent claims. Further aspects are described in the dependent claims.

According to a first aspect, a method is provided for operating a load balancing entity in a network in which a central control entity controls data packet flows forwarded in the network by forwarding elements, wherein the load balancing entity distributes data packet flows amongst a plurality of service application entities according to a first weight distribution indicating how the data packet flows are distributed amongst the service application entities, which each provide an instance of a service to one or more data packet flows. According to one step of the method, the load balancing entity detects a change from the first weight distribution to a new weight distribution. The load balancing entity furthermore instructs the central control entity to command the forwarding elements to forward new data packet flows to the service application entities in accordance with the new weight distribution. Furthermore, the load balancing entity identifies an affected data packet flow that had been previously forwarded, in accordance with the first weight distribution, to a service application entity different from the one where the affected data packet flow would be forwarded in accordance with the new weight distribution. Furthermore, the central control entity is instructed by the load balancing entity to command the forwarding elements to direct incoming data packets belonging to the affected data packet flow to the service application entity in accordance with the first weight distribution and to direct a copy of at least a part of a first subsequent incoming data packet, belonging to the affected data packet flow, to the central control entity. In particular, the central control entity may in turn forward said copy to the load balancing entity.

With the above-described method it is possible to effectively handle data packet flows which are affected by a change of the weight distribution from the first weight distribution to a new weight distribution. As a copy of a part of an incoming data packet belonging to an affected data packet flow is directed to the load balancing entity, the load balancing entity is able to take the appropriate measures to redirect data packet flows in such a way that the new weight distribution is taken into account while assuring that existing flows are not affected by the weight distribution change.

According to a further aspect, the invention provides a corresponding load balancing entity configured to distribute data packet flows in a network amongst a plurality of service application entities according to a first weight distribution that indicates how the data packet flows are distributed amongst the service application entities, which each provide an instance of a service to one or more data packet flows. The load balancing entity comprises a detector configured to detect a change from the first weight distribution to a new weight distribution. Furthermore, the load balancing entity comprises a processing unit configured to operate as discussed above and in further detail below.

In an embodiment, the processing unit is configured to instruct the central control entity to command the forwarding elements to forward new data packet flows to the service application entities in accordance with the new weight distribution. Furthermore, the processing unit is configured to identify the affected data packet flow, as commented above, and to instruct the central control entity to command the forwarding elements to direct incoming data packets belonging to the affected data packet flow to the service application entity in accordance with the first weight distribution and to direct a copy of at least a part of a first subsequent incoming data packet belonging to the affected data packet flow to the central control entity.

In an embodiment, the processing unit may be configured to instruct the central control entity to command a storing of a learning rule in a learning table of the forwarding elements indicating that data packets belonging to the affected data packet flow are to be forwarded to the service application entity in accordance with the first weight distribution and a copy of at least a part of the first subsequent incoming data packet belonging to the affected data packet flow is to be directed to the central control entity and, from the latter, to the load balancing entity. In this embodiment, the processing unit may further be configured, along with the instructing the central control entity to command the storing of the learning rule, to set an idle timer value, upon which expiry, the forwarding elements remove the learning rule from the learning table.

The load balancing entity may further comprise a receiver configured to receive this copy of at least a part of the first subsequent incoming data packet from the central control entity, and the processing unit may further be configured, in response to receiving the copy, to generate an exception rule and to instruct the central control entity to command a storing of the exception rule in an exception table of each of the forwarding elements indicating that all further incoming data packets of the affected data packet flow are to be forwarded to the service application entity in accordance with the first weight distribution. In this embodiment, the processing unit may further be configured, along with the instructing the central control entity to command the storing of the exception rule, to set an inactivity timer value, upon which expiry, the forwarding elements remove the exception rule from the exception table.

In an embodiment, the processing unit may be configured to instruct the central control entity to command a storing of a weight rule in a weight table of each forwarding element indicating how the new data packet flows are to be forwarded to the service application entities in accordance with the new weight distribution.

In an embodiment, the processing unit may be configured to determine a function rule that, when applied to one or more data packet fields of a data packet flow, assigns a bucket number to the data packet flow from a range of bucket numbers, which allows the data packet flows to be distributed amongst the plurality of service application entities in dependence on the first or new weight distribution, and to instruct the central control entity to command a storing of the function rule in a function table of the forwarding elements indicating that a bucket number assigned by the function rule is added as metadata to all the data packets of one of the data packet flows and is used to identify a service application entity that provides the service to said one data packet flow.

In an embodiment, the processing unit may be configured to generate the weight rules in the weight table by indicating to which service application entity the data packet flow should be forwarded in dependence of the bucket number, wherein the number of bucket numbers assigned to each of the service application entities is proportional to a weight of the corresponding service application entity defined by the weight distribution.

In an embodiment, the processing unit may be configured to identify the affected data packet flow by detecting the bucket numbers for which the service application entity, to which the affected data packet flow with the corresponding bucket number should be forwarded, is changed, and the processing unit may further be configured to instruct the central control entity to command the forwarding elements a storing of the bucket numbers and the assigned service application entities according to the first weight distribution in the learning table.

In an embodiment, the processing unit may be configured to generate the exception rule for the affected data packet flow including a flow identifier identifying the affected data packet flow, the flow identifier including at least a source address and a destination address of the affected data packet flow, and the processing unit may further be configured to instruct the central control entity to command a storing in the exception table of each flow identifier in connection with the exception rule to directly forward the corresponding data packet flow to the service application entity defined by the first weight distribution.

According to a still further aspect, there is provided a computer program product comprising program code to be executed by at least one processor of a load balancing entity, wherein execution of the program code causes the at least one processor to perform steps of the method discussed above with reference to the load balancing entity.

Features mentioned above and features yet to be explained may not only be used in isolation or in combination as explicitly indicated, but also in other combinations. Features and embodiments of the present application may be combined unless explicitly mentioned otherwise.

Brief Description of the Drawings

Various features of embodiments of the present invention will become more apparent when read in conjunction with the accompanying drawings.

Fig. 1 is an architectural overview of a network including a load balancing entity configured to cope with a change of a weight distribution used to distribute data packet flows amongst a plurality of service application entities.

Fig. 2 shows a signaling flow in which the load balancing entity of Fig. 1 asks a central control entity to execute a request in a forwarding element.

Fig. 3 shows a pipeline of forwarding tables which are accessed by a forwarding element in a defined order to determine how to forward a data packet of a data packet flow.

Fig. 4 shows a flowchart including the steps carried out by the load balancing entity in order to react to a change of a weight distribution.

Fig. 5 is a schematic view of a load balancing entity used to control the distribution of the data packet flows amongst a plurality of service application entities.

Detailed Description of Embodiments

In the following, embodiments of the invention will be described in detail with reference to the accompanying drawings. It is to be understood that the following description of embodiments is not to be taken in a limiting sense. The figures are to be regarded as schematic representations, and elements illustrated in the figures are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose become apparent to a person skilled in the art. Any connection or coupling between functional blocks, devices, components or other physical or functional units shown in the drawings or described hereinafter may also be implemented by an indirect connection or coupling. A coupling between components may also be established over a wireless connection, unless explicitly stated otherwise. Functional blocks may be implemented in hardware, firmware, software, or combinations thereof.

The invention describes a load balancing entity capable of load balancing traffic at layer 2 and layer 3. The load balancing entity can furthermore operate in a stateless and stateful mode. The invention especially relates to the stateful mode and the handling of data packet flows occurring during a change of a weight distribution used to distribute data packet flows amongst a plurality of service application entities as will be explained in further detail below.

Fig. 1 shows an architectural view of a software-defined network including a load balancing entity 100. The load balancing entity leverages software-defined networking, making it capable of running on standard OpenFlow Switches and Controllers. Furthermore, it is possible that the load balancing entity is implemented purely in software. In the following, the invention will be described in connection with OpenFlow. However, it should be understood that the invention is not restricted to OpenFlow applications; other software-defined networks and a corresponding protocol may be used. For example, the so-called ForCES uses predefined Logical Functional Blocks (LFBs) which have inputs, outputs and a behavior. LFBs are wired together to make up the application logic. In this respect, the LFBs in ForCES may be functionally equivalent to the tables in OpenFlow.

Data packet flows originating from an access network 30, indicated as client traffic, are forwarded in the software-defined network by forwarding elements 40. The data packet flows received by the forwarding elements are forwarded to the service application entities 50a - 50f. The service application entities 50a - 50f provide a pool of servers, and each service application entity provides an instance of a service to one or more data packet flows. Apart from the exemplary HTTP service commented above, another exemplary service may be one provided by a pool of video streaming servers, wherein each video streaming server uses RTP (UDP over IP) and corresponds to a service application entity providing an instance of the service to one or more data packet flows. Another exemplary service may be a generic service that uses a client/server protocol on top of TCP/IP, wherein the generic service is clustered, the cluster comprising a pool of service application instances, the instances being capable of servicing client requests. Examples of such generic services are database systems, instant messaging (IM) services and online gaming services.

In particular, a database system may be deployed in a cluster, wherein a cluster comprises several database server instances. Any database server instance can accept client connections. Client connections use a proprietary protocol over TCP/IP. This allows remote clients to query the database, the clients not needing to run on the same physical host as the database server instance. Client connections can be load-balanced amongst the different database server instances. Stateful mode is required to preserve established connections and prevent the client/server communication from breaking.

A central control entity 20 controls the different forwarding elements and controls inter alia how a data packet flow received by a forwarding element 40 is forwarded in the network. In the embodiment shown, the forwarding element is an OpenFlow Switch; it can be either a pure or a hybrid OpenFlow Switch and may be implemented in hardware, in software, in firmware, or a combination thereof. The central control entity 20 can be an OpenFlow Controller and, in any case, may be implemented in hardware, in software, in firmware, or a combination thereof. Each forwarding element has one or more ports connected to the access network 30, where the traffic from the clients comes from. These ports are the virtual IP ports (VIPs).

The plurality of service application entities of a single service are logically grouped together into a pool of service application entities. Each service instance, i.e. each service application entity, may be attached to a port of a switch which, in turn, is connected to a port of the forwarding element 40. Optionally, the service application entities could also be directly connected to a port of the forwarding element 40. The forwarding element 40 is connected to the central control entity 20. One possibility for the communication between the forwarding element and the central control entity is OpenFlow. However, as mentioned above, other protocols are possible. The central control entity 20 is furthermore connected to the load balancing entity 100. In the embodiment shown, the connection is obtained via an application programming interface, API, to be used by external network applications to program the underlying forwarding elements 40. The load balancing entity uses the central control entity to program the flow tables of the forwarding elements to perform the load balancing functionality. Furthermore, the load balancing entity has an operation and maintenance interface to an operation and maintenance unit 10 to provision the different operational parameters. The load balancing entity implements the load balancing functionality by maintaining a set of flow tables in the forwarding elements. Some flow tables may be provisioned when this functionality is instantiated, while others may be dynamically provisioned as a result of a change in the distribution of weights and/or network events, as will be explained in further detail below.

As can be deduced from Fig. 2, the load balancing entity 100 does not manage the forwarding element directly, but via the central control entity. Fig. 2 illustrates the signaling exchange between three entities shown in Fig. 1. When the load balancing entity needs to create, modify or remove flow entries on any of the tables of the forwarding element, the load balancing entity does the following:

It prepares a high-level representation of the command and the details such as the contents of the flow entry (step 0). In step 1, the load balancing entity 100 uses the northbound API to send the representation of the command to the control entity 20. The control entity then translates the high-level representation of the command into a request in accordance with the protocol used, e.g. into an OpenFlow request (step 2). In the third step, the controller uses the interface towards the forwarding element 40, e.g. the OpenFlow interface, to send the command to the forwarding element. In step 4, the forwarding element enforces the command and, in step 5, the forwarding element sends a response to the control entity 20.

When the forwarding entity needs to report an event to the load balancing entity, such as the report that a packet for a data packet flow is received, it does the following:

In step 10, the forwarding element 40 determines that it must report an event to the control entity 20. By way of example, this may happen when a data packet is received for which no forwarding commands can be found in the corresponding tables. Furthermore, as will be explained below, the load balancing entity 100 may instruct the forwarding element via the control entity to direct certain packets to the control entity. In step 11, the forwarding element 40 sends the event to the control entity 20 over the corresponding interface, here the OpenFlow interface. In step 12, the control entity 20 determines that the load balancing entity is the recipient of the event and creates a high-level representation of the message. In step 13, the control entity 20 sends the high-level representation of the message to the load balancing entity 100.
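
The exchange of Fig. 2 may be pictured, purely as a non-authoritative sketch, with the following Python stubs; the class and method names (FlowEntryCommand, NorthboundApi.send_command, on_packet_in) are hypothetical placeholders and do not correspond to any particular controller API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class FlowEntryCommand:
    """High-level representation of a flow-table change (step 0 of Fig. 2)."""
    table: str                       # e.g. "Te", "Tf", "Twcl" or "Tb"
    operation: str                   # "create", "modify" or "remove"
    match: Dict[str, object]         # flow match filter fields
    instructions: List[str] = field(default_factory=list)
    idle_timeout: int = 0            # 0 means: no timer set


class NorthboundApi:
    """Hypothetical northbound interface offered by the central control entity."""

    def send_command(self, command: FlowEntryCommand) -> None:
        # Step 1: load balancing entity -> control entity. The control entity
        # translates the command, e.g. into an OpenFlow request (step 2), sends
        # it to the forwarding element (step 3), which enforces it (step 4) and
        # responds (step 5).
        raise NotImplementedError("transport towards the control entity goes here")

    def on_packet_in(self, handler: Callable[[Dict[str, object]], None]) -> None:
        # Steps 10-13: the forwarding element reports an event (e.g. a packet
        # copy) to the control entity, which relays it to the load balancing
        # entity; 'handler' is invoked with the reported packet data.
        raise NotImplementedError("event subscription goes here")
```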

The invention furthermore has several building blocks, which are described in more detail below.

In an embodiment, the load balancing entity 100 defines a hash function, as one or more function rules in a function table 42 shown in Fig. 3, which is used to distribute data packet flows (traffic flows) to a number of buckets. This hash function is carefully chosen such that it distributes service traffic evenly. By applying the law of large numbers, it is possible to guarantee that traffic flows are distributed evenly to the different buckets over time provided that the number of traffic flows is large enough. It should be noted that these buckets are not the same buckets as the action buckets defined by OpenFlow for group tables.

In an embodiment of the invention, the hash function uses the source IP address as hash input, so that traffic coming from a given client IP address will always be assigned to the same bucket. This function is provisioned with the source IP address range to be used: It can be the whole IP address space or it can be limited to a number of sub-networks, which may be the case for a load balancing entity deployed in an operator or service provider network. The clients' IP address range is normally known, so that it can be assumed that the IP address range from which a request for the application of a service is received is limited to a certain number of IP addresses. The requested service may be HTTP traffic, a request for a web server or a media streaming server, or any generic service as commented above.

In another embodiment, the hash function may also be implemented directly in the forwarding element. On software-based forwarding elements, any algorithm can be implemented. On some hardware-based forwarding elements, there may already be some hashing function in place that can be leveraged. Assuming that the hash function returns a number, it can be mapped to the number of buckets using a modulus operation with the number of buckets.
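
As a minimal sketch of such a mapping, assuming a CRC32 hash over the source IP address (any other suitable hash could be used), the bucket assignment could look as follows:

```python
import ipaddress
import zlib


def bucket_for_source_ip(source_ip: str, num_buckets: int = 100) -> int:
    """Map a source IPv4 address to a bucket number in the range 1..num_buckets."""
    ip_as_int = int(ipaddress.ip_address(source_ip))
    digest = zlib.crc32(ip_as_int.to_bytes(4, "big"))
    return (digest % num_buckets) + 1


# packets coming from the same client IP address always map to the same bucket
assert bucket_for_source_ip("192.0.2.11") == bucket_for_source_ip("192.0.2.11")
```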

The load balancing entity 100 defines a weight function, as one or more weight rules in a weight table 44 shown in Fig. 3, which is used to assign a weight to each of the service application entities. The load balancing entity 100 is furthermore responsible for controlling the transfer of the data packet flows to the service application entities 50a - 50f when a change in the weight distribution occurs, e.g. when one of the service application entities is not available for maintenance reasons or when other service application entities are added. To this end, the load balancing entity has a pin-down function which operates when a change in the weight distribution occurs. The goal of this function is to preserve, for established flows only, the original mappings prior to the change, so these flows get pinned down and do not break. For newly established flows, however, the new weight distribution shall apply.

In an embodiment of the invention, the flow pin-down function is realized using one or more learning rules in a learning table 43 shown in Fig. 3, together with one or more exception rules in an exception table 41 shown in Fig. 3. When a change in the weight distribution occurs, the load balancing entity 100 may identify the buckets affected by such a change. For each affected bucket, the load balancing entity 100 creates a trigger which will be fired for all the packets that the hash function assigns to the bucket. The trigger is set to be active for a fixed period of time. This makes the load balancing entity detect all the flows that have been affected by a weight distribution change, and the load balancing entity can take action to preserve or pin down the original service instance, i.e. the original service application entity, so further packets from the same data packet flow match the entry in the exception table mentioned above.

When the load balancing entity 100 is operating in a stateless mode, only the hash function and the weight function are used. However, when the load balancing entity is operating in a stateful mode, the hash function, the weight function and the flow pin-down function are used. Fig. 3 shows an implementation of the above-described functions, which are translated into four flow tables in the forwarding element's pipeline.

From left to right in Fig. 3, the first table 41, table Te, is the exception table of the flow pin-down function. The second table 42, table Tf, is the function table realizing the hash function. The third table 43, named Twcl, is the learning table that realizes the weight change learning timer of the flow pin-down function, and the fourth table 44, named Tb, is the weight table that realizes the weight function. In the following, we will describe the operation of the load balancing entity in both the stateless and the stateful mode.

The static or initial provisioning of the load balancing entity is explained with the help of an example. It is assumed that the load balancing entity 100 is balancing traffic to a pool of service application entities such as the entities 50a - 50f shown in Fig. 1. In the present example it is assumed that four service application entities are used. Furthermore, the load balancing entity 100 is configured to use one hundred buckets. This number of buckets makes the weight distribution easier to understand, as it can be likened to percentages. However, it should be understood that, if a different granularity is needed, the number of buckets can be increased or decreased as needed. Furthermore, it is assumed that the weight distribution to each one of the service application entities is 10 %, 40 %, 20 % and 30 %, respectively. In such a case, the weight function shall assign buckets 1 to 10 to the first instance or first service application entity, 11 to 50 to the second service application entity, 51 to 70 to the third service application entity, and 71 to 100 to the fourth service application entity.
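
Purely as an illustration of this example, the bucket ranges can be derived from the weights as in the following sketch; the helper name bucket_ranges is hypothetical.

```python
from typing import Dict, List, Tuple


def bucket_ranges(weights_percent: List[int], num_buckets: int = 100) -> Dict[int, Tuple[int, int]]:
    """Map each service application entity (1-based index) to an inclusive bucket range."""
    assert sum(weights_percent) == 100
    ranges: Dict[int, Tuple[int, int]] = {}
    next_bucket = 1
    for index, weight in enumerate(weights_percent, start=1):
        count = weight * num_buckets // 100
        ranges[index] = (next_bucket, next_bucket + count - 1)
        next_bucket += count
    return ranges


print(bucket_ranges([10, 40, 20, 30]))
# {1: (1, 10), 2: (11, 50), 3: (51, 70), 4: (71, 100)}
```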

With this configuration the load balancing entity configures the Tb table 44 by creating one hundred flow entries. Each entry has a match filter with a single condition: The value of the meta data field is equal to a single bucket number between 1 and 100. In the example given, the bucket number is stored in the meta data field, the meta data being transient data which are processed when a data packet of a data packet flow passes through the forwarding elements.

Each table entry has a set of instructions associated, which tell the forwarding element 40 to send the data packet to the service application entity corresponding to the bucket number.
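
A simplified, non-authoritative sketch of how the corresponding one hundred Tb entries of this example could be generated is given below; the entry layout and the instruction strings are illustrative stand-ins, not actual OpenFlow flow-mod messages.

```python
def build_weight_table_entries(ranges):
    """ranges: {entity_index: (first_bucket, last_bucket)}, e.g. as computed above."""
    entries = []
    for entity, (first, last) in ranges.items():
        for bucket in range(first, last + 1):
            entries.append({
                "match": {"metadata": bucket},   # single-condition match filter on the bucket number
                "instructions": [f"output_to_service_application_entity_{entity}"],
            })
    return entries


tb_entries = build_weight_table_entries({1: (1, 10), 2: (11, 50), 3: (51, 70), 4: (71, 100)})
assert len(tb_entries) == 100
```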

Depending on the layer the load balancing entity is operating on, the process of sending the data packet to the instance differs. When operating on layer 2, the instructions tell the forwarding element to modify the Ethernet header, particularly the destination MAC address. When operating on layer 3, the instructions tell the forwarding element to modify the Ethernet header, but also the IP header, effectively acting as a router. In the given example, the load balancing entity 100 is also configured to serve client traffic coming from a single class C IP sub-network, e.g. 192.0.2.0/24. This results in having a client IP address space of 256 addresses, ranging from 192.0.2.0 to 192.0.2.255.

With the above setup, the load balancing entity may configure the Tf table 42 by creating 256 entries. Every entry has a match filter with a single condition: The value of the source IP address is equal to a single IP of the range. The associated instruction to each entry is to set the meta data field with a particular bucket number.

In the example above, the first hundred entries may get bucket numbers from 1 to 100, the next hundred entries may get bucket numbers from 1 to 100, and the last 56 entries may get a unique random number between 1 and 100. Other distributions are possible, for example by using bitmasks and source IP address filters, so that the number of flow entries in the table is reduced.

In the following, the stateless operation of the load balancing entity is explained in more detail with the help of the provisioning discussed above. It is assumed that a data packet coming from a client IP address 192.0.2.11 reaches the forwarding element 40. The following processing occurs:

- The forwarding element traverses the pipeline of Fig. 3 for the packet and first evaluates table Te 41. Since Te is empty, the forwarding element moves to the next table in the pipeline, table Tf 42.

- The forwarding element performs a lookup on table Tf 42 and finds a match for the data packet source IP address. It executes the instructions associated to the table entry and sets the meta data field for the data packet to a bucket number equal to 11.

- The forwarding element moves to the next table in the pipeline, table Twcl 43. Since table 43 is empty, the forwarding element moves to the next table in the pipeline, table Tb 44.

- The forwarding element performs a lookup on table Tb 44 and finds a match for the meta data field. It executes the instructions associated to the table entry. These instructions tell the forwarding element to rewrite the data packet headers to address the second service instance or second service application entity and send out the data packet.

- The data packet leaves the forwarding element towards the second service application entity. A simplified sketch of this stateless traversal is given below.
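
The following Python sketch summarizes the stateless traversal just described, with the table contents represented as plain dictionaries; it is illustrative only, and the exception table is keyed here on a simplified source/destination pair instead of the full 5-tuple.

```python
def stateless_forward(packet, te, tf, twcl, tb):
    """Return the service application entity the packet is forwarded to."""
    # table Te: exception table, empty in plain stateless operation
    if (packet["src_ip"], packet["dst_ip"]) in te:
        return te[(packet["src_ip"], packet["dst_ip"])]
    # table Tf: assign the bucket number to the packet as metadata
    packet["metadata"] = tf[packet["src_ip"]]
    # table Twcl: learning table, empty when no weight change is in progress
    if packet["metadata"] in twcl:
        return twcl[packet["metadata"]]
    # table Tb: weight table, maps the bucket number to a service instance
    return tb[packet["metadata"]]


tf = {"192.0.2.11": 11}                                         # source IP -> bucket 11
tb = {bucket: "second service application entity" for bucket in range(11, 51)}
packet = {"src_ip": "192.0.2.11", "dst_ip": "198.51.100.1"}     # destination address is illustrative
assert stateless_forward(packet, {}, tf, {}, tb) == "second service application entity"
```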

In the following, the stateful behavior of the load balancing entity is explained in more detail, based on the example given above for the stateless operation. It is assumed that a data packet flow as the one discussed above with the stateless operation is established and data packets for the data packet flow are being processed by the forwarding element 40. At some point in time, the load balancing entity 100 has to change the weight distribution, for instance as a result of an operation and maintenance action. It is assumed that the weight distribution changes from 10 %, 40 %, 20 % and 30 % to 25 %, 25 %, 20 % and 30 %. That is, whilst the first service application entity was assigned buckets 1 to 10 in accordance with the previous weight distribution, the first service application entity is now assigned buckets 1 to 25 in accordance with the new weight distribution. Whilst the second service application entity was assigned buckets 11 to 50 in accordance with the previous weight distribution, the second service application entity is now assigned buckets 26 to 50 in accordance with the new weight distribution. However, the third service application entity was assigned buckets 51 to 70 in accordance with the previous weight distribution, and is still assigned buckets 51 to 70 in accordance with the new weight distribution. Likewise, the fourth service application entity was assigned buckets 71 to 100 in accordance with the previous weight distribution, and is still assigned buckets 71 to 100 in accordance with the new weight distribution.

The load balancing entity does the following processing:

The load balancing entity 100 identifies which buckets are affected by the new weight distribution. It determines that buckets 11 to 25, which previously were associated to the second service application entity, are now associated to the first service application entity. These buckets with associated data packet flows must be pinned down. For these data packet flows, it has to be assured that an existing data packet flow which started before the change of the weight distribution is not interrupted and is directed to the same service application entity.
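
As an illustration, with the bucket ranges of this example the affected buckets may be determined as in the following sketch; the helper names are hypothetical.

```python
def per_bucket_assignment(ranges):
    """Expand {entity: (first, last)} bucket ranges into a per-bucket mapping."""
    return {bucket: entity
            for entity, (first, last) in ranges.items()
            for bucket in range(first, last + 1)}


def affected_buckets(old_ranges, new_ranges):
    """A bucket is affected when its assigned service application entity changes."""
    old, new = per_bucket_assignment(old_ranges), per_bucket_assignment(new_ranges)
    return sorted(bucket for bucket in old if old[bucket] != new[bucket])


old = {1: (1, 10), 2: (11, 50), 3: (51, 70), 4: (71, 100)}    # 10 %, 40 %, 20 %, 30 %
new = {1: (1, 25), 2: (26, 50), 3: (51, 70), 4: (71, 100)}    # 25 %, 25 %, 20 %, 30 %
assert affected_buckets(old, new) == list(range(11, 26))      # buckets 11 to 25
```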

- The load balancing entity 100 creates new table entries in the Twcl table 43. Each entry has a match filter with a single condition: the value of the metadata field is equal to a single bucket number between 11 and 25. The instruction set, i.e. the actions, associated with each entry contains two instructions: an instruction to send a copy of at least part of the data packet to the central control entity 20, also called a packet-in instruction hereinafter, and a second instruction to forward the data packet to the original second service application entity, as previously done for data packets belonging to this data packet flow. All these table entries are set with a fixed timer, a so-called idle timer, configured such that the forwarding element removes the entries from the Twcl table 43 automatically once the timer expires.

- The load balancing entity rewrites the Tb table 44 to reflect the new weight distribution. This implies that the entries with a metadata match filter indicating a bucket number from 11 to 25 get instructions to send the packet to the first service application entity. These entries for bucket numbers 11 to 25 apply to new data packet flows, not previously treated, that are assigned a bucket number within this range (a sketch of these three steps follows below).
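A minimal sketch of the three steps above is given below: identifying the affected buckets, building the learning (Twcl) entries that pin existing flows to their original entity while copying one packet to the controller, and rebuilding the weight (Tb) table. The dictionary layout, the action names and the idle-timer value are illustrative assumptions.

```python
def bucket_owner(ranges):
    """Expand per-entity bucket ranges into a bucket -> entity map."""
    owner = {}
    for entity, (first, last) in enumerate(ranges, start=1):
        for bucket in range(first, last + 1):
            owner[bucket] = entity
    return owner

old_owner = bucket_owner([(1, 10), (11, 50), (51, 70), (71, 100)])   # 10/40/20/30 %
new_owner = bucket_owner([(1, 25), (26, 50), (51, 70), (71, 100)])   # 25/25/20/30 %

# Step 1: buckets whose owning service application entity changes (11..25 here).
affected = {b: old_owner[b] for b in old_owner if old_owner[b] != new_owner[b]}

# Step 2: one learning entry per affected bucket, auto-expiring via an idle timer.
twcl_table = [
    {"match": {"metadata_bucket": b},
     "actions": ["packet_in_to_controller", f"forward_to_entity_{old_owner[b]}"],
     "idle_timeout_s": 30}                     # assumed timer value
    for b in sorted(affected)
]

# Step 3: rewrite the weight table so that new flows follow the new distribution.
tb_table = [
    {"match": {"metadata_bucket": b},
     "actions": [f"forward_to_entity_{new_owner[b]}"]}
    for b in sorted(new_owner)
]

print(sorted(affected))    # buckets 11..25, all previously owned by entity 2
```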

In the following, it is assumed that a packet from the same data packet flow arrives at the forwarding element. The following processing occurs:

- The forwarding element traverses the pipeline for the packet and first evaluates the Te table 41. Since this table is empty, the forwarding element moves to the next table in the pipeline, the Tf table 42.

- The forwarding element performs a lookup on the Tf table 42 and finds a match for the data packet source IP address. It executes the instructions associated with the table entry and sets the metadata field for the packet to a bucket number equal to 11.

- The forwarding element moves to the next table in the pipeline, the Twcl table 43. The forwarding element performs a lookup on the Twcl table 43 and finds a match for the bucket number in the metadata field. It executes the instructions associated with the table entry: the forwarding element sends the data packet to the original service application entity, i.e. the second service application entity, and sends a copy of at least part of the data packet to the central control entity 20. The central control entity 20 submits this copy to the load balancing entity 100.

- The load balancing entity 100 instructs the central control entity 20 to create a new table entry in the Te table 41. The match filter is set to the 5-tuple of the data packet (source and destination IP addresses, the respective ports and the protocol). The instruction set, i.e. the actions, associated with the table entry contains a single instruction to forward the packet to the original second service application entity (a sketch of the resulting entry follows after these steps).

- The table entry is set with an inactivity timer, so that the forwarding element 40 removes the entry shortly after the data packet flow is terminated.

- The packet leaves the forwarding element 40 towards the second service application entity. No further processing of the pipeline occurs for this data packet.
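The exception (Te) entry created in the steps above can be sketched as follows; the destination address, the port numbers, the action name and the timer value are hypothetical example values, not taken from the description.

```python
def exception_entry_from_packet(packet, original_entity, inactivity_timeout_s=60):
    """Build a Te entry matching the flow's 5-tuple and pinning the flow to
    its original service application entity."""
    return {
        "match": {                                   # 5-tuple match filter
            "src_ip": packet["src_ip"],
            "dst_ip": packet["dst_ip"],
            "src_port": packet["src_port"],
            "dst_port": packet["dst_port"],
            "protocol": packet["protocol"],
        },
        "actions": [f"forward_to_entity_{original_entity}"],
        "idle_timeout_s": inactivity_timeout_s,      # entry removed shortly after the flow ends
    }

# Hypothetical copied packet received by the load balancing entity via packet-in.
copied_packet = {"src_ip": "192.0.2.11", "dst_ip": "198.51.100.1",
                 "src_port": 40000, "dst_port": 80, "protocol": "tcp"}
te_entry = exception_entry_from_packet(copied_packet, original_entity=2)
print(te_entry["match"])
```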

Further data packets arriving at the forwarding element for the same data packet flow receive the following processing:

- The forwarding element traverses the pipeline for the data packet and first evaluates the Te table 41. The forwarding element performs a lookup on table 41, finds a match for the 5-tuple and executes the instruction set associated with the table entry.

- The data packet leaves the forwarding element towards the second service application entity. No further processing of the pipeline occurs for this data packet.

Further data packets arriving at the forwarding element for new data packet flows are handled by the load balancing entity using the new weight distribution stored in the Tb weight table 44, since no match occurs in the Te and Twcl tables 41 and 43.

There is a minor exception to the above for those new data packet flows that arrive at the forwarding element while the Twcl learning table 43 still has entries and whose first data packet is assigned a bucket number that matches an entry in the Twcl table. In such cases the load balancing entity shall assign a service application entity according to the previous weight distribution. However, this only occurs while the idle timer for the entries in the Twcl table 43 has not expired.

The exemplary operation of the load balancing entity, in both the stateless and the stateful modes, has been described above with reference to an exception table 41, a function table 42, a learning table 43 and a weight table 44, and with reference to respective table entries. Each table entry comprises a match filter, or condition, and an associated set of instructions, or actions, to be carried out upon matching the match filter or condition. The match filter and the set of instructions in a table entry are comprised in a rule for the table. That is, the exception table 41 may comprise an exception rule that indicates, for a data packet flow matching the condition, an instruction to forward a data packet to a particular service application entity, and a further exception rule that indicates, for data packet flows not matching any condition, an instruction to explore the function table 42. The function table 42 may comprise a function rule that indicates, for a data packet flow matching the condition, an instruction to include specific metadata in all data packets of the data packet flow, and an instruction to explore the learning table 43 with the specific metadata. The learning table 43 may comprise a learning rule with the specific metadata as condition; for a data packet flow matching this condition, the learning rule indicates an instruction to forward a data packet to a particular service application entity and an instruction to direct a copy of at least part of the data packet towards the load balancing entity via the central control entity. The learning table 43 may comprise a further learning rule that indicates, for data packet flows not matching any condition, an instruction to explore the weight table 44.

The weight table 44 may comprise a weight rule with one or more metadata values as condition and, for a data packet flow matching this condition, the weight rule indicates an instruction to forward a data packet to a particular service application entity.

Fig. 4 summarizes the most important steps carried out by the load balancing entity when a change of the weight distribution of the data packet flows amongst the plurality of service application entities occurs; the detailed steps have been explained above. In step S40 the method starts. In step S41 a change of the weight distribution is detected, or the load balancing entity determines that a change of the weight distribution is necessary, e.g. when one of the service application entities fails, when another service application entity is added, or when one of the service application entities is overloaded and its load should be reduced. In the next step S42, the load balancing entity 100 instructs the central control entity 20 to command the forwarding elements 40 to forward the new data packet flows to the service application entities in accordance with the new weight distribution. Additionally, in step S43 the data packet flows affected by the weight change are identified. As explained above, the load balancing entity 100 may identify which of the buckets are affected by the change of the weight distribution. The data packet flows associated with the identified buckets must then be handled according to the weight distribution applicable before the change. In step S44, the central control entity 20 is instructed by the load balancing entity 100 to command the forwarding elements to direct incoming data packets belonging to an affected data packet flow to the service application entity in accordance with the previous weight distribution and to direct a copy of at least a part of the first subsequent incoming packet belonging to the affected data packet flow to the central control entity. The method ends in step S45, when the load balancing entity receives, via the central control entity, the copy of the first subsequent incoming packet belonging to the affected data packet flow and instructs the central control entity to create a table entry in the exception table 41.

Fig. 5 shows a schematic view of a load balancing entity 100 which can operate as discussed above. The load balancing entity comprises an input/output unit 110 comprising a transmitter 111 and a receiver 112. The input/output unit 110 represents the possibility of the load balancing entity 100 to communicate with other entities inside or outside the network controlled by the central control entity 20. The input/output unit may further be configured to operate as discussed above in connection with Fig. 1. The transmitter 111 provides the possibility to transmit messages or signaling to other entities, and the receiver 112 provides the possibility to receive messages or signaling from other entities. The load balancing entity comprises at least one processing unit 120, which comprises one or more processors and is responsible for the operation of the load balancing entity 100. The processing unit 120 can generate the commands needed to carry out the procedures, discussed above or further below, in which the load balancing entity is involved. A detector 130 is schematically shown, which detects when a change of the weight distribution occurs in distributing data packet flows amongst a plurality of service application entities.
A memory 140 is provided which can store suitable program codes to be executed by the processing unit 120 as well as data required to implement the needed functionalities of the load balancing entity. In particular, the processing unit may be a processing entity or a processing pool, wherein the processing unit, the processing entity or the processing pool may comprise one or more processors. It should be understood that other entities may be provided in the load balancing entity 100, which were omitted in the discussion above for the sake of clarity. Furthermore, the functional features described above in connection with Fig. 5 may be implemented by hardware or software or a combination of hardware and software.

From the above discussion, some particular embodiments provided by the present invention can be derived.

As discussed above, each forwarding element 40 can contain a pipeline with a plurality of tables 41-44 which contain rules for forwarding the data packet flows, wherein at least one of the plurality of tables is accessed at each of the forwarding elements in order to determine how the data packets of an incoming data packet flow are to be forwarded.

Furthermore, the central control entity 20 is instructed to command the forwarding elements 40 to direct incoming data packets belonging to the affected data packet flow. This instruction step may comprise the load balancing entity 100 instructing the central control entity 20 to command a storing of a learning rule in the learning table 43 of the forwarding elements 40, indicating that data packets belonging to the affected data packet flow are to be forwarded to the service application entity in accordance with the first weight distribution and that a copy of at least a part of the first subsequent incoming data packet belonging to the affected data packet flow is to be directed to the central control entity 20 and from the latter to the load balancing entity 100.
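A possible shape of this instruction, seen from the load balancing entity, is sketched below. The `ControllerClient` class, its `install_rule` method, the forwarding element identifier and the timer value are assumptions for illustration only and do not reflect a specific OpenFlow message format.

```python
class ControllerClient:
    """Hypothetical stand-in for the interface towards the central control entity."""

    def install_rule(self, switch, table, match, actions, idle_timeout_s=None):
        # In a real deployment this would translate into a flow-modification
        # command sent by the central control entity to one forwarding element.
        print(f"{switch}: table={table} match={match} "
              f"actions={actions} idle_timeout={idle_timeout_s}")

controller = ControllerClient()
for bucket in range(11, 26):                       # affected buckets from the example above
    controller.install_rule(
        switch="fe-40",                            # assumed forwarding element identifier
        table="Twcl",
        match={"metadata_bucket": bucket},
        actions=["packet_in_to_controller", "forward_to_entity_2"],
        idle_timeout_s=30)                         # assumed idle timer value
```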

In an embodiment, the step of the load balancing entity 100 instructing the central control entity 20 to command the storing of the learning rule may comprise setting an idle timer value, upon whose expiry the forwarding elements remove the learning rule from the learning table 43. The idle timer helps to clear the learning table 43.

The load balancing entity 100 then receives from the central control entity the copy of at least a part of the first subsequent incoming data packet of the affected data packet flow. The load balancing entity 100 may then generate an exception rule and may instruct the central control entity 20 to command the forwarding elements 40 to store the exception rule in an exception table 41. The exception rule can indicate that data packets of an affected flow are to be distributed according to the first weight distribution and not according to the new weight distribution.

Furthermore, the step of the load balancing entity instructing the central control entity to command the forwarding elements to store the exception rule may comprise setting an inactivity timer value, upon whose expiry the forwarding elements remove the exception rule from the exception table.

The inactivity timer helps to remove the entry from the exception table 41 after the data packet flow is terminated.

The step of instructing the central control entity 20 to command the forwarding elements 40 to forward the new data packet flows to the service application entities in accordance with the new weight distribution may comprise the step that the load balancing entity 100 instructs the central control entity 20 to command a storing of a weight rule in a weight table 44 of the forwarding elements 40 indicating how the new data packet flows are to be forwarded to the service application entities in accordance with the new weight distribution.

In the embodiment above, the load balancing entity 100 rewrites the Tb weight table 44 to reflect the new weight distribution.

Furthermore, the load balancing entity 100 may determine a function rule that, when applied to one or more data packet fields of a data packet flow, assigns a bucket number to the data packet flow from a range of bucket numbers, which allows the data packet flows to be distributed amongst the plurality of service application entities in dependence on the first or a new weight distribution. The load balancing entity may furthermore instruct the central control entity 20 to command a storing of the function rule in a function table 42 of the forwarding elements indicating that a bucket number assigned by the function rule is added as metadata to all the data packets of one of the data packet flows and is used to identify a service application entity that provides the service to said one data packet flow.

In the embodiment discussed above, the load balancing entity may configure the Tf function table 42 in which a value of the source IP address field may, by way of example, be associated with a particular bucket number.

Each forwarding element may comprise a pipeline with an exception table 41, a function table 42, a learning table 43 and a weight table 44. The exception table 41 indicates, for a data packet flow, either forwarding a data packet to a particular service application entity or exploring the function table 42, wherein the function table 42 indicates an entry into the learning table 43. The learning table 43 indicates, for a data packet flow, either forwarding a data packet to a particular service application entity together with a copy of at least a part of the data packet to the central control entity, or exploring the weight table 44, wherein the weight table indicates, for a data packet flow, forwarding a data packet to a particular service application entity.

The different tables have been discussed in detail above in connection with Fig. 3, wherein tables 41 and 43 are mainly used in the stateful operation, whereas tables 42 and 44 are used in both stateless and stateful operation.

Furthermore, the function rule may correspond to a hash function selected in such a way that one bucket number of the range of bucket numbers is assigned to each source address of a range of possible source addresses from which the data packet flows are received.

Furthermore, the weight rules in the weight table 44 may be generated by indicating to which service application entity 50a-f the data packet flow should be forwarded in dependence on the bucket number, wherein the number of bucket numbers assigned to each of the service application entities is proportional to a weight of the corresponding service application entity defined by the corresponding weight distribution, i.e. the new or the first weight distribution.

As discussed above by way of example, the 256 IP source addresses were assigned to the different bucket numbers 1 to 100.
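One way the 256 source addresses could be spread over the 100 buckets is sketched below. The concrete mapping is an assumption chosen so that the address 192.0.2.11 from the example receives bucket 11; any hash function with an even spread over the bucket range would serve equally well.

```python
def bucket_for_source(src_ip, num_buckets=100):
    """Map a source IP address to a bucket number in 1..num_buckets."""
    last_octet = int(src_ip.rsplit(".", 1)[1])        # 0..255 for a /24 client range
    return ((last_octet - 1) % num_buckets) + 1

assert bucket_for_source("192.0.2.11") == 11          # matches the example above

# Corresponding Tf (function table) entries: one per client address, each
# setting the bucket number as metadata for all packets of the flow.
tf_table = []
for i in range(256):
    addr = f"192.0.2.{i}"
    tf_table.append({"match": {"src_ip": addr},
                     "actions": [{"set_metadata": {"bucket": bucket_for_source(addr)}}]})
```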

Furthermore, the step of identifying the affected data packet flow may comprise detecting the bucket numbers for which the service application entity, to which the affected data packet flow with the corresponding bucket number should be forwarded, has changed. The step of instructing the central control entity to command the storing of the learning rule in the learning table 43 may then comprise the load balancing entity 100 instructing the central control entity 20 to command a storing, in the learning table 43, of these bucket numbers and the service application entities assigned to them according to the first weight distribution.

Furthermore, the first weight distribution and the new weight distribution can be determined by the load balancing entity 100. The exception table 41 includes an exception rule for each of the affected data packet flows, including a flow identifier identifying the corresponding affected data packet flow. The flow identifier may include at least a source address and a destination address of the corresponding affected data packet flow, and each flow identifier is stored in connection with the exception rule so that data packets of the corresponding data packet flow are directly forwarded to the service application entity defined by the first weight distribution.

As discussed above, the 5-tuple of the packet may be stored in the Te exception table 41.

The above-described invention provides a load balancing solution that has several advantages. First of all, it may be based on standard open technologies, such as OpenFlow, meaning that the solution can be deployed on standard OpenFlow switches and controllers. However, it should be understood that any other standard open technology can be used. This provides the following advantages to network operators: the network operators are free to choose the runtime platform to use, wherein decision criteria such as cost and dimensioning can be incorporated into the selection of the platform. Furthermore, it is possible to benefit from common standard operation and maintenance tools and processes. Additionally, the network operators can evolve the solution without vendor lock-in, meaning that they are not restricted to a certain vendor from which the system is purchased.

Furthermore, it is possible to incorporate the invention in a purely software-based solution so that it can be easily upgraded and it can be extended with custom functionality without the constraints of hardware systems or long vendor deployment cycles.

The stateful functionality, combined with the dynamic change of the weight distribution, allows the realization of interesting use cases such as a gradual soak-in and soak-out of services and an adaptive scalability and dimensioning of the services. Furthermore, it allows traffic engineering and distribution as well as a simplified operation and maintenance of particular service applications in the entities.