Title:
METHOD AND FIRST NETWORK NODE FOR HANDLING PACKETS
Document Type and Number:
WIPO Patent Application WO/2019/096370
Kind Code:
A1
Abstract:
A method and a first network node (110) of a first network (101) for handling packets from a second network node (120) of a second network (102) are disclosed. The first network node (110) manages a plurality of Drop Precedence levels for indicating precedence relating to dropping of packets, wherein each DP level is associated with a respective set of token buckets, wherein each token bucket of the respective set of token buckets for said each DP level is associated with a respective time scale. The first network node (110) receives (A030) a packet from the second network node (120). The first network node (110) further marks (A050) the packet with a least DP level for which each token bucket associated with said least DP level includes a respective number of tokens that is greater than or equal to a size of the received packet. A corresponding computer program (1003) and a computer program carrier (1005) are also disclosed.

Inventors:
NÁDAS SZILVESZTER (HU)
TURÁNYI ZOLTÁN (HU)
VARGA BALÁZS (HU)
Application Number:
PCT/EP2017/079176
Publication Date:
May 23, 2019
Filing Date:
November 14, 2017
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
H04L47/21
Foreign References:
US20150071074A12015-03-12
EP2413542A12012-02-01
EP2234346A12010-09-29
US20070153697A12007-07-05
Other References:
None
Attorney, Agent or Firm:
VALEA AB (SE)
Claims:
CLAIMS

1. A method, performed by a first network node (110) of a first network (101), for handling packets from a second network node (120) of a second network (102), wherein the first network node (110) manages a plurality of Drop Precedence levels for indicating precedence relating to dropping of packets, wherein each DP level is associated with a respective set of token buckets, wherein each token bucket of the respective set of token buckets for said each DP level is associated with a respective time scale, wherein the method comprises:

receiving (A030) a packet from the second network node (120), and marking (A050) the packet with a least DP level for which each token bucket associated with said least DP level includes a respective number of tokens that is greater than or equal to a size of the received packet.

2. The method according to claim 1, wherein the method comprises:

handling (A060) the packet in accordance with the marked DP level.

3. The method according to claim 2, wherein the handling (A060) of the packet comprises forwarding the packet or discarding the packet based on the marked DP level of the packet.

4. The method according to any one of the preceding claims, wherein a respective token bucket rate is assigned for each token bucket associated with said each DP level, wherein the token bucket rate decreases as time scale increases for said each DP level.

5. The method according to any one of the preceding claims, wherein said each DP level is associated with a respective packet pass probability.

6. The method according to the preceding claim, wherein a set of packet pass probabilities are associated with the plurality of DP levels, and the set of packet pass probabilities comprises the respective packet pass probability for said each DP level, wherein a Service Layer Agreement is defined by the set of packet pass probabilities and a plurality of sets of time scales for a plurality of sets of token buckets.

7. The method according to the preceding claim, wherein the method comprises:

configuring (A010) the set of token buckets based on the set of packet pass probabilities and a fixed network capacity.

8. The method according to claim 6, wherein the method comprises:

determining (A020) a required network capacity based on the set of token buckets, the set of packet pass probabilities and the plurality of sets of time scales for the plurality of token buckets.

9. The method according to any one of the preceding claims, wherein the method comprises:

checking (A040) an initial DP level of the received packet, wherein the marking (A050) of the packet comprises marking the packet with said least DP level, wherein said least DP level is greater than or equal to the initial DP level.

10. The method according to any one of the preceding claims, wherein the second network (102) is different from the first network (101).

11. A first network node (110) of a first network (101), the first network node (110) being configured for handling packets from a second network node (120) of a second network (102), wherein the first network node (110) is configured for managing a plurality of Drop Precedence levels for indicating precedence relating to dropping of packets, wherein each DP level is associated with a respective set of token buckets, wherein each token bucket of the respective set of token buckets for said each DP level is associated with a respective time scale, wherein the first network node (110) is configured for:

receiving a packet from a second network node (120), and

marking the packet with a least DP level for which each token bucket associated with said least DP level includes a respective number of tokens that is greater than or equal to a size of the received packet.

12. The first network node (110) according to claim 11, wherein the first network node (110) is configured for handling the packet in accordance with the marked DP level.

13. The first network node (110) according to claim 12, wherein the first network node (110) is configured for handling the packet by forwarding the packet or by discarding the packet based on the marked DP level of the packet.

14. The first network node (110) according to any one of claims 11-13, wherein a respective token bucket rate is assigned for each token bucket associated with said each DP level, wherein the token bucket rate decreases as time scale increases for said each DP level.

15. The first network node (110) according to any one of claims 11-14, wherein said each DP level is associated with a respective packet pass probability.

16. The first network node (110) according to the preceding claim, wherein a set of packet pass probabilities are associated with the plurality of DP levels, and the set of packet pass probabilities comprises the respective packet pass probability for said each DP level, wherein a Service Layer Agreement is defined by the set of packet pass probabilities and a plurality of sets of time scales for a plurality of sets of token buckets.

17. The first network node (110) according to the preceding claim, wherein the first network node (110) is configured for configuring the set of token buckets based on the set of packet pass probabilities and a fixed network capacity.

18. The first network node (110) according to claim 16, wherein the first network node (110) is configured for determining a required network capacity based on the set of token buckets, the set of packet pass probabilities and the plurality of sets of time scales for the plurality of token buckets.

19. The first network node (110) according to any one of claims 11-18, wherein the first network node (110) is configured for checking an initial DP level of the received packet, wherein the first network node (110) further is configured for marking the packet by marking the packet with said least DP level, wherein said least DP level is greater than or equal to the initial DP level.

20. The first network node (110) according to any one of claims 11-19, wherein the second network (102) is different from the first network (101).

21. A computer program (1003), comprising computer readable code units which, when executed on a network node (110), cause the network node (110) to perform the method according to any one of claims 1-10.

22. A carrier (1005) comprising the computer program according to the preceding claim, wherein the carrier (1005) is one of an electronic signal, an optical signal, a radio signal and a computer readable medium.

Description:
METHOD AND FIRST NETWORK NODE FOR HANDLING PACKETS

TECHNICAL FIELD

Embodiments herein relate to management of packets according to a service level agreement for a telecommunication system, such as a system of routers and endpoints, a wireless communication system, a cellular radio system, a computer network system and the like. In particular, a method and a first network node for handling packets from a second network node are disclosed. A corresponding computer program and a computer program carrier are also disclosed.

BACKGROUND

A Service Level Agreement (SLA) is an agreement between a subscriber and a service provider. The agreement specifies service level commitments and related business agreements. The SLA can be set up in a standardized manner, such as according to Metro Ethernet Forum (MEF). SLAs of this kind are successfully used for Ethernet leased lines.

Technical Specification (TS) MEF 10.2, dated 27 October 2009, discloses that an SLA may specify a Committed Information Rate (CIR). Accordingly, as long as packets are sent with a speed and burstiness, e.g. within a Committed Burst Size (CBS), conforming to the CIR, all those packets will be transmitted with a high probability, i.e. approximately 100%. The CBS is a bandwidth profile parameter, which limits the maximum number of bytes available for a burst of service frames that are sent at the User Network Interface (UNI) speed to remain CIR-conformant. Furthermore, the aforementioned TS MEF 10.2 also discloses an Excess Information Rate (EIR). Those packets fitting into the EIR might be transmitted, but there is absolutely no guarantee for this. This kind of SLA is popular for leased Ethernet lines.

Dimensioning capacity of a physical link can be done by ensuring that the physical link has enough capacity to cater for the sum of all CIRs of the SLAs for services using the link. In case of a high number of services, additional gains can sometimes also be achieved by use of statistical multiplexing, e.g. by assuming that only a certain portion of all the services using the link will be active at a particular time period. However, when the number of services is relatively small, e.g. in access/aggregation networks, it may not be possible to achieve the additional gains as explained for the high number of services. So-called bursty traffic sources, e.g. bursty services, refer to sources whose amount of traffic varies heavily between different time frames. For example, when peak air interface capacity increases in a cellular network, a backhaul serving that cellular network runs the risk of becoming a bottleneck. The peak air interface throughput cannot always be reached, as it requires good conditions for transmission over the air. It is instead typical that the air interface provides a smaller air interface capacity than the peak air interface capacity. Dimensioning for these bursty services may be more cumbersome.

Within the field of telecommunication systems, it is well known to use token bucket based traffic marking/shaping in order to manage packets according to a certain SLA. Token buckets have a token bucket rate, which is associated with an allowed bitrate of traffic relating to a service provided under the certain SLA. The allowed bitrate is typically defined in bits per second (bits/s). The token buckets also have a bucket length, which is the maximum amount of tokens that a token bucket can hold. The bucket length represents a time scale of a bitrate measurement of the traffic or, in an equivalent interpretation, a maximum allowed burst size. The bucket length is usually defined in bytes or bits. The bucket length may be converted to a time scale by dividing the bucket length in bits by the token bucket rate in bits/s.

It is well known that the bucket length is associated with the time scale of the bitrate measurements. In telecommunication systems, data, or traffic, that is to be transmitted is put in a buffer. The buffer typically has a given length, e.g. 1-1000 ms. If it is desired to avoid overfilling of the buffer, the time scale of the token bucket must be the same as, or smaller than, the length of the buffer. In this context, the term "in-profile" is often used to indicate that there are enough tokens in the token bucket to put a particular packet in the buffer, thereby allowing the packet to be sent. When an in-profile packet is handled, a number of tokens, corresponding to a size of the particular packet, is removed from the token bucket.
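For readers less familiar with this conventional mechanism, the following minimal sketch illustrates a single token bucket with an in-profile check and token removal. It is an illustration only, not part of the disclosed embodiments; the class, attribute and parameter names are chosen for this example.

```python
import time

class TokenBucket:
    """A classic single token bucket: rate in bits/s, bucket length in bits."""

    def __init__(self, rate_bps, length_bits):
        self.rate = rate_bps            # token refill rate, bits per second
        self.length = length_bits       # maximum number of tokens the bucket can hold
        self.tokens = length_bits       # the bucket starts full
        self.last = time.monotonic()    # time of the last refill

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.length, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def in_profile(self, packet_bits):
        """True if the bucket holds enough tokens to admit a packet of this size."""
        self._refill()
        return self.tokens >= packet_bits

    def consume(self, packet_bits):
        """Remove tokens corresponding to the packet size (after an in-profile check)."""
        self.tokens -= packet_bits

# The time scale represented by the bucket is length / rate; for example, a
# 1 Mbit bucket refilled at 10 Mbit/s corresponds to a 100 ms time scale.
bucket = TokenBucket(rate_bps=10_000_000, length_bits=1_000_000)
if bucket.in_profile(12_000):           # a 1500-byte packet while the bucket is full
    bucket.consume(12_000)
```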

In order to avoid misinterpretation, it shall be noted that in the context of an SLA, the term time scale may sometimes refer to the time scale of measurements. In that sense, the time scale refers to how long the time interval for measuring the average packet loss shall be. Throughout this disclosure, time scale refers to the previously mentioned meaning, i.e. the time scale represented by the bucket length.

However, such token buckets are poorly suited for handling bursty services. A problem is hence how to improve the flexibility of SLAs so that they are also suited for bursty services.

SUMMARY

An object may therefore be to improve the flexibility of SLAs.

According to an aspect, the object is achieved by a method, performed by a first network node of a first network, for handling packets from a second network node of a second network. The second network may be different from the first network. The first network node manages a plurality of Drop Precedence (DP) levels for indicating precedence relating to dropping of packets. Each DP level is associated with a respective set of token buckets. Each token bucket of the respective set of token buckets for said each DP level is associated with a respective time scale. The first network node receives a packet from the second network node. The first network node marks the packet with a least DP level for which each token bucket associated with said least DP level includes a respective number of tokens that is greater than or equal to a size of the received packet.

According to another aspect, the object is achieved by a first network node of a first network, the first network node being configured for handling packets from a second network node of a second network. The second network may be different from the first network. The first network node is configured for managing a plurality of Drop Precedence levels for indicating precedence relating to dropping of packets. Each DP level is associated with a respective set of token buckets. Each token bucket of the respective set of token buckets for said each DP level is associated with a respective time scale. The first network node is configured for receiving a packet from the second network node. Moreover, the first network node is configured for marking the packet with a least DP level for which each token bucket associated with said least DP level includes a respective number of tokens that is greater than or equal to a size of the received packet.

According to further aspects, the object is achieved by a computer program and a computer program carrier corresponding to the aspects above.

Because the first network node marks the packet with said least DP level for which each token bucket associated with said least DP level includes the respective number of tokens that is greater than or equal to the size of the received packet, the first network node may be able to ensure that the second network node cannot send packets at the highest possible rate all the time. The highest possible rate is given by an SLA for a service to which the received packet relates. The SLA provides the first network node with information about the token buckets, such as the token bucket rate, and the respective time scale for each token bucket. For example, when the packet has been marked with said least DP level, all token buckets of the respective set of token buckets associated with said least DP level may be reduced by the size of the received packet. This may mean that, for a certain DP level, one token bucket - which may have the greatest time scale and thus also possibly the lowest token bucket rate - may run out of tokens before any other token bucket of that certain DP level. Accordingly, the DP level of the received packet may have to be increased. Depending on the token bucket rates and time scales, the first network node may according to some embodiments determine a probability that the packet may be transmitted at such an increased DP level.

As a result, the embodiments herein enable packet management beyond "guaranteed" and "no guarantee at all", i.e. a certain probability of dropping packets may be ensured.

BRIEF DESCRIPTION OF THE DRAWINGS

The various aspects of embodiments disclosed herein, including particular features and advantages thereof, will be readily understood from the following detailed description and the accompanying drawings, in which:

Figure 1 is a schematic overview of an exemplifying system in which embodiments herein may be implemented,

Figure 2 is a flowchart illustrating embodiments of the method in the first network node,

Figure 3 is a further overview of some aspects of the embodiments herein,

Figure 4 is a table illustrating drop levels, time scales and associated token buckets according to an exemplifying embodiment,

Figure 5 is an exemplifying diagram illustrating token bucket rate as a function of time scale,

Figure 6 is an overview illustrating packet handling according to some embodiments herein,

Figure 7 is a further exemplifying diagram illustrating token bucket rate as a function of time scale for an exemplifying SLA,

Figure 8 is a still further exemplifying diagram illustrating packet pass probability as a function of time scale,

Figure 9 is a yet further exemplifying diagram illustrating packet pass probability as a function of time scale, and

Figure 10 is a block diagram illustrating embodiments of the first network node.

DETAILED DESCRIPTION

Throughout the following description, similar reference numerals have been used to denote similar features, such as nodes, actions, modules, circuits, parts, items, elements, units or the like, when applicable. In the Figures, features that appear in some embodiments are indicated by dashed lines.

Figure 1 depicts an exemplifying system 100 in which embodiments herein may be implemented. In this example, the system 100 is a telecommunication system, such as a Global System for Mobile communication (GSM) system, a Long Term Evolution (LTE) system, a Universal Mobile Telecommunication System (UMTS), a Worldwide Interoperability for Microwave Access (WiMAX) system, or evolutions thereof or the like.

The system 100 may comprise a first network 101, such as a core network, a backhaul network or the like, and a second network 102, such as a radio access network, a Universal Terrestrial Radio Access Network (UTRAN), an evolved UTRAN or the like. The second network 102 may be different from the first network 101, i.e. in that the second network 102 is not the same network as the first network 101. For example, a core network may be considered different from a radio access network even if owned by the same operator; but in other scenarios, the first and second networks may in fact be the same network. In particular, the second network 102 and the first network 101 may be two different networks that may, or may not, use the same technology, i.e. any one of the aforementioned networks or other networks known in the art. For example, the first and second networks 101, 102 may be different domains.

The system 100 may comprise a first network node 110, such as a router, an endpoint, a radio network node, a forwarding device or the like, and a second network node 120, such as a router, an endpoint, a radio network node, a forwarding device or the like.

Typically, the first network 101 comprises the first network node 110 and the second network 102 comprises the second network node 120. The first network 101 may comprise one or more further network nodes 111, 112 (only a few nodes denoted with reference numerals for reasons of simplicity). Likewise, the second network 102 may comprise one or more further network nodes 121 (only one shown for reasons of simplicity).

Furthermore, Figure 1 illustrates a site 130, which may be a client device, server device or any other perceivable recipient of traffic from the second network node 120.

Figure 2 illustrates an exemplifying method according to embodiments herein when implemented in the system 100 of Figure 1. The first network node 110 performs a method for handling packets from the second network node 120 of the second network 102.

The first network node 110 manages a plurality of Drop Precedence (DP) levels for indicating precedence relating to dropping of packets. Each DP level is associated with a respective set of token buckets. Each token bucket of the respective set of token buckets for said each DP level is associated with a respective time scale. Furthermore, each token bucket of the respective set of token buckets for said each DP level may be associated with a respective maximum bitrate, or token bucket rate. The respective time scale for said each DP level and/or the respective maximum bitrate for said each DP level may be given by an SLA, i.e. a flexible SLA to be used with the embodiments herein. A set of time scales comprises the respective time scale for said each token bucket associated with said each DP level. A set of token bucket rates may comprise a respective token bucket rate for, e.g. assigned to, each token bucket associated with said each DP level.

As an example, the plurality of DP levels may comprise DP = 1, whose packets are dropped last, DP = 2, whose packets are dropped second last, DP = 3, whose packets are dropped third last, etc. For each DP level, there may thus be several time scales, e.g. 100 ms, 1 s, 1 min, 1 hour. There is one token bucket for each combination of the respective time scale and DP level. The token bucket rate of these token buckets may decrease as the time scale increases for the same DP level. The length of the bucket may increase as the time scale increases for the same DP level.

Furthermore, in order to set up some definitions that may be used in the following description, the token bucket rates of the set of token bucket rates may decrease as time scale increases for said each DP level, i.e. for one and the same DP level. Expressed differently, the set of token bucket rates of the respective set of token buckets for said each DP level may be decreasing for an increase among the set of time scales of said each DP level.
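As an illustration of this structure, the sketch below builds one token bucket state per combination of DP level and time scale. The time scales follow the example above; the rate values are hypothetical placeholders (the example tables of rates are not reproduced here) and are only chosen so that, for each DP level, the rate decreases and the bucket length increases as the time scale increases.

```python
# Time scales from the example above: 100 ms, 1 s, 1 min, 1 hour (in seconds).
TIME_SCALES_S = [0.1, 1.0, 60.0, 3600.0]

# Hypothetical token bucket rates in bits/s, one list per DP level, ordered by
# increasing time scale. These numbers are placeholders for illustration only.
RATES_BPS = {
    1: [10e6, 8e6, 4e6, 2e6],    # DP = 1: packets dropped last
    2: [20e6, 12e6, 6e6, 3e6],   # DP = 2: packets dropped second last
    3: [30e6, 16e6, 8e6, 4e6],   # DP = 3: packets dropped third last
}

def build_buckets(rates_bps=RATES_BPS, time_scales=TIME_SCALES_S):
    """Create the respective set of token buckets for each DP level.

    Each bucket's length is its token bucket rate multiplied by its time scale,
    so the length grows with the time scale for one and the same DP level.
    """
    return {
        dp: [{"rate": rate,             # token bucket rate in bits/s
              "length": rate * ts,      # bucket length in bits
              "tokens": rate * ts}      # buckets start full
             for rate, ts in zip(rates, time_scales)]
        for dp, rates in rates_bps.items()
    }
```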

One or more of the following actions may be performed in any suitable order.

Typically, at least one of action A010 and action A020 below may be performed.

With at least these actions, said each DP level may be associated with a respective packet pass probability. Hence, a set of packet pass probabilities may be associated with the plurality of DP levels, and the set of packet pass probabilities comprises the respective packet pass probability for said each DP level.

Furthermore, with at least these actions, at least two services, with identical or different SLAs, are carried on one and the same physical link with a physical capacity, i.e. network capacity. The SLAs for said at least two services are known in terms of the set of packet pass probabilities, time scales and DP levels.

In some cases, such as in action A010, the set of token buckets for a given DP level, e.g. relating to a first one of said at least two services, may be configured to achieve, and ensure, the set of packet pass probabilities while a fixed network capacity is assumed. In this manner, it may be established what will happen to received packets at other DP levels, i.e. relating to a second one of said at least two services.

In other cases, the set of packet pass probabilities may be given by the SLA, i.e. the aforementioned flexible SLA. The flexible SLA may further define a plurality of sets of time scales for a plurality of sets of token buckets. In these cases, action A020 may be performed in order to determine the network capacity required to be able to offer the SLA with the set of packet pass probabilities.

Action A010

As above, in some embodiments, it may be desired to determine, or configure, the set of token buckets in view of the fixed network capacity. That is, one may wish to find out what SLAs can be offered with the fixed network capacity.

Therefore, the first network node 110 may configure the set of token buckets based on a set of packet pass probabilities and the fixed network capacity. In this manner, the first network node 110 may thus assign the set of token bucket rates.

Expressed differently, the set of token buckets for a certain DP level are configured to ensure the packet pass probabilities, while it is assumed that the fixed network capacity is shared among SLAs having different DP levels.

In this manner, a dimensioning of the token buckets is performed such as to ensure the packet pass probabilities for a given DP while a mix of SLA flows and the fixed network capacity are assumed.

Action A020

As above, in some embodiments, it may be desired to determine the required network capacity given a certain SLA. That is, one may wish to find out what network capacity is required in order to offer the certain SLA.

Therefore, the first network node 110 may determine a required network capacity based on the set of token buckets, the set of packet pass probabilities and the plurality of sets of time scales for the plurality of token buckets.

This may mean that the token bucket rates, time scales and packet pass probabilities may be fixed, i.e. as given by the certain SLA. The first network node 110 may then determine the required network capacity, i.e. physical network capacity.

Action A030

The second network node 120 may send a packet relating to a service for which a given SLA applies. Since the given SLA is known to the second network node 120, the second network node 120 may or may not adapt its transmission rate of traffic data, e.g. the bit rate relating to transmission of packets.

At any rate, the first network node 110 receives the packet from the second network node 120, i.e. any packet of the traffic data transmitted by the second network node 120.

Action A040

Before applying the given SLA applicable in the first network 101 for packets from the second network node 120, i.e. relating to a service for which the given SLA applies, the first network node 110 may check an initial DP level of the received packet. The initial DP level may be used in action A050 below.

Action A050

Hence, when applying the given SLA, the first network node 110 marks the packet with a least DP level for which each token bucket associated with said least DP level includes a respective number of tokens that is greater than or equal to a size of the received packet. This may mean that the first network node 110 considers all time scales for said least DP level before marking the packet with said least DP level. Hence, if even a single token bucket for a particular DP level does not have enough tokens to cater for the received packet, the first network node 110 increases the DP level by one and checks the token buckets corresponding to the increased DP level to determine whether or not all of these token buckets can cater for the received packet.

Furthermore, when action A040 has been performed, the first network node 110 may mark the packet while ensuring that said least DP level is greater than or equal to the initial DP level. Expressed differently, the marking A050 of the packet may comprise marking the packet with said least DP level, wherein said least DP level may be greater than or equal to the initial DP level.

In this manner, the initial DP level of the incoming packet may limit the minimum DP level that the received packet may be assigned. As an example, an incoming packet may be marked with a DP level greater than one. This may mean that the outgoing DP level of that packet, after action A050 has been performed, may preferably never be less than the initial DP level of the received packet. This may be used to preserve lower drop precedence level buckets for other packets, which thus will be prioritized over those packets with high DP levels.

Furthermore, the first network node 110 may reduce the set of token buckets associated with said least DP level by an amount equal to the size of the received packet.
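A minimal sketch of this marking procedure is shown below. It operates on bucket dictionaries of the kind built in the earlier sketch; token refill is omitted for brevity, and the function name and the best-effort fallback are illustrative assumptions rather than part of the claimed method.

```python
def mark_packet(buckets, packet_bits, initial_dp=1, best_effort_dp=None):
    """Mark a received packet with the least DP level that can admit it.

    `buckets` maps DP level -> list of bucket dicts with a "tokens" counter.
    Only DP levels greater than or equal to the packet's initial DP level are
    considered (actions A040/A050). When a DP level is found whose buckets on
    every time scale hold enough tokens, all of those buckets are reduced by
    the packet size. If no DP level has room, a best-effort/drop marking is
    returned instead.
    """
    for dp in sorted(level for level in buckets if level >= initial_dp):
        if all(bucket["tokens"] >= packet_bits for bucket in buckets[dp]):
            for bucket in buckets[dp]:
                bucket["tokens"] -= packet_bits   # reduce every bucket of this DP level
            return dp
    return best_effort_dp

# Example usage with two DP levels and two time scales (hypothetical values).
buckets = {
    1: [{"tokens": 125_000 * 8}, {"tokens": 500_000 * 8}],    # DP = 1
    2: [{"tokens": 250_000 * 8}, {"tokens": 1_000_000 * 8}],  # DP = 2
}
print(mark_packet(buckets, packet_bits=12_000, initial_dp=1))  # -> 1
```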

Action A060

The first network node 110 may handle the packet in accordance with the marked DP level.

As an example, the first network node 110 may handle the packet by forwarding the packet or by discarding the packet based on the marked DP level of the packet, i.e. said least DP level. As an alternative, the first network node 110 may put the packet on hold, e.g. store the packet and wait for some time and then make an attempt to send the packet.

Because a flexible SLA used with the embodiments herein may define a set of time scales for each DP level, high token bucket rates on short time scales allow bursts of traffic to be transmitted. However, this may only be allowed when there are enough tokens in all token buckets associated with that DP level. Therefore, in order to save tokens for bursts, there is an incentive for the second network node 120 to transmit with lower bit rates, or even at higher DP levels. In this manner, tokens may be saved such that the bursts may be handled at the desired DP level.

Moreover, according to at least some embodiments, the set of packet pass probabilities provides verifiable guarantees, not only "always guaranteed" or "no guarantees at all", but rather probability values there between.

Keeping the above guarantees enables a statistical multiplexing gain for the transport operator providing services using this SLA. This may e.g. mean that there may be a certain chance, or probability, of reaching an air interface peak rate, even when the link capacity is not much higher than the peak rate of a single air interface among a set of air interfaces that may be aggregated towards the first network node 110. In general, the embodiments herein may allow for reaching high rates, at least occasionally, even when a number of network nodes in the second network 102, such as the second network node 120 and other network nodes, are active all the time, or at least partially simultaneously.

Advantageously, the embodiments herein may co-exist with today’s less flexible SLAs. For the existing SLA, the embodiments herein may apply one time scale instead of a plurality of time scales for a particular DP level.

With reference to Figure 3, use of the embodiments herein together with dropping is illustrated.

Initially, in an action 301, the first network node 110, such as an edge node, performs the packet marking as described with reference to Figure 2 above.

Next, in an action 302, a further network node, within the first network 101, may perform dropping depending on the marking, e.g. similarly to action A060 above. The dropping may need to be performed due to a bottleneck on the physical link carrying the packets.

Furthermore, in an action 303, yet another network node, also within the first network 101, may similarly perform dropping.

In this fashion, the set of packet pass probabilities may be enforced at nodes within the first network 101.

In an example, as shown in Figure 4, token buckets for three time scales and three DP levels are illustrated. Further time scales may exist, but only three are shown, i.e. 100 ms (t1), 1 s (t2) and 1 h. In this example, the length L of each token bucket is the bucket rate multiplied by the given time scale. A length L1,t1 of a token bucket for time scale t1 and DP level = 1 may be R1,100 ms * t1, a length L2,t1 of a token bucket for time scale t1 and DP level = 2 may be R2,100 ms * t1, a length L3,t1 of a token bucket for time scale t1 and DP level = 3 may be R3,100 ms * t1, a length L1,t2 of a token bucket for time scale t2 and DP level = 1 may be R1,1 s * t2, a length L2,t2 of a token bucket for time scale t2 and DP level = 2 may be R2,1 s * t2, and a length L3,t2 of a token bucket for time scale t2 and DP level = 3 may be R3,1 s * t2.

Referring to Figure 4, a high rate R1,100 ms is guaranteed for 100 ms. But if the second network node 120 maintains that high rate for a longer time, some of its packets will be marked with higher DP levels. Consequently, some of those packets will not have the smallest packet loss guarantee anymore.

Figure 5 depicts exemplifying token bucket rates. As mentioned before, the token bucket rates decrease as the time scale increases. The decreasing of the token bucket rates may be linear, non-linear or according to some function as deemed appropriate for any particular use of the embodiments herein. It may be noted that RTT is an abbreviation for Round Trip Time. The RTT may typically be used as time scale of the buffer, in which packets to be sent are put.

Table 1 below lists exemplifying token bucket rates and Table 2 below lists example bucket lengths for these token buckets. The example bucket lengths are calculated as the bucket rate multiplied by the time scale.

Table 1: example token bucket rates (in Megabits per second)

Table 2: example token bucket lengths

For the "100 ms" column above and DP = 1: 125 kbyte = 10 Mbps * 100 ms / (8 bits/byte) / (1000 ms/s) * (1000 kbyte/Mbyte), and so on for the further DP levels.
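For clarity, this bucket length calculation can be reproduced directly; the snippet below only uses the values quoted above (10 Mbps and 100 ms for DP = 1).

```python
# Bucket length = token bucket rate x time scale, expressed in kbyte.
rate_bps = 10_000_000        # 10 Mbps token bucket rate for DP = 1
time_scale_s = 0.1           # 100 ms time scale
length_bits = rate_bps * time_scale_s
length_kbyte = length_bits / 8 / 1000
print(length_kbyte)          # 125.0 kbyte, the bucket length for this bucket
```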

Figure 6 depicts marking of the incoming packets and falling back to higher DP levels in the case when the packet cannot fit into the buckets of a certain DP level. At an end of the handling of the packet, it may be that the packet is dropped or transmitted with best effort only.

Figure 7 illustrates an exemplifying SLA by means of a diagram. In this example, the SLA defines six different time scales where packet pass probability for DP level = 1 is 100 % (solid line), packet pass probability for DP level = 2 is 90% (dashed line) and packet pass probability for DP level = 3 is 50% (dash-dotted line).

For DP = 1 (100%), the token bucket rate is 10 Mbps. Assume that there are 10 leased lines, e.g. 10 services sharing a common physical line. The leased lines are offered under the exemplifying SLA, which would mean that 10 * 10 Mbps = 100 Mbps or more would be required to offer 100% packet pass probability.

Figure 8 and Figure 9 illustrate the probability (Pr) that the packet drop is too high, i.e. Pr(loss > allowed packet loss), for two different usage examples with the SLA configured as shown in Figure 7. The probability that the packet drop is too high may be seen as roughly one minus the packet pass probability. With these usage examples, it is also assumed that there are 10 leased lines. In Figure 8, the link capacity, or network capacity, is 130 Mbps and in Figure 9, the link capacity is 140 Mbps. In Figure 8, the dash-dotted line disadvantageously rises above 50% at a time scale equal to two, which means that the link capacity of 130 Mbps is not enough to cater for the exemplifying SLA's DP level = 3. Also, the link capacity of 130 Mbps is not enough to cater for the exemplifying SLA's DP level = 2, since the dashed line rises above 10% at a time scale equal to one.

However, as shown in Figure 9, a link capacity of 140 Mbps, i.e. an additional 40 Mbps as compared to the 100 Mbps required as above, would be able to double (see Figure 7) the service for all leased lines with 90% probability and triple the service with 50% probability. This is because the dash-dotted line never rises above 50% in Figure 9 and the dashed line never rises above 10%.

With reference to Figure 10, a schematic block diagram of embodiments of the first network node 110 of Figure 1 is shown.

The first network node 110 may comprise a processing module 1001, such as a means for performing the methods described herein. The means may be embodied in the form of one or more hardware modules and/or one or more software modules. The term "module" may thus refer to a circuit, a software block or the like according to various embodiments as described below.

The first network node 110 may further comprise a memory 1002. The memory may comprise, such as contain or store, instructions, e.g. in the form of a computer program 1003, which may comprise computer readable code units.

According to some embodiments herein, the first network node 110 and/or the processing module 1001 comprises a processing circuit 1004 as an exemplifying hardware module, which may comprise one or more processors. Accordingly, the processing module 1001 may be embodied in the form of, or "realized by", the processing circuit 1004. The instructions may be executable by the processing circuit 1004, whereby the first network node 110 is operative to perform the methods of Figure 2. As another example, the instructions, when executed by the first network node 110 and/or the processing circuit 1004, may cause the first network node 110 to perform the method according to Figure 2.

In view of the above, in one example, there is provided a first network node 110 of a first network 101 for handling packets from a second network node 120 of a second network 102. As mentioned, the second network 102 may be different from the first network 101 or may be the same network, wherein the first network node 110 manages a plurality of Drop Precedence levels for indicating precedence relating to dropping of packets, wherein each DP level is associated with a respective set of token buckets, wherein each token bucket of the respective set of token buckets for said each DP level is associated with a respective time scale. Again, the memory 1002 contains the instructions executable by said processing circuit 1004 whereby the first network node 110 is operative for:

receiving a packet from the second network node 120, and

marking the packet with a least DP level for which each token bucket associated with said least DP level includes a respective number of tokens that is greater than or equal to a size of the received packet.

Figure 10 further illustrates a carrier 1005, or program carrier, which comprises the computer program 1003 as described directly above. The carrier 1005 may be one of an electronic signal, an optical signal, a radio signal and a computer readable medium.

In some embodiments, the first network node 110 and/or the processing module 1001 may comprise one or more of a receiving module 1010, a marking module 1020, a handling module 1030, a configuring module 1040, a determining module 1050, and a checking module 1060 as exemplifying hardware modules. Advantageously, the first network node 110 and/or the processing module 1001 may also comprise a sending module (not illustrated) for forwarding a received packet once marked with a least DP level. The term "module" may refer to a circuit when the term "module" refers to a hardware module. In other examples, one or more of the aforementioned exemplifying hardware modules may be implemented as one or more software modules.

Moreover, the first network node 110 and/or the processing module 1001 comprises an Input/Output unit 1006, which may be exemplified by the receiving module and/or the sending module when applicable.

Accordingly, the first network node 110 is configured for handling packets from the second network node 120 of the second network 102. As mentioned, the second network 102 may be different from the first network 101 or may be the same network. The first network node 110 is configured for managing a plurality of Drop Precedence levels for indicating precedence relating to dropping of packets. Each DP level is associated with a respective set of token buckets. Each token bucket of the respective set of token buckets for said each DP level is associated with a respective time scale.

Therefore, according to the various embodiments described above, the first network node 110 and/or the processing module 1001 and/or the receiving module 1010 is configured for and/or the instructions cause the first network node 110 to be operative for receiving a packet from the second network node 120.

The first network node 110 and/or the processing module 1001 and/or the marking module 1020 is configured for and/or the instructions cause the first network node 110 to be operative for marking the packet with a least DP level for which each token bucket associated with said least DP level includes a respective number of tokens that is greater than or equal to a size of the received packet.

The first network node 110 and/or the processing module 1001 and/or the handling module 1030 may be configured for and/or the instructions may cause the first network node 110 to be operative for handling the packet in accordance with the marked DP level.

The first network node 110 and/or the processing module 1001 and/or the handling module 1030 may be configured for and/or the instructions may cause the first network node 110 to be operative for handling the packet by forwarding the packet or by discarding the packet based on the marked DP level of the packet. In particular, and where applicable, the sending module (not illustrated) may be configured for forwarding the packet.

A respective token bucket rate may be assigned for each token bucket associated with said each DP level. The token bucket rate may decrease as time scale may increase for said each DP level.

Said each DP level may be associated with a respective packet pass probability.

A set of packet pass probabilities may be associated with the plurality of DP levels, and the set of packet pass probabilities may comprise the respective packet pass probability for said each DP level.

The first network node 110 and/or the processing module 1001 and/or the configuring module 1040 may be configured for and/or the instructions may cause the first network node 110 to be operative for configuring the set of token buckets based on the set of packet pass probabilities and a fixed network capacity.

The first network node 110 and/or the processing module 1001 and/or the determining module 1050 may be configured for and/or the instructions may cause the first network node 110 to be operative for determining a required network capacity based on at least one of the set of token buckets, the set of packet pass probabilities and the plurality of sets of time scales for the plurality of token buckets.

In some embodiments, the first network node 110 and/or the processing module 1001 and/or the checking module 1060 may be configured for and/or the instructions may cause the first network node 110 to be operative for checking an initial DP level of the received packet. The first network node 110 and/or the processing module 1001 and/or the marking module 1020, or another marking module (not shown), may be configured for and/or the instructions may cause the first network node 110 to be operative for marking the packet by marking the packet with said least DP level. Said least DP level may be greater than or equal to the initial DP level.

As used herein, the term "node", or "network node", may refer to one or more physical entities, such as devices, apparatuses, computers, servers or the like. This may mean that embodiments herein may be implemented in one physical entity. Alternatively, the embodiments herein may be implemented in a plurality of physical entities, such as an arrangement comprising said one or more physical entities, i.e. the embodiments may be implemented in a distributed manner, such as on a cloud system, which may comprise a set of server machines.

As used herein, the term "module" may refer to one or more functional modules, each of which may be implemented as one or more hardware modules and/or one or more software modules and/or a combined software/hardware module in a node. In some examples, the module may represent a functional unit realized as software and/or hardware of the node.

As used herein, the term "computer program carrier", "program carrier", or "carrier", may refer to one of an electronic signal, an optical signal, a radio signal, and a computer readable medium. In some examples, the computer program carrier may exclude transitory, propagating signals, such as the electronic, optical and/or radio signal. Thus, in these examples, the computer program carrier may be a non-transitory carrier, such as a non-transitory computer readable medium.

As used herein, the term "processing module" may include one or more hardware modules, one or more software modules or a combination thereof. Any such module, be it a hardware, software or a combined hardware-software module, may be a determining means, estimating means, capturing means, associating means, comparing means, identification means, selecting means, receiving means, sending means or the like as disclosed herein. As an example, the expression "means" may be a module corresponding to the modules listed above in conjunction with the Figures.

As used herein, the term "software module" may refer to a software application, a Dynamic Link Library (DLL), a software component, a software object, an object according to Component Object Model (COM), a software function, a software engine, an executable binary software file or the like.

The terms "processing module" or "processing circuit" may herein encompass a processing unit, comprising e.g. one or more processors, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or the like. The processing circuit or the like may comprise one or more processor kernels.

As used herein, the expression "configured to/for" may mean that a processing circuit is configured to, such as adapted to or operative to, by means of software configuration and/or hardware configuration, perform one or more of the actions described herein.

As used herein, the term "action" may refer to an action, a step, an operation, a response, a reaction, an activity or the like. It shall be noted that an action herein may be split into two or more sub-actions as applicable. Moreover, also as applicable, it shall be noted that two or more of the actions described herein may be merged into a single action.

As used herein, the term "memory" may refer to a hard disk, a magnetic storage medium, a portable computer diskette or disc, flash memory, random access memory (RAM) or the like. Furthermore, the term "memory" may refer to an internal register memory of a processor or the like.

As used herein, the term "computer readable medium" may be a Universal Serial Bus (USB) memory, a DVD-disc, a Blu-ray disc, a software module that is received as a stream of data, a Flash memory, a hard drive, a memory card, such as a MemoryStick, a Multimedia Card (MMC), Secure Digital (SD) card, etc. One or more of the aforementioned examples of computer readable medium may be provided as one or more computer program products.

As used herein, the term "computer readable code units" may be text of a computer program, parts of or an entire binary file representing a computer program in a compiled format or anything there between.

As used herein, the expressions "transmit" and "send" are considered to be interchangeable. These expressions include transmission by broadcasting, uni-casting, group-casting and the like. In this context, a transmission by broadcasting may be received and decoded by any authorized device within range. In case of uni-casting, one specifically addressed device may receive and decode the transmission. In case of group-casting, a group of specifically addressed devices may receive and decode the transmission.

As used herein, the terms "number" and/or "value" may be any kind of digit, such as binary, real, imaginary or rational number or the like. Moreover, "number" and/or "value" may be one or more characters, such as a letter or a string of letters. "Number" and/or "value" may also be represented by a string of bits, i.e. zeros and/or ones.

As used herein, the terms "first", "second", "third" etc. may have been used merely to distinguish features, apparatuses, elements, units, or the like from one another unless otherwise evident from the context.

As used herein, the term "subsequent action" may refer to that one action is performed after a preceding action, while additional actions may or may not be performed before said one action, but after the preceding action.

As used herein, the term "set of" may refer to one or more of something. E.g. a set of devices may refer to one or more devices, a set of parameters may refer to one or more parameters or the like according to the embodiments herein.

As used herein, the expression "in some embodiments" has been used to indicate that the features of the embodiment described may be combined with any other embodiment disclosed herein.

Even though embodiments of the various aspects have been described, many different alterations, modifications and the like thereof will become apparent for those skilled in the art. The described embodiments are therefore not intended to limit the scope of the present disclosure.