Title:
METHOD FOR CONTROLLING A FLOW IN A PACKET SWITCHING NETWORK
Document Type and Number:
WIPO Patent Application WO/2010/084034
Kind Code:
A1
Abstract:
The invention is related to the field of flow control between two computing nodes over a packet switching network. Furthermore, the invention concerns a method for controlling data flow between a sending node SN and a receiving node RN over a packet switching network, data being sent with a current data rate SR onto a protocol-specific buffer BU of the receiving node RN, an application AP reading data stored in the buffer BU at a playback rate SPR. According to the invention, it involves the following steps: - Notifying by the sending node SN to the receiving node RN about its maximum sending rate SRMax; - Determining by the receiving node RN a desired sending rate DSR value for the sending node SN from the playback rate SPR value; - When the desired sending rate DSR value is significantly different from the current data rate SR value, notifying the desired sending rate DSR value by the receiving node RN to the sending node SN.

Inventors:
SIEMENS EDUARD (FR)
EGGERT DANIEL (FR)
Application Number:
PCT/EP2010/050081
Publication Date:
July 29, 2010
Filing Date:
January 06, 2010
Assignee:
THOMSON LICENSING (FR)
SIEMENS EDUARD (FR)
EGGERT DANIEL (FR)
International Classes:
H04J3/06; H04L47/6275; H04L12/56; H04L47/2416; H04L47/30; H04N7/32
Other References:
CHIA-HUI WANG, JAN-MING HO, RAY-I CHANG, SHUN-CHIN HSU: "A Control-Theoretic Mechanism for Rate-based Flow Control of Real-time Multimedia Communication", MULTIMEDIA, INTERNET, VIDEO TECHNOLOGIES 2001 (MIV 2001) WSES/IEEE INTERNATIONAL MULTI-CONFERENCE, September 2001 (2001-09-01), XP002530450, Retrieved from the Internet [retrieved on 20090603]
YOUN-SIK HONG ET AL: "A Cost Effective Rate Control for Streaming Video Dedicated to Wireless Handheld Devices", MULTIMEDIA AND UBIQUITOUS ENGINEERING, 2008. MUE 2008. INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 24 April 2008 (2008-04-24), pages 537 - 542, XP031263747, ISBN: 978-0-7695-3134-2
Attorney, Agent or Firm:
RUELLAN, Brigitte (1-5 rue Jeanne d'Arc, Issy Les Moulineaux, FR)
Claims:
CLAIMS

1. Method for controlling data flow between a sending node (SN) and a receiving node (RN) over a packet switching network, data being sent with a current data rate (SR) onto a protocol-specific buffer (BU) of the receiving node (RN), an application (AP) reading data stored in the buffer (BU) at a playback rate (SPR), the application (AP) acquiring the data from the buffer (BU) in portions also named "chunks" having a constant size (chunkSize), a round trip time (RTT) being an elapsed time between an issue of a first packet by the sending node (SN) to the receiving node (RN) and a reception by the sending node (SN) of a second packet issued by the receiving node (RN) immediately after receiving said first packet, said method involving the following steps:

- Notifying by the sending node (SN) to the receiving node (RN) about its maximum sending rate (SRMax);

- Determining by the receiving node (RN) a desired sending rate (DSR) value for the sending node (SN) from playback rate (SPR) value, said determination being triggered by a result of a test on the occupancy of the buffer (BU), said test being conducted periodically and consisting in determining if the buffer occupancy is lower than a low threshold (LTHR) value or greater than a high threshold (HTHR) value;

- when the desired sending rate (DSR) value is significantly different from the current data rate (SR) value, notifying the desired sending rate (DSR) value by the receiving node (RN) to the sending node (SN), characterized in that the low threshold (LTHR) value and the high threshold (HTHR) value are dynamically assigned and in that the low threshold (LTHR) value is calculated from the size of chunks (chunkSize), the current sending rate (SR) and the round trip time (RTT) value.

2. Method according to Claim 1, the buffer (BU) having a storage capacity (totalBufferSize), wherein the low threshold (LTHR) value is assigned according to the following formula: LTHR = min (max (2*chunkSize; SR * RTT) + K; totalBufferSize - 4 * chunkSize), wherein K is an empiric value.

3. Method according to Claim 2, wherein K is a value determined from a network model.

4. Method according to one of Claims 1 or 3, wherein the high threshold (HTHR) value depends on the size of chunks (chunkSize), the current sending rate (SR), the buffer storage capacity (totalBufferSize) and the round trip time (RTT) value.

5. Method according to Claim 4, wherein the high threshold (HTHR) value is assigned according to the following formula: HTHR = totalBufferSize - max(2*chunkSize, SR * RTT).

6. Method according to one of Claims 1 to 5, wherein the period with which the test on the buffer-occupancy is conducted is equal to a fraction of the round trip time (RTT).

7. Method according to one of Claims 1 to 6, wherein the step of determining the desired sending rate (DSR) further comprises a step of determining a number i of consecutive buffer occupancy tests stating that the buffer occupancy is lower than the low threshold (LTHR) value and a step of determining a number j of consecutive buffer occupancy tests stating that the buffer occupancy is greater than the high threshold (HTHR) value.

8. Method according to Claim 7, an increase percentage value (DRI) being a fixed percentage value between 0% and 10%, wherein when the test on the buffer occupancy determines as a result that the buffer occupancy is lower than the low threshold (LTHR) value, the desired sending rate (DSR) value is determined according to the following formula: DSR = min( (SPR + i * DRI * SR), SRMax).

9. Method according to one of Claims 7 to 8, a decrease percentage value (DRR) being a fixed percentage value between 0% and 10%, wherein when the test on the buffer occupancy determines as a result that the buffer occupancy is greater than the high threshold (HTHR) value, the desired sending rate (DSR) value is determined according to the following formula: DSR = min( (SPR - j * DRR * SR), SRMax).

10. Method according to one of Claims 1 to 9, wherein the playback rate (SPR) value used for conducting the test and determining the desired sending rate (DSR) value is the value issued by a moving average method applied to the playback rate (SPR).

Description:
METHOD FOR CONTROLLING A FLOW IN A PACKET SWITCHING

NETWORK

Field Of The Invention

The invention relates to the field of flow control between two computing nodes over a packet switching network. More particularly, the invention relates to a method of flow control.

Background Of The Invention

Bandwidth of a public packet switching network is shared by all kinds of applications to achieve a statistical multiplexing gain. This sharing of resources introduces considerable uncertainty in workload and resource requirements. While delivering packets through such a network, unpredictable delay jitter may cause underflow or overflow of a limited client buffer even if the network is error-free during a given time period. Moreover, unlike conventional text/image network applications, multimedia applications require end-to-end quality of service (QoS) with jitter-free playback of audio and video. Thus, a good end-to-end flow control mechanism is needed to maintain high throughput and to keep the average delay per packet at a reasonable level for time-critical applications such as multimedia communications.

Flow control is, after congestion control, one of the major objectives of connection management in the area of reliable connection-oriented data transmission. Congestion control prevents the network as a whole from being flooded. In contrast, the main goal of flow control is to prevent a slow or busy receiving node from being flooded by a faster data sender. "Rate-based" and "window-based" mechanisms are the two best-known approaches to flow control. General TCP flow control, which is described in [Pos81] (Postel, Jon: Transmission Control Protocol - DARPA Internet Program Protocol Specification, RFC 793, University of Southern California - Information Sciences Institute (1981)), uses a window-based flow control mechanism. A receiver returns a "window" with every acknowledgment, indicating a range of acceptable sequence numbers beyond the last received segment. The window indicates an allowed number of bytes the sender may transmit before receiving further permission. The disadvantage of this method is a bursty way of sending, which is even amplified on high Bandwidth Delay Product links (also called "high BDP links").

On the other hand, rate-based flow control can provide end-to-end deterministic and statistical performance guarantees over packet switching networks. The rate adjustment is performed by intensive feedback control from the client to achieve guaranteed QoS. One rate-based approach is described in [Wan01] (Wang, Chia-Hui; Ho, Jan-Ming; Chang, Ray-I and Hsu, Shun-Chin: A Control-Theoretic Mechanism for Rate-based Flow Control of Real-time Multimedia Communication, Techn. Ber., Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, R.O.C. and Institute of Information Science, Academia Sinica, Taipei, Taiwan, R.O.C. (2001), Multimedia, Internet, Video Technologies 2001 (MIV 2001) of the WSES/IEEE International Multi-conference). This method uses a closed-loop control mechanism to adapt the sender's data rate. The control-theoretic "device to control" is the receive-buffer occupancy (BO). If the BO value exceeds a given threshold, the receiver gives feedback about its BO to the sender. The sender uses a Proportional-Derivative (PD) rate control to calculate a new sending rate from the received BO feedback. The method proposed by Wang relies only on the current sending rate and the changes of the receive-buffer occupancy. The effective receive rate of an application located on the receiver is not directly taken into account, which is why fluctuations of this rate remain largely unaccounted for. On high BDP links this can lead to unintended buffer overflows or underruns, respectively. A further drawback of the method proposed by Wang is that it also ignores the amount of data currently readable from the buffer, which can cause unnecessary or even wrong speed adjustments. One of the goals of the present invention is to propose a rate-based flow control mechanism providing a more effective and reliable data flow control, even on erroneous high BDP links, than state-of-the-art flow control mechanisms, and allowing a more efficient utilization of network resources than the known approaches.

Summary Of The Invention

The technical problem the present invention intends to solve is to govern the sending rate of a sender so as to avoid flooding of a receiver, while allowing the data sender to optimally adjust its sending data rate according to the data reception rate at the receiver's application level. Thus, the present invention concerns a method for controlling data flow between a sending node SN and a receiving node RN over a packet switching network, data being sent with a current data rate SR onto a protocol-specific buffer BU of the receiving node RN, an application AP reading data stored in the buffer BU at a playback rate SPR. According to the invention, it involves the following steps:

- Notifying by the sending node SN to the receiving node RN about its maximum sending rate SRMax;

- Determining by the receiving node RN a desired sending rate DSR value for the sending node SN from the playback rate SPR value;

- When the desired sending rate DSR value is significantly different from the current data rate SR value, notifying the desired sending rate DSR value by the receiving node RN to the sending node SN.
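As a purely illustrative aid, a minimal Python sketch of these three steps follows; the function names, the placeholder DSR rule and the 5% tolerance used to decide whether DSR is "significantly different" from SR are assumptions for illustration only, not part of the claimed method.

    # Hypothetical sketch of the three steps above (names, placeholder DSR rule
    # and the 5% significance tolerance are assumptions for illustration only).
    SR_MAX = 100.0        # Mbit/s, notified by the sending node SN at setup (step 1)
    SIGNIFICANCE = 0.05   # assumed relative change below which no update is sent

    def desired_sending_rate(spr: float) -> float:
        """Step 2: the receiving node derives DSR from the playback rate SPR."""
        return min(spr, SR_MAX)   # placeholder; the full rule is detailed below

    def should_notify(dsr: float, sr: float) -> bool:
        """Step 3: notify the sender only when DSR differs significantly from SR."""
        return abs(dsr - sr) > SIGNIFICANCE * sr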

Brief Description Of The Drawings

Embodiments of the invention are described in text form hereinafter and are illustrated with drawings, which show in:

Fig. 1a: data exchanges between a sending node SN and a receiving node RN;

Fig. 1b: an exemplary flowchart of the method for controlling data flow according to the invention, comprising a succession of four main steps;

Fig. 2: an illustration of the difference between a "raw" playback rate and a smoothed playback rate;

Fig. 3a: an illustration of a non-empty buffer underrun situation at a given time t1;

Fig. 3b: a graph showing the temporal evolution of the buffer occupancy and of the amount of data readable from the buffer;

Fig. 4: a detailed flowchart of the step 10 of determination of a desired sending rate DSR of a method according to the invention.

Detailed Description Of Preferred Embodiments

Figure 1a illustrates data exchanges between a sending node SN and a receiving node RN which comprises a protocol-specific buffer BU. The sending node SN sends data to the receiving node RN at a current sending rate SR, expressed for example in MBit per second. At reception by the receiving node RN, the data is stored in the buffer BU. At least one application AP runs on the receiving node RN and acquires data at a playback rate.

The application AP is for example a media player, which receives video data from the sending node SN and plays that data back to a video screen. In this example the data sent is streaming media; in other examples, the method can also be applied to file transfer, database replication and so on.

The buffer occupancy BO is the size of the buffer space occupied by the data received by the receiving node RN; the buffer BU has a storage capacity denoted totalBufferSize. An example of a flowchart for implementing the method according to the invention is shown in figure 1b. The flow control mechanism implemented by the method according to the invention uses a rate-based approach.

At a first step 1, for example during connection setup, the sending node SN notifies the receiving node RN about its maximum sending rate SRMax, which is the highest rate at which the sending node SN is able to send data to the receiving node RN. The receiving node RN notifies the sending node SN about a desired sending rate DSR, for instance during a later step 20.

A calculation for determining the desired sending rate DSR value is done at the receiving node RN during an intermediary step 10. Hereby, the desired sending rate DSR is calculated from a smoothed playback rate SPR, which is determined by observation of the playback rate, or "raw" playback rate.

The playback rate is subject to fluctuations caused by the non-constant bit rate of data retrieval by the application AP. In order to eliminate minor fluctuations, e.g. rounding errors during rate computation, the playback rate is smoothed by applying a temporal moving average; for example, the "Exponentially Weighted Moving Average" (EWMA) method is used to obtain a smoothed playback rate SPR. EWMA consists in applying weighting factors which decrease exponentially. The weighting of each older data point decreases exponentially, giving much more importance to recent observations while still not discarding older observations entirely.
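A minimal sketch of such a smoothing step follows, assuming an EWMA with a fixed smoothing factor alpha; the description does not prescribe a particular value for alpha.

    # EWMA smoothing of the measured ("raw") playback rate; the smoothing factor
    # alpha is an assumption, the method only requires some temporal moving average.
    def ewma(previous_spr: float, raw_rate: float, alpha: float = 0.1) -> float:
        """Recent samples get weight alpha; older observations decay exponentially
        but are never discarded entirely."""
        return alpha * raw_rate + (1.0 - alpha) * previous_spr

    # usage: spr = ewma(spr, measured_playback_rate)  # called once per measurement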

Advantageously, the playback rate SPR value used for conducting the test and determining the desired sending rate DSR value is the value issued by a moving average method applied to the playback rate, such as the EWMA method.

Figure 2 compares the temporal evolution of the "raw" playback rate, i.e. the playback rate as measured without any smoothing filter, with the EWMA-smoothed playback rate SPR over 2500 milliseconds. In figure 2, the X axis denotes time and the Y axis denotes the rate value in MBit/s. While the "raw" playback rate is subject to "high frequency" fluctuations, the smoothed playback rate SPR is almost constant. From 550 ms after the start of the data flow, the smoothed playback rate SPR remains constant and does not change until bigger fluctuations at the application occur, after 2300 ms.

It is possible to measure the elapsed duration between the time of issue of a packet by the sending node SN to the receiving node RN and the reception by the sending node SN of the packet issued by the receiving node RN immediately after receiving it. This measured elapsed duration is the "Round Trip Time" or RTT.
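The following sketch illustrates this RTT measurement at the sending node; send_packet and wait_for_reply are hypothetical placeholders for the protocol's packet exchange, not functions defined by the described method.

    import time

    def measure_rtt(send_packet, wait_for_reply) -> float:
        """Elapsed time between issuing a first packet and receiving the second
        packet that the receiving node issues immediately after reception."""
        t_sent = time.monotonic()
        send_packet()      # first packet: sending node SN -> receiving node RN
        wait_for_reply()   # second packet: issued by RN immediately after reception
        return time.monotonic() - t_sent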

The application AP polls the buffer BU by issuing a receiving call on the protocol engine. This receiving call specifies the amount of data the application AP desires to fetch from the buffer BU, so the data fetching is fully controlled by the application AP. Ideally, in streaming applications, the data fetching is performed at equidistant times in constant-sized portions; the size of these portions, also called "chunks", is denoted "chunkSize". However, in real life no (non-real-time) application can achieve this equidistant fetching approach (also called a "Constant Bit Rate data sink"). If the receiving call of the application AP is delayed, the amount of data stored in the buffer BU has increased, and consequently one needs either to increase the frequency of the receiving calls, or to increase the portion size fetched within one receiving call, or both. For example, the application AP streams an uncompressed video feed at 24 frames per second. Within one second, the application AP acquires 24 picture frames from the buffer BU. If each frame is received individually from the buffer BU, given that each frame has the same size, the chunk size is the size of one frame.
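For the uncompressed 24 fps example above, the chunk size and the resulting playback rate can be computed as sketched below; the frame geometry is an assumption for illustration, the description only states that all frames have the same size.

    # Worked example for the 24 fps uncompressed video case; the frame format
    # (1280 x 720, 3 bytes per pixel) is assumed for illustration only.
    WIDTH, HEIGHT, BYTES_PER_PIXEL = 1280, 720, 3
    FPS = 24

    chunk_size = WIDTH * HEIGHT * BYTES_PER_PIXEL        # one frame per receiving call
    playback_rate_mbit = chunk_size * FPS * 8 / 1e6      # ~530.8 Mbit/s read by AP

    print(chunk_size, round(playback_rate_mbit, 1))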

Playback rate fluctuations with high amplitude, for example as shown in figure 2 after 2300 ms, due to the application's receiving calls, can then be controlled by periodic sending rate updates from the receiving node RN to the sending node SN.

In order to alleviate the drawback of high transmission delays, the rate updates are sent periodically, whereby the update period is expressed as a fraction of the RTT of the connection. However, the period must be bounded below by some fixed minimum value. The period value can be based, for example, on empirical investigations. The fraction of the RTT used corresponds, for example, to 1/4 RTT, with a minimum period value of 10 ms; the 1/4 RTT fraction applies for RTT values greater than 40 ms. Advantageously, the period with which the test on the buffer occupancy is conducted is equal to a fraction of the round trip time RTT.
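A sketch of this update-period rule, under the assumption that the fraction is 1/4 RTT and the fixed minimum is 10 ms as in the example values above:

    def update_period(rtt_s: float, fraction: float = 0.25, minimum_s: float = 0.010) -> float:
        """Period of the buffer-occupancy test: a fraction of the RTT, bounded below."""
        return max(fraction * rtt_s, minimum_s)

    # e.g. update_period(0.080) -> 0.020 s, update_period(0.020) -> 0.010 s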

According to an embodiment of the invention, the step of determining the desired sending rate DSR value is triggered by the result of a test on the occupancy of the buffer BU, this test being conducted periodically.

Figure 4 shows an example of implementation for realizing the step 10 of determination of desired sending rate DSR.

The decision about a sending rate increase or decrease, above or below the current smoothed playback rate SPR value, is based on the occupancy BO of the buffer of the receiving node RN. Two steps 10.2 and 10.3 are tests related to the buffer occupancy.

According to an embodiment of the invention, the test on the buffer occupancy consists in determining whether the buffer occupancy is lower than a low threshold LTHR value or greater than a high threshold HTHR value. If the buffer occupancy BO exceeds the high threshold HTHR value, respectively falls below the low threshold LTHR value, the desired sending rate DSR value will be reduced below, respectively raised above, the current smoothed playback rate SPR (cf. steps 10.5 and 10.6 shown in figure 4).

Advantageously, the low threshold LTHR value and the high threshold HTHR value are dynamically assigned.

The values of the high threshold HTHR and of the low threshold LTHR are dynamically assigned during a step 10.1. The dynamic assignment of the high and/or low threshold values is one of the major differences with state-of-the-art implementations of rate-based flow control. This avoids the non-empty buffer underruns shown in figures 3a and 3b.

Advantageously, the low threshold LTHR value is calculated from the size of the chunks (chunkSize), the current sending rate SR and the round trip time value RTT.

Hereby, the high threshold HTHR can be calculated based on the maximum chunk size the application AP receives from the buffer BU (e.g. 2 * chunkSize) and the existing BDP, which is here the product of the current sending rate and the RTT. The BDP is a very important parameter in the area of transport protocols since it represents the amount of data which can be sent before a reaction to the data sending can take effect (SR * RTT). Advantageously, the high threshold HTHR value depends on the size of the chunks chunkSize, the current sending rate SR, the buffer storage capacity totalBufferSize and the round trip time value RTT.

Advantageously, the high threshold value HTHR is assigned according to the following formula: HTHR = totalBufferSize - max(2 * chunkSize, SR * RTT).

From a performance point of view, a buffer BU underrun is more critical than a buffer BU overflow. So it is not sufficient to take into account here only the chunk size at the application AP level and the BDP for assigning a low threshold LTHR value.

On links with a considerable error rate, or simply on congested links, an additional factor needs to be taken into account: a lost packet causes a gap in the data stream stored in the buffer BU, which prevents the application AP from reading data beyond this gap. This situation is illustrated in figure 3a. The buffer BU is represented as a rectangular container. At a time t1, the parts of the buffer BU filled with received data are coloured in black. Empty parts of the buffer are shown in white. The white part between two black parts corresponds to a missing packet. So there is a difference between the amount of data stored in the buffer (denoted "BO" in figure 3a), represented by the highest filled part of the container, and the amount of data BR1 that is readable from the buffer BU in one piece.

Hereby, the higher the packet loss rate, the bigger the difference between the data stored in and the data readable from the buffer (the stored/readable difference).

It is essential to take this stored/readable difference into account when determining the low threshold LTHR value. If the low threshold LTHR value is assigned only from the maximum of the chunk size and the BDP, or from arbitrary fixed values as used by Wang, a buffer underrun could happen, meaning the application AP cannot receive data from the buffer BU immediately, even if the buffer BU is not empty. This "non-empty buffer underrun" situation arises when the stored/readable difference becomes bigger than the low threshold LTHR value. This situation is illustrated in the graph of the temporal evolution of the buffer occupancy shown in figure 3b, for times greater than t2.

That is the reason why the low threshold LTHR value is set to the maximum of the chunk size and the BDP (e.g. SR * RTT) plus the maximum of the previously introduced stored/readable difference, denoted "K": LTHR = max (2 * chunkSize; SR * RTT) + K. Since, by addition of the K value, the low threshold LTHR value could exceed the high threshold HTHR value, the low threshold LTHR value must in this case also be limited to a value smaller than the high threshold HTHR value (e.g. totalBufferSize - 4 * chunkSize). Advantageously, the low threshold LTHR value is assigned according to the following formula: LTHR = min (max (2 * chunkSize; SR * RTT) + K; totalBufferSize - 4 * chunkSize), wherein K is an empiric value.

Advantageously, K is a value determined from a network model.
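The two threshold assignments can be summarized in code as follows; this is a direct transcription of the formulas above, assuming all sizes are expressed in bytes, the rate SR in bytes per second and the RTT in seconds, with K supplied by the caller as the empiric stored/readable allowance.

    def low_threshold(chunk_size, sr, rtt, k, total_buffer_size):
        # LTHR = min( max(2*chunkSize, SR*RTT) + K, totalBufferSize - 4*chunkSize )
        return min(max(2 * chunk_size, sr * rtt) + k,
                   total_buffer_size - 4 * chunk_size)

    def high_threshold(chunk_size, sr, rtt, total_buffer_size):
        # HTHR = totalBufferSize - max(2*chunkSize, SR*RTT)
        return total_buffer_size - max(2 * chunk_size, sr * rtt)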

The amount of rate increase/decrease needs to rely on the current sending rate (e.g. small fractions of the sending rate, like DRI = 2% for an increase and DRR = 1% for a decrease). Since a filled buffer is more favourable than an empty buffer, the decrease value (the reduction of the sending rate) should be smaller than the increase value (the increase of the sending rate), causing the buffer to empty more slowly than it fills. In any case, a periodic rate update must never exceed the maximum sending rate of the sending node.

After sending a sending rate adjustment to the sender, the receiving node RN must delay the following BO evaluation, as well as the periodic rate updates. This delay is represented by the box denoted 10.4. Indeed, a sending rate adjustment realized by the sending node will not have an effect on the receiver's BO before at least one RTT after the adjustment message was sent by the receiver. Therefore the BO evaluation shall be delayed by at least one RTT (preferably 2 * RTT).

If two consecutive BO evaluations lead to a similar conclusion on threshold exceeding, the rate will be increased/decreased more aggressively after the second of these evaluations.

For instance, in case the buffer occupancy BO falls below the low threshold LTHR value after two consecutive BO evaluations, the desired sending rate DSR is determined from the number of consecutive BO evaluations having led to the same conclusion. Advantageously, the step of determining the desired sending rate DSR further comprises a step of determining a number i of consecutive buffer occupancy tests stating that the buffer occupancy is lower than the low threshold LTHR value and a step of determining a number j of consecutive buffer occupancy tests stating that the buffer occupancy is greater than the high threshold HTHR value. Advantageously, when the test on the buffer occupancy determines as a result that the buffer occupancy is lower than the low threshold LTHR value, the desired sending rate DSR value is determined according to the following formula: DSR = min( (SPR + i * DRI * SR), SRMax).

Let i be the number of consecutive detections that the buffer occupancy is lower than the low threshold LTHR value, and DRI the mentioned increase value; then the desired sending rate DSR shall be calculated as follows:

DSR = min( (SPR + i * DRI * SR), SRMax)

Advantageously, when the test on the buffer occupancy determines as a result that the buffer occupancy is greater than the high threshold HTHR value, the desired sending rate DSR value is determined according to the following formula: DSR = min( (SPR - j * DRR * SR), SRMax).

Let j be the number of consecutive detections that the buffer occupancy is greater than the high threshold HTHR value, and DRR the mentioned decrease value; then the desired sending rate DSR shall be calculated as follows:

DSR = min( (SPR - j * DRR * SR), SRMax)
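Putting the pieces together, a sketch of the rate decision made at each periodic buffer-occupancy test is given below; DRI = 2% and DRR = 1% are the example values mentioned earlier, and the bookkeeping of the counters i and j is an assumption about one possible implementation.

    DRI, DRR = 0.02, 0.01   # example increase / decrease fractions of the current rate

    def desired_rate(bo, lthr, hthr, spr, sr, sr_max, i, j):
        """One periodic buffer-occupancy test; returns (DSR, i, j)."""
        if bo < lthr:                               # underrun risk: raise above SPR
            i, j = i + 1, 0
            return min(spr + i * DRI * sr, sr_max), i, j
        if bo > hthr:                               # overflow risk: reduce below SPR
            i, j = 0, j + 1
            return min(spr - j * DRR * sr, sr_max), i, j
        return sr, 0, 0                             # within thresholds: keep current rate

The resulting DSR would then be notified to the sending node only when it differs from the current rate SR by more than the threshold ε, as described below.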

If the currently calculated DSR has not changed significantly within the update period, meaning the update process would send the same value as the previously sent DSR update, or just an insignificant change, then the sending rate update can be skipped in order to save bandwidth. A step of comparing the values of DSR and SR is illustrated by step 1.8 in figure 4. ε is a threshold having a constant empiric value.

Advantageously, the increase percentage value DRI is greater than the decrease percentage value DRR.