Title:
A METHOD FOR BUFFERING DATA
Document Type and Number:
WIPO Patent Application WO/2007/004885
Kind Code:
A1
Abstract:
A method for controlling the content of a circular buffer memory, including: receiving a number of data packets, storing said packets in said buffer memory, checking if N+S>2k-2, N being new packets received in the buffer, S being packets scheduled for transmission, k being the maximum input rate in packets per polling, if so dropping the oldest unscheduled packets until N+S≤2k-2, scheduling k-2 packets for transmission, receiving more packets, and so on, each step being executed during subsequent pollings of the memory.

Inventors:
SVELMOE GEIR ROBERT (NO)
Application Number:
PCT/NO2005/000367
Publication Date:
January 11, 2007
Filing Date:
September 30, 2005
Assignee:
ERICSSON TELEFON AB L M (SE)
SVELMOE GEIR ROBERT (NO)
International Classes:
H04L12/56; H04L47/30; G06F
Foreign References:
EP1463281A1
Other References:
ELLOUMI, O. et al.: "Issues in multimedia scaling: the MPEG video streams case", Proceedings of the First Annual Conference on Emerging Technologies and Applications in Communications, Portland, OR, USA, 7-10 May 1996, IEEE Computer Society, pages 46-49, XP010164870, ISBN 0-8186-7585-3
KESSELMAN, A., LOTKER, Z., MANSOUR, Y.: "Buffer Overflow Management in QoS Switches", Internet article, 31 October 2000, pages 1-19, XP002366784 [retrieved on 2006-02-08]
PAPAYIANNIS, S. et al.: "Implications of proactive datagram caching on TCP performance in wireless/mobile communications", Computer Communications, Elsevier, vol. 26, no. 2, 1 February 2003, pages 79-89, XP004401225, ISSN 0140-3664
Attorney, Agent or Firm:
OSLO PATENTKONTOR AS (OSLO, NO)
Claims:

Claims

1. A method for controlling the content of a circular buffer memory, characterized in that said method includes the steps of:

• receiving a number of data packets, storing said packets in said buffer memory,

• checking if N+S>2k-2, N being new packets received in the buffer, S being packets scheduled for transmission, k being the maximum input rate in packets per polling,

• if so, dropping the oldest unscheduled packets until N+S≤2k-2,

• scheduling k-2 packets for transmission,

• receiving more packets, and so on,

each step being executed during subsequent pollings of the memory.

2. A method as claimed in claim 1, wherein the circular memory is realized as a continuous physical memory with a range of ordinary memory positions, where, when a first part of a packet has been written to the last ordinary memory position, the second part of the packet will be written to the first ordinary memory position, the memory including a number of extra memory positions following the last ordinary memory position, said method including the additional step of copying the second part of said packet to the extra memory position(s).

3. A method as claimed in claim 2, wherein the memory includes a buffer descriptor field containing buffer descriptors, said method including the additional steps of updating the descriptor for the first part of the packet with the total length of the buffer and a status from the last descriptor corresponding to the last part of the packet, and releasing all other buffer descriptors for rewriting except the updated descriptor.

Description:

A METHOD FOR BUFFERING DATA

Field of the invention

The present invention relates to data communication in general, where data is transferred between two parties, the data transmission rate of one party being different from the rate at which the receiving party may receive the data, the data stream being buffered in an interface bridging the parties.

Technical background

Most traffic nodes include several network interfaces for traffic following different communication protocols. A node may receive data packets at one interface for forwarding on another interface following another protocol. Often the transmission rate at the first interface is different from the transmission rate at the other interface. A problem may arise when the node is receiving traffic at a faster rate on the first interface than it may deliver at the other one. Commonly, nodes include buffers on all interfaces for storing packets for forwarding. Under heavy network load, with the node receiving a steady stream of data, the buffer on the receiving interface will eventually fill up. There is then a high probability that at least one fragment will be dropped, and this may make the packet worthless for the destination. Depending on the protocol carried in the packet, the whole packet may be retransmitted and a fragment once more be dropped.

From Chinese patent application CN-1549108 there is known a communication system using a single buffer for storing data in transfer. This so-called zero-copy message queue is realized as a circulating buffer area shared by both the input and output interface.

There are known solutions in which the receiving node is adapted to issue pause frames to the transmitting party, causing it to gracefully stop its transmitter for a period of time as specified in the pause frame.

Summary of the invention

The present invention relates to an improved method for buffering packets in a communication system, including the steps of receiving a number of data packets, storing said packets in said buffer memory, checking if N+S>2k-2, N being new packets received in the buffer, S being packets scheduled for transmission, k being the maximum input rate in packets per polling, if so dropping the oldest unscheduled packets until N+S≤2k-2, scheduling k-2 packets for transmission, receiving more packets, and so on, each step being executed during subsequent pollings of the memory, as claimed in the appended claim 1.

Brief overview of the drawings

The inventive method will now be explained in detail with reference to the appended drawings, in which:

Fig. 1 is a schematic overview over an exemplary traffic node, in which the invention may be implemented,

Fig. 2 shows the symbols used in the following figures,

Figs. 3#1 - 3#6 show the content of a buffer before and after the scheduling algorithm has been executed,

Fig. 4 illustrates how packets are wrapped in the receive buffer.

Detailed description

The invention will now be described by way of an example, implemented in an Ethernet device controlling the flow of data between an Ethernet interface and an HDLC interface in a traffic node in a cellular network system. This particular setup, illustrated in Fig. 1, allows the two interfaces to write/read data from the same memory and thus provides zero copy forwarding of data between the two interfaces.

Structural description

As illustrated in Fig. 1, the Ethernet device ETD includes three ports: an Ethernet port eth0 for communication with the network inside the node (siteLAN), an Ethernet traffic port ep0 connected to a physical Ethernet interface FCC1, and an HDLC port eoe0 (Ethernet over E1) connected to an external HDLC interface FCC3. The object of the device is to bridge traffic between the ports. The bold lines illustrate the bi-directional dataflow among the several logical modules in the Ethernet device ETD. ep0 handles all incoming data on FCC1 and passes the data on to a circular buffer, to be described later, which again passes the data on to eoe0 and/or eth0 depending on the type of data received. Incoming data on the HDLC interface FCC3 is forwarded only to ep0. The same goes for outgoing data on the eth0 interface, whose data is only forwarded to ep0.

Normally the traffic rates on the Ethernet lines are higher than on the HDLC line (100 Mbit/s versus 31 Mbit/s). This may result in the ep0 outgoing queue overflowing, which implies that the device must be able to drop outgoing packets to ensure correct operation.

To obtain better performance, the ETD will control the data flow on both the Ethernet interface and the HDLC interface. This will allow the two interfaces to write/read data to/from the same data memory and thus provide zero copy forwarding.

The receive handling on one interface is highly coupled to the transmit handling on the other interface to obtain efficient buffer handling. The receive and transmit handling on the same interface operate somewhat more independently.

Functions and performance

To be able to forward packets between two interfaces of different speed the ETD must take care of the following:

• Try to avoid filling the circular buffer completely, in order to prevent loss of data (for statistics).

• Do not send packets that are older than one second.

• Packets should be processed in the order they are received to avoid buffer fragmentation.

To avoid copying of data the idea is to use the receive data buffer of one interface as the transmit buffer of the other.

Although the real buffer used is a circular buffer with a finite size, the buffer handling algorithm is best explained using an infinite buffer as an example.

For the purpose of the example we assume the following:

• All packets are of the same size

• Receive rate is constant at five packets per poll (1 ms interval).

• Transmit rate is constant at 1½ packets per poll (1 ms interval).

• Must schedule 2 ms worth of data in addition to the packet currently in transmission.

• Packets scheduled may not be older than eight packet durations at the time of scheduling.
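With these assumptions, the parameters of the general algorithm work out as k = 5 (the maximum input rate in packets per polling), so the drop threshold 2k-2 evaluates to 8. Accordingly, in the polls that follow, the sum of S and N marked packets never exceeds eight once the algorithm has run.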

Figure 2 shows the symbols used to track buffer states.

In each of the following figures 3#1 - 3#6, the buffer state is shown before and after the scheduling algorithm has been executed.

Poll #1:

At this poll we have currently no data in transmission and we have received five new packets. Scheduling three packets ensures that the transmitter is occupied at least until the next poll.

Poll #2:

Another five new packets have been received, and the transmitter has completed the transmission of one packet and started on the next one. The space occupied by the transmitted packet is released.

Since we do not know how much of the currently transmitting packet remains in the buffer, we do not consider this packet in the scheduling calculations. The current buffer situation does not indicate that we have received an excessive amount of data, so we schedule the next two packets.

Poll #3:

Another five new packets have been received, and the transmitter has completed the transmission of two packets and started on the next one. The current buffer situation indicates that we have received an excessive amount of data, so we will have to drop three packets and then schedule the next two. In the figure we can count that the sum of S and N marked packets is eight. The locations corresponding to the dropped packets are not set to empty until all the preceding packets have been set to empty.

Poll #4:

Another five new packets have been received, and the transmitter has completed the transmission of one packet and started on the next one. The current buffer situation indicates that we once again have received an excessive amount of data, so we will have to drop four packets and then schedule the next one. In the figure we can count that the sum of S and N marked packets is eight.

Poll #5:

At this point we see that the pattern starts to repeat itself with the given input and output rates.

Another five new packets have been received, and the transmitter has completed the transmission of two packets and started on the next one. The current buffer situation indicates that we have received an excessive amount of data, so we will have to drop three packets and then schedule the next two. In the figure we can count that the sum of S and N marked packets is eight.

Poll #6:

Another five new packets have been received, and the transmitter has completed the transmission of one packet and started on the next one. The current buffer situation indicates that we have received an excessive amount of data, so we will have to drop four packets and then schedule the next one. In the figure we can count that the sum of S and N marked packets is eight.

The algorithm used for handling the buffer may be expressed in a general way as:

• Receive packets,

• Check if N+S>2k-2, k being the maximum input rate in packets per polling,

• If so, drop the oldest unscheduled packets until N+S≤2k-2,

• Schedule k-2 packets for transmission,

• Receive packets, and so on.
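By way of illustration only, this per-poll step may be sketched in C roughly as follows. The sketch assumes fixed-size packets tracked by simple counters; the names buffer_t, drop_oldest_unscheduled and schedule_for_tx are hypothetical and do not come from the actual driver.

```c
#define K 5                    /* k: maximum input rate, packets per poll */
#define DROP_LIMIT (2 * K - 2) /* drop trigger: N + S > 2k - 2            */

typedef struct {
    unsigned n;                /* N: new (unscheduled) packets in buffer  */
    unsigned s;                /* S: packets scheduled for transmission   */
} buffer_t;

/* Bookkeeping only in this sketch; a real driver would also update
 * buffer descriptors and free the dropped packet's memory.             */
static void drop_oldest_unscheduled(buffer_t *b) { b->n--; }

static void schedule_for_tx(buffer_t *b) { b->n--; b->s++; }

/* Executed once per polling of the memory. */
void poll_buffer(buffer_t *b, unsigned received, unsigned transmitted)
{
    b->s -= (transmitted < b->s) ? transmitted : b->s; /* completed tx   */
    b->n += received;                       /* receive packets           */

    /* Check if N + S > 2k - 2; if so, drop the oldest unscheduled
     * packets until N + S <= 2k - 2.                                    */
    while (b->n > 0 && b->n + b->s > DROP_LIMIT)
        drop_oldest_unscheduled(b);

    /* Schedule k - 2 packets for transmission. */
    for (unsigned i = 0; i < K - 2 && b->n > 0; i++)
        schedule_for_tx(b);
}
```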

We emphasize that the buffering method outlined above is an example only. It has much wider application and may in fact be used in any buffer where the output transmission rate may be lower than the input rate.

Handling of packet wrapping in receive buffer

Since the receive buffer is not a true circular buffer, a packet may appear with the first part located at the end of the physical buffer memory and the second part at the start of the physical buffer memory. This means that one packet for every round in the receive buffer must be handled as two fragments, and that all code that handles packets must check for, and handle, wrapped packets. The method is illustrated in Fig. 4.

To simplify most of the packet handling throughout the driver, the ETD will at the receiving side ensure that the packet is located in continuous physical memory. This is done by expanding the receive buffer with 1500 bytes after the memory pointed to by the very last buffer descriptor. When such a split packet is received, the receive functionality will copy the second part of the packet from the start of the receive buffer to the extra memory adjacent to the end of the receive buffer. This places the packet data in continuous physical memory.
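A minimal sketch of this copy step, assuming a hypothetical buffer size (RX_BUF_SIZE); the 1500 extra bytes match the text, but unwrap_packet and the offsets are illustrative names, not the driver's:

```c
#include <string.h>
#include <stdint.h>

#define RX_BUF_SIZE (64 * 1024) /* ordinary memory positions (assumed)   */
#define EXTRA_BYTES 1500        /* extra positions after the buffer end  */

static uint8_t rx_buf[RX_BUF_SIZE + EXTRA_BYTES];

/* A wrapped packet has its first fragment running from first_off to the
 * end of the ordinary buffer and its second fragment at offset 0. Copy
 * the second fragment into the extra memory following the buffer end so
 * the whole packet becomes continuous in physical memory.               */
static uint8_t *unwrap_packet(size_t first_off, size_t second_len)
{
    if (second_len > EXTRA_BYTES)
        return NULL;            /* cannot happen for <= 1500-byte frames */

    memcpy(rx_buf + RX_BUF_SIZE, rx_buf, second_len);
    return rx_buf + first_off;  /* packet is now continuous from here    */
}
```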

The buffer descriptor that corresponds to the start of the packet is updated with the total length and the status from the last buffer descriptor corresponding to the packet in question. Then all descriptors except the first one are "released." The buffer descriptor manipulation is also applied to all the other packets that are naturally in continuous physical memory.
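The descriptor merging might be sketched as follows; the buf_desc_t layout (length, status, ownership flag) is a hypothetical stand-in, since the hardware's actual descriptor format is not given here:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical buffer descriptor with the fields the text mentions. */
typedef struct {
    uint16_t length;   /* bytes covered by this descriptor          */
    uint16_t status;   /* completion status set by the controller   */
    uint8_t  in_use;   /* nonzero until released for rewriting      */
} buf_desc_t;

/* Merge the descriptors of one packet in a ring of n descriptors:
 * the first descriptor gets the total length and the status of the
 * last one; all other descriptors are released for rewriting.      */
static void merge_descriptors(buf_desc_t *d, size_t first, size_t last,
                              size_t n)
{
    if (first == last)
        return;                          /* single-descriptor packet  */

    uint16_t total = 0;
    for (size_t i = first; ; i = (i + 1) % n) {
        total += d[i].length;
        if (i == last)
            break;
    }
    d[first].length = total;             /* total length of the packet */
    d[first].status = d[last].status;    /* status from the last part  */

    for (size_t i = (first + 1) % n; ; i = (i + 1) % n) {
        d[i].in_use = 0;                 /* release all but the first  */
        if (i == last)
            break;
    }
}
```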

The extra work implied by the packet copying is at most 1500 bytes once every second. As such, this is considered a small sacrifice compared to the expected reduction in packet handling complexity.