

Title:
DYNAMIC MEMORY ALLOCATION
Document Type and Number:
WIPO Patent Application WO/1999/028816
Kind Code:
A1
Abstract:
Buffer memory is dynamically assigned to a number of users according to demand so as to increase the efficiency of use of the memory device. Each user is assigned a plurality of, not necessarily sequential, memory means which are linked by a pointer table and used by the user as if they were sequential, thereby achieving maximum usage of the buffer memory device. Particular application is in a computer network device where a single memory device provides buffering for the outgoing transmission on the ports.

Inventors:
MURPHY CIARAN
CLIFFORD NEIL
NOLAN JEROME
O'HARA CARMEL
Application Number:
PCT/GB1998/003574
Publication Date:
June 10, 1999
Filing Date:
November 30, 1998
Assignee:
3COM TECHNOLOGIES LTD
BUTCHER IAN JAMES (GB)
International Classes:
G06F5/06; H04L49/901; H04L49/9015; (IPC1-7): G06F5/06
Foreign References:
GB2267588A1993-12-08
EP0381645A21990-08-08
Other References:
"WORKSTATION LOAD LEVELING TECHNIQUE USING BUFFER ALLOCATION", IBM TECHNICAL DISCLOSURE BULLETIN, vol. 30, no. 7, December 1987 (1987-12-01), pages 357 - 359, XP000023340
YEH D Y ET AL: "DYNAMIC INITIAL ALLOCATION AND LOCAL REALLOCATION PROCEDURES FOR MULTIPLE STACKS", COMMUNICATIONS OF THE ASSOCIATION FOR COMPUTING MACHINERY, vol. 29, no. 2, February 1986 (1986-02-01), pages 134 - 141, XP000719739
"ALGORITHM FOR MANAGING MULTIPLE FIRST-IN, FIRST-OUT QUEUES FROM A SINGLE SHARED RANDOM-ACCESS MEMORY", IBM TECHNICAL DISCLOSURE BULLETIN, vol. 32, no. 3B, August 1989 (1989-08-01), pages 488 - 492, XP000029766
Attorney, Agent or Firm:
Butcher, Ian James (A.A. Thornton & Co. Northumberland House 303-306 High Holborn London WC1V 7LE, GB)
Claims:
CLAIMS:
1. A buffer memory device arranged to provide means for buffering data by a plurality of users, the amount of data to be buffered for each user varying with time, comprising: memory means divided into a plurality of memory areas; and control means arranged to assign a plurality of said memory areas to each of said users, each of said users being arranged to use the respective assigned memory areas sequentially to buffer its data and to request an additional memory area to be assigned to it when its current last assigned memory area is used to store data and to request unassignment of an assigned memory area which is no longer being used to store data; said control means being arranged to assign and unassign said memory areas to and from said users in response to said requests.
2. A buffer memory device according to claim 1 in which said control means is arranged not to assign additional memory areas to a user which has a current number of assigned memory areas larger than a current maximum number.
3. A buffer memory device according to claim 2 in which said control means comprises setting means arranged to assess the number of users requesting additional memory and to set said current maximum number to be approximately the total number of memory areas in said memory means divided by the total number of users when said number of users requesting additional memory is high, and to set a larger current maximum number when said number of users requesting additional memory is relatively low.
4. A buffer memory device according to claim 1, 2 or 3 in which said memory areas are sequentially numbered according to their location in said memory means, the control means further comprising means for recording in which order said memory areas should be used by said users.
5. A buffer memory device according to claim 1, 2, 3 or 4 in which memory locations in said memory means are identified by pointers and said control means assigns or unassigns said memory areas to or from said users by altering the values of pointers associated with each of said users.
6. A buffer memory device according to claim 5 in which each user has a first pointer identifying the last memory area assigned to it and a second pointer identifying the last memory location used to buffer its data, and each user generates a said request for an additional memory area when the memory location identified by said second pointer is within said memory area identified by said first pointer.
7. A buffer memory device according to claim 5 in which each user has a third pointer identifying its earliest current assigned memory area and a fourth pointer identifying the memory location of the next data to be read out of the buffer, and each user generates a said request for unassignment when said fourth pointer is not within said memory area identified by said third pointer.
8. A communications device for a computer network comprising: a plurality of ports via which network communications are received and transmitted; transmit means associated with each of said ports, each arranged to receive communications for transmission via the respective port and to transmit them via that port; and a buffer memory device according to any of claims 1 to 7 in which communications received by said transmit means may be buffered prior to transmission, each of said transmit means being a said user of said buffer memory device.
9. A method of using a memory means to buffer data for a plurality of users, the amount of data to be buffered for each said user varying with time, the method comprising: dividing said memory means into a plurality of memory areas; assigning a plurality of said memory areas to each of said users, each of said users using the respective assigned memory areas sequentially to buffer its data and being arranged to request an additional memory area to be assigned to it when its current last assigned memory area is used to store data and to request unassignment of an assigned memory area which is no longer being used to store data; and assigning and unassigning said memory areas to and from said users in response to said requests.
10. A method according to claim 9 further comprising setting a maximum number of memory areas which may be assigned to a user and not assigning additional memory areas to a user which has a current number of assigned memory areas larger than said maximum.
11. A method according to claim 10 in which said step of setting a maximum number comprises assessing the number of users requesting additional memory and setting said maximum number to be approximately the total number of memory areas divided by the total number of users when said number of users requesting additional memory is high and setting a larger number when said number of users requesting additional memory is relatively low.
Description:
Dynamic Memory Allocation

The present invention relates to the dynamic allocation of a memory resource between a number of users of that resource.

It is well known in many data transfer situations to buffer data at the input and output of devices to enable different devices to work together without necessarily being synchronised. Such data buffers are usually implemented in the form of first-in-first-out (FIFO) buffers and these buffers enable data to be output from one device and temporarily stored until another device is ready to read the data in. It will of course be understood that the time-averaged input capacity of the device receiving the data must be at least as great as the time-averaged output of the device generating the data or the buffer will become full. However, it is well known that such buffers allow variations in the rate of data production to be handled, which the receiving device may not be able to cope with itself.

Such FIFO buffers are well known in many applications, but will be described herein with particular reference to network communication devices in computer networks. It is well known that, in computer networks, there are provided communication devices which have a plurality of ports to which computing devices can be connected and by way of which they can communicate with each other. Communications are sent by the computing devices and a communication device receives each communication on one of its ports and retransmits the communication on one or more of its other ports according to various well known protocols. At the ports of these communication devices there are provided FIFO buffers of the general type outlined above to enable the communication device to transfer the communication very quickly to the port it is to be transmitted from, without necessarily waiting for the network to be ready to receive the transmission. Within a communication device there are therefore provided a considerable number of buffers, typically one transmit buffer and one receive buffer for each port.

Typically, these buffers are not each implemented in a separately provided memory device. Rather there is provided a single memory device, e.g. static RAM, which is divided up and allocated to the different ports to provide the different queues. If the memory available is simply divided into predetermined areas and allocated according to the number of queues it is desired to provide, this may give a workable system, but it is not the most efficient use of the memory device.

This is because the queues will not all be in use at the same time; it is therefore possible for one queue to become full, and so slow down the operation of the system, while other memory in the memory device lies unused because another queue is idle at the time.

The present invention provides a buffer memory device arranged to provide means for buffering data by a plurality of users, the amount of data to be buffered for each user varying with time, comprising: memory means divided into a plurality of memory areas; and control means arranged to assign a plurality of said memory areas to each of said users, each of said users being arranged to use the respective assigned memory areas sequentially to buffer its data and to request an additional memory area to be assigned to it when its current last assigned memory area is used to store data and to request unassignment of an assigned memory area which is no longer being used to store data; said control means being arranged to assign and unassign said memory areas to and from said users in response to said requests.

Preferably the memory areas are identified by pointers and the required assigning and un-assigning is achieved by appropriate entries in pointer tables.

The present invention therefore aims to overcome the difficulties of inefficient usage of memory by dynamically allocating the available memory to the variously required queues according to the relative levels of activity in the different queues. This is achieved by dividing the total available memory into a number of areas (which will be termed, in the specific description below, buffers) which can be linked together and assigned to the various queues as is determined necessary.

Initially, each queue will be allocated a minimum number of areas in memory to form its FIFO and, as particular queues require more memory, more areas will be allocated to them. In particular, it is not necessary that the different areas of memory which are allocated to form a particular queue are contiguous in the memory device. There is maintained a record of which areas in the memory device are linked together and this feature enables the areas of memory to be freely allocated to and returned from the various queues as is required.

As will be described in the following, various schemes can be used to allow or not allow the further allocation of areas of memory to different queues according to whether the overall demand is high or low and also according to possible priority arrangements between the different users of the memory device.

The areas of the memory device which are defined as being linked to form a particular queue are used by the queue as if they were a contiguous area of memory, that is, there is no space left unused within a queue as the areas are effectively transparently linked together.

This invention will be better understood from the following description of a preferred embodiment, given by way of example and with reference to the accompanying drawings, in which:

Figure 1 illustrates the allocation of buffers between three ports in a network communication device;

Figure 2 is a table describing the various pointers necessary to maintain the queues;

Figure 3 illustrates the operation of these pointers in relation to a transmit queue;

Figure 4 illustrates the buffer pointer table which links the buffers in the memory device together;

Figure 5 illustrates the generation of a request for an additional buffer by a queue;

Figure 6 illustrates the granting of a further buffer to a queue;

Figure 7 illustrates the re-using by a queue of a buffer which it has just freed;

Figure 8 illustrates the returning of a buffer which is no longer required by a transmit queue; and

Figure 9 is a schematic illustration of a network communication device in which the invention is implemented.

In this invention buffer memory is dynamically assigned to a number of users according to demand so as to increase the efficiency of use of the memory device. Each user is assigned a plurality of, not necessarily sequential, memory means which are linked by a pointer table and used by the user as if they were sequential, thereby achieving maximum usage of the buffer memory device.

Particular application is in a computer network device where a single memory device provides buffering for the outgoing transmission on the ports.

Specific Description

The memory allocation according to the preferred embodiment of the invention will be described in the context of the transmit queues of ports in a network communication device. Essentially, as is well known, each port is provided with a FIFO buffer to handle communications which are to be transmitted via the port. Typically, memory will be statically allocated to the receive queues of the ports, although it may be possible to implement the dynamic allocation of this invention also for the receive queues.

The memory available for the transmit queues is provided by static RAM (SRAM) which is divided into fixed-size buffers, the starting addresses of which are held in a free pool. Initially, each transmit queue is allocated a fixed number, say two, of buffers and will first grow to a minimum size as it receives data by requesting buffers from the Free Pool Controller (FPC). Once the queue reaches the minimum allocation of buffers, it begins to re-use the buffers it has been allocated if possible. Otherwise it requests additional buffers from the FPC as they are required. If a buffer is emptied and is not required for re-use by the queue then the buffer is returned to the FPC.

The FPC limits the number of buffers that any one queue can be allocated. When the demand for buffers from the FPC is low, the FPC allows individual queues to be allocated any number of buffers up to an upper limit known as BufMax. When demand for buffers is high, the FPC allows individual queues to be allocated any number of buffers up to a lower limit known as BufTarget. These two limits are based on the total number of buffers and the number of ports enabled.

Under light network loads, only a few queues if any will be busy while the remaining queues will be idle, i.e. the demand for buffers will be low. In this situation the FPC enforces the BufMax limit. When the number of buffers allocated to a queue reaches this limit no additional buffers will be allocated to the queue and it must re-use the buffers already allocated to it if more data needs buffering. If it is unable to re-use its buffering (i.e. if the tail end of the FIFO has not yet emptied the tail buffer) then the queue is full and cannot accept any more data.

Under heavy network loads, all or most queues will be busy, i.e. the demand for buffers will be high. In this situation the FPC enforces the BufTarget limit which is less than the BufMax limit. The BufTarget limit represents the maximum number of buffers that can be allocated to each individual queue such that all queues have an equal amount of buffering. When the number of buffers allocated to a queue reaches this target level no additional buffers will be allocated to the queue and it must re-use the buffers already allocated to it if more data needs buffering. If it is unable to re-use its buffering (i.e. if the tail end of the FIFO has not yet emptied the tail buffer) then the queue is full and cannot accept any more data.

The FPC determines which limit to enforce by comparing the number of buffer requests to the number of buffers available in the free pool. If there are more requests for buffers than there are buffers available then it uses the BufTarget level. If there are more buffers available than there are requests then it will use the BufMax limit.
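
By way of illustration only, the following C sketch models this choice of limit. The function and parameter names are invented for the sketch, and the BufTarget share follows the approximate division recited in claim 3 (total memory areas divided by total users).

#include <stdbool.h>

/* BufTarget: an approximately equal share of all buffers per enabled
   port, as described above; the exact rounding is an assumption. */
static unsigned buf_target(unsigned total_buffers, unsigned ports_enabled)
{
    return ports_enabled ? total_buffers / ports_enabled : total_buffers;
}

/* Demand is high when more buffer requests are posted than buffers
   remain in the free pool; enforce BufTarget then, BufMax otherwise. */
static unsigned current_limit(unsigned pending_requests,
                              unsigned free_pool_buffers,
                              unsigned buf_max,
                              unsigned total_buffers,
                              unsigned ports_enabled)
{
    bool demand_high = pending_requests > free_pool_buffers;
    return demand_high ? buf_target(total_buffers, ports_enabled) : buf_max;
}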

In a situation where the FPC switches from enforcing the BufMax limit to the BufTarget limit, there may be some queues that are currently allocated more buffers than BufTarget. The FPC will prevent these queues from re-using their buffers and force them to return emptied buffers to the free pool until the number of buffers allocated to the queue is less than or equal to the target limit.

Figure 1 illustrates this dynamic allocation of memory, which allocates memory to a queue based on demand. The memory available for the plurality of transmit queues is divided up into fixed-size buffers (e.g. 4k or 8k).

These buffers are placed in a free pool and can be used as required by transmit processes from different ports. Figure 1 shows how the dynamic buffering scheme organises the SRAM: these buffers are linked together so that they act as a FIFO per queue. Whenever a queue nears the FIFO full condition (i.e. when it has less than a buffer of free space left in its FIFO), or whenever a buffer is emptied out, it requests service from the free pool controller. Usually, if a queue is requesting a buffer and requesting to return a buffer at the same time, the free pool controller will re-allocate the buffer being returned, i.e. the queue will re-use the buffering currently allocated to it.

In this way, the FIFO for each Tx queue can grow or shrink according to the traffic going through the queue. This provides for a more efficient usage of the available system memory, especially in the case of bursty traffic conditions.

It should be noted in Figure 1 that the free pool FIFO operates opposite to normal queues in that it gives buffers via the head and takes them back via the tail.

Each transmit queue is defined and maintained by a number of hardware pointers, and these are listed and described in the table of Figure 2.
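
The table of Figure 2 is not reproduced in this text. As an informal aid only, the pointers named in this description can be gathered into a C structure as follows; the bit widths are inferred from the <16:10> and <6:0> slices used in the pseudo-code later on, and the field comments are inferences from the surrounding text rather than the definitions of Figure 2.

#include <stdbool.h>
#include <stdint.h>

/* Sketch of one transmit queue's state; widths and roles are inferred. */
struct tx_queue {
    uint32_t txHeadPtr;      /* 17-bit SRAM write address; bits 16:10 name a buffer */
    uint8_t  txHeadBufPtr;   /* 7-bit number of the last buffer allocated to the queue */
    uint32_t txTailPtr;      /* SRAM read address, wound back if a collision occurs */
    uint32_t txTailStPtr;    /* read address of the start of the current packet */
    uint8_t  txTailBufPtr;   /* buffer the tail pointer will move into next */
    uint8_t  txTailStBufPtr; /* earliest buffer still allocated to the queue */
    uint16_t txBufAlloc;     /* count of buffers currently allocated */
    bool     QueueEnabled;   /* queue enabled by network management */
};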

Figure 3 illustrates the positions of these pointers for a typical transmit queue and shows the important feature of this arrangement, whereby each FIFO queue is not formed in a contiguous area of memory. Assuming that the buffers are numbered in the order they are arranged in the SRAM, it will be seen that the queue illustrated in Figure 3 is currently allocated buffers 13 and 2, but the queue uses these buffers as if they were a contiguous area of memory.

To enable this operation it is necessary to manage the allocation and ordering of the buffers properly, and this is done using the buffer pointer table.

The buffer pointer table is illustrated in Figure 4 and this is used to efficiently maintain linked lists for each of the individual queues and the free pool.

Each list in the table describes how the buffers are chained together.

In the present embodiment, it consists of 128 entries; each entry represents one buffer and the contents of that entry indicate the next buffer in the list. In Figure 4, the start of the linked list for Queue 6 is given by the txTailStBufPtr which has a value of 4. Entry 4 in the Buffer Pointer Table contains the value 7, indicating that buffer 7 is the next buffer in the chain. Entry 7 in the Buffer Pointer Table contains the value 0, indicating that buffer 0 is the next buffer in the chain. Entry 0 points to entry 8, which points to entry 2, which is the last buffer in the chain because txHeadBufPtr for Queue 6 is equal to 2.

The linked list for each queue has to be updated every time a buffer is allocated to, re-used by or de-allocated from the queue, e.g. if buffer 15 were the next buffer allocated to Queue 6, then the value 15 would be written to Entry 2 and txHeadBufPtr would point to buffer 15.
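
This chaining can be sketched in C as follows, reproducing the Queue 6 example just given; the table contents are only those stated above and the function names are invented for the sketch.

#include <stdint.h>
#include <stdio.h>

enum { NUM_BUFFERS = 128 };
static uint8_t buf_ptr_table[NUM_BUFFERS]; /* entry i names the buffer after buffer i */

/* Walk a queue's chain from its tail buffer to its head buffer. */
static void print_chain(uint8_t tail_st_buf, uint8_t head_buf)
{
    uint8_t b = tail_st_buf;
    while (b != head_buf) {
        printf("%u -> ", b);
        b = buf_ptr_table[b];
    }
    printf("%u\n", b);
}

/* Chain a newly granted buffer after the current head buffer. */
static void append_buffer(uint8_t *head_buf, uint8_t new_buf)
{
    buf_ptr_table[*head_buf] = new_buf; /* old head now points at the new buffer */
    *head_buf = new_buf;                /* the new buffer becomes the head */
}

int main(void)
{
    /* Queue 6 of Figure 4: 4 -> 7 -> 0 -> 8 -> 2 */
    buf_ptr_table[4] = 7;
    buf_ptr_table[7] = 0;
    buf_ptr_table[0] = 8;
    buf_ptr_table[8] = 2;
    uint8_t head = 2;

    print_chain(4, head);     /* prints 4 -> 7 -> 0 -> 8 -> 2 */
    append_buffer(&head, 15); /* the buffer 15 example above */
    print_chain(4, head);     /* prints 4 -> 7 -> 0 -> 8 -> 2 -> 15 */
    return 0;
}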

The Free Pool Controller (FPC) scans queues that have posted service requests in a round-robin manner. The FPC services a queue depending on the result of a number of evaluations, such as: the number of buffers already allocated, the number of buffers available in the free pool, the demand for free buffers by other ports, etc. The following sections describe the processes involved.

Request Generation

There are three types of requests which a queue can make to the FPC: a buffer request, a buffer return request and an update tail buffer request. The FPC resolves all requests that are posted by a queue before moving on to the next port.

I. Buffer Request

This type of request is posted by a queue when it requires additional buffering. The following pseudo-code describes how a queue posts a request for a buffer to the FPC.

If ((txHeadPtr<16:10> = txHeadBufPtr<6:0>) and QueueEnabled) Then
    BufReq <= 1;
Else
    BufReq <= 0;

Hence, a request for a buffer will be posted by a queue whenever the txHeadPtr is located in the buffer pointed to by the txHeadBufPtr, as illustrated in Figure 5A. Once a buffer is allocated to this queue the txHeadBufPtr will have a new value, as illustrated in the queue of Figure 5B, and the queue stops requesting additional buffering until the condition becomes true again.

II. Buffer Return Request

This type of request is posted by a queue when it wants to return an emptied buffer. The following pseudo-code describes how a queue posts a request to return a buffer to the FPC.

If ((txTailStPtr<16:10> /= txTailStBufPtr<6:0>) and (txBufAlloc >= bufMin) and QueueEnabled) Then
    BufRetReq <= 1;
Else
    BufRetReq <= 0;

Hence, a request to return a buffer will be posted by a queue whenever the txTailStPtr is no longer located in the buffer pointed to by the txTailStBufPtr, i.e. all of the data in the txTailStBufPtr buffer must have been successfully processed (no collisions have occurred on the ethernet) and therefore the buffer is empty and can be returned to the FPC. This situation is also illustrated in Figure 5A. This request is negated if the current number of buffers allocated to the queue is less than the minimum allocation.

III. Update Tail Buffer Request

This type of request is posted when a queue requires the txTailBufPtr to be updated to point to the next buffer in the chain. It is necessary to update the txTailBufPtr so that the txTailPtr knows where to go when it comes to the end of the buffer it is currently located in, i.e. the txTailPtr can load the value contained in txTailBufPtr when it comes to the end of its current buffer.

The following pseudo-code describes how this request is posted to the FPC.

If ((txTailStPtr<16:10> = txTailBufPtr<6:0>) and (txTailBufPtr /= txHeadBufPtr) and QueueEnabled) Then
    updateTB <= 1;
Else
    updateTB <= 0;

Hence, a request is posted if the txTailStPtr is located in the same buffer pointed to by txTailBufPtr and the txTailBufPtr is not equal to the txHeadBufPtr. The reason that txTailStPtr rather than txTailPtr is used in the condition above is to ensure that the txTailPtr can be wound back to the start of the current packet when a collision occurs without having to wind back txTailBufPtr also.

The generation of these requests is illustrated in Figure 5 with reference to two buffers in different states and generating different requests.
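
For illustration, the three request conditions above can be collected into one software model; the BUF_OF macro extracts bits 16:10 of an address, mirroring the slices in the pseudo-code, and the structure repeats the tx_queue fields sketched after the discussion of Figure 2. This is a model of the conditions, not of the hardware request lines.

#include <stdbool.h>
#include <stdint.h>

#define BUF_OF(addr) (((addr) >> 10) & 0x7Fu) /* bits 16:10 of a pointer */

struct tx_queue {
    uint32_t txHeadPtr, txTailStPtr;
    uint8_t  txHeadBufPtr, txTailStBufPtr, txTailBufPtr;
    uint16_t txBufAlloc;
    bool     QueueEnabled;
};

/* I. Buffer request: the head pointer has reached the last buffer
   allocated to the queue. */
static bool buf_req(const struct tx_queue *q)
{
    return BUF_OF(q->txHeadPtr) == q->txHeadBufPtr && q->QueueEnabled;
}

/* II. Buffer return request: the tail-start pointer has left the
   earliest buffer and the queue holds at least its minimum allocation. */
static bool buf_ret_req(const struct tx_queue *q, uint16_t bufMin)
{
    return BUF_OF(q->txTailStPtr) != q->txTailStBufPtr
        && q->txBufAlloc >= bufMin
        && q->QueueEnabled;
}

/* III. Update tail buffer request: the tail-start pointer is in the
   buffer named by txTailBufPtr and the tail has not caught the head. */
static bool update_tb(const struct tx_queue *q)
{
    return BUF_OF(q->txTailStPtr) == q->txTailBufPtr
        && q->txTailBufPtr != q->txHeadBufPtr
        && q->QueueEnabled;
}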

When queues are enabled either at reset or by network management, some buffering must be initially allocated to the queue before the queue can independently post requests to the FPC for service. This is achieved by masking the txChanEn signal and forcing bufReq for the queue until 2 buffers have been allocated to it. At this point, the mask is removed and the queue can independently post requests to the FPC as it receives data and increase its buffer allocation to bufMin.

When queues are disabled (including blipping a port) by network management, any buffers currently allocated to the queue must be returned to the free pool. This is achieved by forcing bufRetReq whenever the queue is disabled and txBufAlloc is non-zero. This forces the queue to return all of its buffers.

If a queue is disabled, the queue is prevented from being re-enabled until it has handed back all of its buffers to the free pool. This avoids phantom packets appearing in re-enabled queues and/or lost buffers.

Scanning Queues

A scanner continuously scans the request lines from the queues looking for the next queue to service. Once it has found a queue to service, it indicates the queue number to the FPC and waits until the FPC finishes servicing all requests relating to that queue before moving to the next queue. A priority encoder is preferably used to jump directly to the next queue rather than taking a clock tick to check each queue for requests.

The scanner groups requests from a port together (i.e. high and low priority queue requests). Usually, there will only be one queue at a time making requests to the FPC but in certain situations both queues may have requests posted to the FPC. The FPC always remembers which queue it serviced last via the hiServicedLast bit per port and, in situations where both queues have requests posted, it services the queue which didn't get serviced the last time this port was visited. The other queue will get serviced next time around.
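
As a rough illustration only, this per-port fairness might be modelled as follows; the port count and every name other than hiServicedLast are invented for the sketch.

#include <stdbool.h>

enum { NUM_PORTS = 24 };                 /* assumed port count */
static bool hiServicedLast[NUM_PORTS];   /* which queue was serviced last */

/* Returns true if the high priority queue should be serviced now.
   When both queues have requests posted, pick the one that was not
   serviced on the previous visit to this port. */
static bool service_high_queue(int port, bool hiReq, bool loReq)
{
    bool pickHi = (hiReq && loReq) ? !hiServicedLast[port] : hiReq;
    hiServicedLast[port] = pickHi;
    return pickHi;
}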

Evaluating Buffer Demand

The demand for buffers is evaluated by using adders to sum the number of buffer requests that are posted at any one time and comparing this to the number of buffers available in the free pool for the queue being serviced.

Servicing Requests

I. Controller Decisions

The scanner finds queues for the controller to service. When servicing a queue, the controller processes each request posted by that queue before indicating to the scanner to find the next port to service.

The update tail buffer request (if one is posted) is the first to be processed by the controller. The controller then processes buffer requests and buffer return requests. This requires a decision-making process with four possible outcomes: a buffer is granted, a buffer is returned, a buffer is re-used or the requests are ignored (rejected). The decision-making process is dependent on buffer demand, the number of buffers already allocated to that queue, the queue priority and the availability of buffers in the free pool. The sections below detail the buffer transactions carried out by the controller.
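
As a sketch only, this decision might be modelled as below; the ordering of the tests is an assumption consistent with the behaviour described in this document, and all names apart from the four outcomes are invented.

#include <stdbool.h>

enum fpc_outcome { FPC_GRANT, FPC_RETURN, FPC_REUSE, FPC_REJECT };

static enum fpc_outcome fpc_decide(bool bufReq, bool bufRetReq,
                                   unsigned allocated, unsigned limit,
                                   unsigned freePoolBuffers)
{
    /* a queue at or below its current limit may recycle the buffer it
       is returning straight back to its own head */
    if (bufReq && bufRetReq && allocated <= limit)
        return FPC_REUSE;
    /* otherwise grant from the free pool while under the limit */
    if (bufReq && allocated < limit && freePoolBuffers > 0)
        return FPC_GRANT;
    /* returns are honoured, including forced shrinking over the limit */
    if (bufRetReq)
        return FPC_RETURN;
    return FPC_REJECT;
}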

II. Update Tail Buffer

The following steps are involved in updating the txTailBufPtr:

1. Address the Buffer Pointer Table using the contents of txTailBufPtr
2. Copy the contents of that location to txTailBufPtr

Thus the chain of buffers defined by the Buffer Pointer Table is used to move the txTailBufPtr to the next buffer in the queue.

III. Buffer Grant

The following steps are involved in granting a buffer from the free pool:

1. Address the Buffer Pointer Table using txHeadBufPtr
2. Copy the contents of the FPHead register to that location
3. Copy the contents of the FPHead register to txHeadBufPtr
4. Address the Buffer Pointer Table using the new txHeadBufPtr
5. Copy the contents of that location to the FPHead register

This is illustrated in Figure 6 and it will be seen that the effect is to re-allocate Buffer 0 from the Free Pool to Queue 17.

IV. Buffer Re-use

The following steps are required when a queue has posted a buffer request and a buffer return request and the FPC allows the queue to re-use the buffer it is returning:

1. Address the Buffer Pointer Table using txHeadBufPtr
2. Copy the contents of the txTailStBufPtr to that location
3. Copy the contents of the txTailStBufPtr register to txHeadBufPtr
4. Address the Buffer Pointer Table using the new txHeadBufPtr
5. Copy the contents of that location to the txTailStBufPtr

This is illustrated in Figure 7 and it will be seen that the effect is to re-allocate Buffer 9 from the tail to the head of Queue 17.

V. Buffer Return

The following steps are involved in returning a buffer to the free pool:

1. Address the Buffer Pointer Table using the contents of the FPTail register
2. Copy the contents of the txTailStBufPtr to that location
3. Copy the contents of the txTailStBufPtr to the FPTail register
4. Address the Buffer Pointer Table using the new FPTail register
5. Copy the contents of that location to the txTailStBufPtr

This is illustrated in Figure 8 and it will be seen that the effect is to re-allocate Buffer 9, which came free at the tail of Queue 17, back to the Free Pool.
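
Taken together, the transactions of sections II to V amount to splicing entries of the Buffer Pointer Table, which can be sketched in C as follows. This is a software restatement of the step sequences above, not the hardware itself; fp_head and fp_tail model the FPHead and FPTail registers.

#include <stdint.h>

enum { NUM_BUFFERS = 128 };
static uint8_t bpt[NUM_BUFFERS]; /* the Buffer Pointer Table */

/* II. Update tail buffer: follow the chain to the next buffer. */
static void update_tail_buffer(uint8_t *txTailBufPtr)
{
    *txTailBufPtr = bpt[*txTailBufPtr];
}

/* III. Buffer grant: take the free pool's head buffer and chain it
   after the queue's current head buffer. */
static void buffer_grant(uint8_t *txHeadBufPtr, uint8_t *fp_head)
{
    bpt[*txHeadBufPtr] = *fp_head;        /* steps 1-2 */
    *txHeadBufPtr = *fp_head;             /* step 3    */
    *fp_head = bpt[*txHeadBufPtr];        /* steps 4-5 */
}

/* IV. Buffer re-use: move the emptied tail buffer to the head of the
   same queue's chain. */
static void buffer_reuse(uint8_t *txHeadBufPtr, uint8_t *txTailStBufPtr)
{
    bpt[*txHeadBufPtr] = *txTailStBufPtr; /* steps 1-2 */
    *txHeadBufPtr = *txTailStBufPtr;      /* step 3    */
    *txTailStBufPtr = bpt[*txHeadBufPtr]; /* steps 4-5 */
}

/* V. Buffer return: the free pool takes the emptied buffer back via
   its tail, as noted in the discussion of Figure 1. */
static void buffer_return(uint8_t *txTailStBufPtr, uint8_t *fp_tail)
{
    bpt[*fp_tail] = *txTailStBufPtr;      /* steps 1-2 */
    *fp_tail = *txTailStBufPtr;           /* step 3    */
    *txTailStBufPtr = bpt[*fp_tail];      /* steps 4-5 */
}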

For completeness, Figure 9 is a schematic illustration of an implementation of the invention discussed above in a computer network communication device 90. The device 90 has a plurality of ports 92 through which network communications are transmitted and received. At the heart of device 90 is core device 91 which implements the required functionality of the device, e.g. repeater, bridge or switch. Each port 92 has an associated Media Access Controller (MAC) 93 which controls the passing of received communications to core device 91 and receives communications from core device 91 for re-transmission.

Associated with the MAC device 93 is a buffer memory means 94 which is divided into a number of memory areas as discussed above. Each MAC device 93 has a number of memory areas of memory means 94 assigned to it which it uses to buffer data for re-transmission. The assigning of the memory areas to the MAC devices is under the control of buffer memory control means 95 to which the MAC devices address their requests as discussed in detail above.

By this means, the buffer memory provided in communication devices, and in other contexts too, is efficiently used, which reduces the amount of memory it is necessary to provide to achieve a given buffering capacity. Alternatively, it may be seen that an increased buffering capacity is achieved with a particular amount of memory provision.

The above description of the operation of a preferred arrangement is given merely by way of example, but it illustrates how the present invention makes efficient use of a memory resource by dynamically assigning portions of the memory resource to various users according to demand.