Title:
LOAD BALANCING IN IP ADDRESS LOOKUP
Document Type and Number:
WIPO Patent Application WO/2002/098055
Kind Code:
A2
Abstract:
A load balancing mechanism maps a binary tree representation of a routing table into a set of fixed size memories. The mechanism efficiently utilizes the memory in the routing table without violating the tree precedence constraints and the memory access requirements of a pipelined system. The mechanism stores a subtree associated with a densely populated level of the binary tree in memory associated with lower levels.

Inventors:
AHMAD IMTIAZ (CA)
BROWN DAVID A (CA)
Application Number:
PCT/CA2002/000786
Publication Date:
December 05, 2002
Filing Date:
May 28, 2002
Assignee:
MOSAID TECHNOLOGIES INC (CA)
AHMAD IMTIAZ (CA)
BROWN DAVID A (CA)
International Classes:
G06F7/00; G06F17/00; G06F17/30; H04L12/00; H04L12/56; (IPC1-7): H04L12/00
Domestic Patent References:
WO1999013619A2 (1999-03-18)
WO1999014906A1 (1999-03-25)
Other References:
Gupta, P. et al., "Routing lookups in hardware at memory access speeds," INFOCOM '98: Seventeenth Annual Joint Conference of the IEEE Computer and Communications Societies, San Francisco, CA, USA, 29 March - 2 April 1998, IEEE, pages 1240-1247, XP010270370, ISBN 0-7803-4383-2.
Yu, D. et al., "Forwarding engine for fast routing lookups and updates," 1999 IEEE Global Telecommunications Conference (GLOBECOM '99), Rio de Janeiro, Brazil, 5-9 December 1999, IEEE, vol. 2, pages 1556-1564, XP001016965, ISBN 0-7803-5797-3.
Ruiz-Sanchez, M. A. et al., "Survey and taxonomy of IP address lookup algorithms," IEEE Network, vol. 15, no. 2, March 2001, pages 8-23, XP002901792, ISSN 0890-8044.
Attorney, Agent or Firm:
Ogilvy, Renault (Suite 1600, 1981 McGill College Avenue, Montreal, Quebec H3A 2Y3, CA)
Claims:
CLAIMS

What is claimed is:
1. A multilevel lookup table comprising: a plurality of memories, a binary tree representation of a routing table mapped into the memories, with each memory associated with one level of the binary tree; and logic which allows storage of a subtree associated with a densely populated level of the binary tree in a memory associated with a level lower than the densely populated level of the binary tree to increase the number of locations for storing routes for the densely populated level.
2. The multilevel lookup table as claimed in Claim 1 further comprising: a skip indicator stored with a subtree index to the subtree in memory associated with a higher level of the binary tree, the skip indicator indicating to the logic whether the subtree is stored in the memory associated with the densely populated level.
3. The multilevel lookup table as claimed in Claim 2 wherein the skip indicator is a single bit stored in a mapper entry.
4. The multilevel lookup table as claimed in Claim 2 wherein the skip indicator is a plurality of bits stored in a mapper entry.
5. The multilevel lookup table as claimed in Claim 1 wherein the subtree associated with a densely populated level of the binary tree is stored in the memory associated with the lower level in the binary tree, upon detecting that the number of routes stored in the memory associated with the densely populated level is greater than a predetermined threshold.
6. The multilevel lookup table as claimed in Claim 1 wherein the subtree associated with a densely populated level of the binary tree stored in a lower level memory associated with the lower level of the binary tree is moved to the memory associated with the densely populated level, upon inserting a subtree index to the subtree.
7. A method for increasing a number of routes stored in a multilevel lookup table comprising the steps of: mapping a binary tree representation of a routing table into a plurality of memories, each memory associated with one level of the binary tree; and storing a subtree associated with a densely populated level of the binary tree in a memory associated with the level of the binary tree lower than the densely populated level to increase the number of locations for storing routes for the densely populated level.
8. The method as claimed in Claim 7 further comprising the step of: storing a skip indicator with a subtree index to the subtree, the skip indicator indicating whether the subtree is stored in the memory associated with the densely populated level.
9. The method as claimed in Claim 8 further comprising the step of: skipping a search of the densely populated level dependent on the skip indicator.
10. The method as claimed in Claim 9 further comprising the step of: after skipping the search of the densely populated level, continuing the search in the memory associated with the level of the binary tree lower than the densely populated level.
11. The method as claimed in Claim 8 wherein the skip indicator is a single bit stored in a mapper entry.
12. The method as claimed in Claim 8 wherein the skip indicator is a plurality of bits stored in a mapper entry.
13. The method as claimed in Claim 7 further comprising the step of: moving the subtree associated with a densely populated level of the binary tree to a lower level memory associated with the lower level in the binary tree, upon detecting that the number of routes stored in the memory associated with the densely populated level is greater than a predetermined threshold.
14. The method as claimed in Claim 7 further comprising the step of: moving the subtree associated with a densely populated level of the binary tree stored in a lower level memory associated with the lower level in the binary tree to the memory associated with the densely populated level, upon inserting a subtree index to the subtree.
15. A multilevel lookup table comprising: a plurality of memories, a binary tree representation of a routing table mapped into the memories, with each memory associated with one level of the binary tree; and logic means for storing a subtree associated with a densely populated level of the binary tree in a lower level memory associated with the lower level of the binary tree to increase the number of locations for storing routes for the densely populated level.
16. The multilevel lookup table as claimed in Claim 15 further comprising: a skip indicator stored with a subtree index to the subtree, the skip indicator indicating whether the subtree is stored in the memory associated with the densely populated level.
17. The multilevel lookup table as claimed in Claim 16 wherein the skip indicator is a single bit stored in a mapper entry.
18. The multilevel lookup table as claimed in Claim 16 wherein the skip indicator is a plurality of bits stored in a mapper entry.
19. The multilevel lookup table as claimed in Claim 15 wherein the subtree associated with a densely populated level of the binary tree is moved to a lower level memory associated with the lower level in the binary tree, upon detecting that the number of routes stored in the memory associated with the densely populated level is greater than a predetermined threshold.
20. The multilevel lookup table as claimed in Claim 15 wherein the subtree associated with a densely populated level of the binary tree stored in a lower level memory associated with the lower level in the binary tree is moved to the memory associated with the densely populated level, upon inserting a subtree index to the subtree.
Description:
LOAD BALANCING IN IP ADDRESS LOOKUP

BACKGROUND OF THE INVENTION

The Internet is a set of networks connected by routers. A router maintains a routing table that indicates, for each possible destination network, the next hop to which a received data packet should be forwarded. The next hop may be another router or the final destination.

An Internet Protocol ("IP") data packet received at a port in a router includes an IP destination address. The IP destination address is the final destination of the IP data packet. Currently there are two versions of IP, IP version 4 ("IPv4") and IP version 6 ("IPv6"). IPv4 provides a 32-bit field in an IP header included in the data packet for storing the IP destination address. The router forwards a received data packet to a next-hop router or the final destination if the destination is the local network, dependent on the IP destination address stored in the IP header.

A 32-bit IPv4 destination address provides 4 billion possible routes. An Internet router typically stores 50,000 of the 4 billion possible routes. However, the number of stored routes will increase with the growth of the Internet and the widespread use of IPv6.

Originally, the IP address space was divided into three classes of IP addresses: A, B and C. Each IP address space was divided into a network address and a host address. Class A allowed for 126 networks and 16 million hosts per network. Class B allowed for 16,382 networks with 64,000 hosts per network and Class C allowed for 2 million networks with 256 hosts per network. However, dividing the IP address space into different classes reduced the number of available IP addresses.

Class C only allowed a maximum of 256 hosts per network which is too small for most organizations. Therefore, most organizations were assigned a Class B address, taking up 64,000 host addresses which could not be used by other organizations even if they were not used by the organization to which they were assigned. Hosts in an organization with a Class B IP address all store the same network address in the 16 Most Significant Bits ("MSBs"), for example, 128.32.xx.xx.

Classless InterDomain Routing ("CIDR") was introduced to free up unused IP host addresses. The remaining unused networks are allocated to organizations in variable sized blocks. An organization requiring 500 addresses gets 500 contiguous addresses. For example, an organization can be assigned 500 available addresses starting at 128.32.xx. The number of routes stored by a router has increased since the introduction of Classless InterDomain Routing. Classless InterDomain Routing requires longest prefix matching instead of searching for a matching network address in order to find the corresponding next hop for the IP destination address. For example, a search can no longer stop after the 16 MSBs of a Class B IP address, for example, 128.32.xx.xx, because 128.32.4.xx may be assigned to another organization requiring a different next hop.

One method for searching for a longest prefix match for a key is through the use of a binary tree search. A binary tree search matches a 32-bit input bit by bit down to 32 levels, requiring 32 searches to find the entry matching the 32-bit key.
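
The bit-by-bit search above can be illustrated with a small sketch. The following C program, which is not taken from the patent, builds a plain binary trie and walks it one bit per level, remembering the last route seen on the way down; it shows why a 32-bit lookup can require up to 32 probes. The node layout and helper names are illustrative.

```c
/* Minimal sketch (not from the patent): a plain binary trie searched one bit
 * at a time, showing why a 32-bit IPv4 lookup can take up to 32 probes. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct node {
    struct node *child[2];   /* child[0]: next bit is 0, child[1]: next bit is 1 */
    int          next_hop;   /* -1 if no route terminates at this node */
} node_t;

static node_t *new_node(void) {
    node_t *n = calloc(1, sizeof *n);
    n->next_hop = -1;
    return n;
}

/* Insert a route: walk prefix_len bits of prefix, creating nodes as needed. */
static void add_route(node_t *root, uint32_t prefix, int prefix_len, int next_hop) {
    node_t *n = root;
    for (int i = 0; i < prefix_len; i++) {
        int bit = (prefix >> (31 - i)) & 1;
        if (!n->child[bit])
            n->child[bit] = new_node();
        n = n->child[bit];
    }
    n->next_hop = next_hop;
}

/* Longest prefix match: remember the last next hop seen on the way down. */
static int lookup(const node_t *root, uint32_t addr) {
    int best = -1;
    const node_t *n = root;
    for (int i = 0; i < 32 && n; i++) {
        if (n->next_hop >= 0)
            best = n->next_hop;
        n = n->child[(addr >> (31 - i)) & 1];
    }
    if (n && n->next_hop >= 0)
        best = n->next_hop;   /* full 32-bit (host) routes */
    return best;
}

int main(void) {
    node_t *root = new_node();
    add_route(root, 0x80200000u, 16, 1);        /* 128.32.0.0/16 -> next hop 1 */
    add_route(root, 0x80200400u, 24, 2);        /* 128.32.4.0/24 -> next hop 2 */
    printf("%d\n", lookup(root, 0x80200401u));  /* 128.32.4.1   matches /24: 2 */
    printf("%d\n", lookup(root, 0x80207001u));  /* 128.32.112.1 matches /16: 1 */
    return 0;
}
```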

Another method for searching for a match is through the use of a Patricia tree. A Patricia tree reduces the number of searches required if there are no entries down a leaf of the binary tree.

Yet another method for efficiently searching for a next hop associated with an IP destination address is described in PCT Application Serial Number PCT/SE98/00854, entitled "Method and System for Fast Routing Lookups" by Brodnick et al., filed on May 11, 1998. The method described by Brodnick reduces the number of next hops stored by not storing duplicate routes. By reducing the number of next hops, the memory requirement is reduced so that a route lookup table can be stored in fast cache memory.

Brodnick et al. divides the 32-bit binary tree into 3 levels. Dividing the 32-bit binary tree into 3 levels reduces the number of searches to three. The indexed entry in the first level indicates whether the search can end at the first level with the route taken from the entry, or the search must continue to a subsequent level using a further portion of the IP destination address.

Fig. 1A illustrates a prior art 64K (65536) bit map representing the first level of a binary tree. A 64K bit map 30 represents the leaves or nodes 44 of the binary tree at depth 16, with one bit per node 44. The bit map is divided into bit masks of length 16. There are 2¹² = 4096 bit masks in the 64K bit map. One bit mask is shown in Fig. 1A. A bit in the bit map 30 is set to '1' if there is a subtree or a route index stored in an array of pointers corresponding to the node 44. A bit in the bit map 30 is set to '0' if the node shares a route entry with a previous node 44.
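
The bit-map arithmetic above can be checked with a few shifts. The snippet below is a sketch rather than the prior-art implementation; the variable names are invented. It locates the bit mask and bit position for a depth-16 node given an IPv4 address.

```c
/* Illustrative arithmetic only: locating a depth-16 node's bit in the 64K bit
 * map above.  The 65,536 bits form 4,096 bit masks of 16 bits each (2^12 = 4096). */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t ip      = 0x80200401u;   /* 128.32.4.1 */
    uint32_t node    = ip >> 16;      /* node number at depth 16: 0..65535 */
    uint32_t mask_no = node >> 4;     /* which 16-bit bit mask: 0..4095    */
    uint32_t bit_no  = node & 0xFu;   /* which bit within that bit mask    */
    printf("node %u -> bit mask %u, bit %u\n", node, mask_no, bit_no);
    return 0;
}
```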

Fig. 1B illustrates a prior art lookup table implemented in cache memory. The lookup table includes an array of code words 36, an array of base indices 34 and a map table 40. A 32-bit IP address 38 is also shown in Fig. 1B. A code word 46 is stored in the array of code words 36 for each bit mask in the bit map 30 (Fig. 1A).

The code word 46 includes a six-bit offset 46a and a 10-bit value 46b. A base index 42 is stored in the array of base indices 34 for every four code words 46 in the array of code words 36.

The array of code words 36, array of base indices 34 and map table 40 are used to select a pointer in an array of pointers (not shown). The pointer stores a route index or an index to perform a further search.

A group of pointers in the array of pointers is selected by selecting a code word 46 in the array of code words 36 and a base index 42 in the array of base indices 34. The code word 46 is selected using the first 12 bits 50 of the IP address 38. The base index 42 is selected using the first 10 bits 48 of the IP address 38. The correct pointer in the group of pointers is selected using the map table 40.

The 10-bit value 46b in the selected code word 46 is an index into the map table 40. The map table 40 maps bit numbers within a bit mask to 4-bit offsets. The offset specifies the pointer within the selected group of pointers in the array of pointers. The 10-bit value 46b selects the row in the map table 40 and bits 19:16 of the IP address 52 select the 4-bit offset 54.

Thus, a search for a pointer requires the following cache memory accesses: (1) read a 16-bit code word 46; (2) read a 16-bit base address 42; (3) read a 4-bit offset 54 from the map table 40; and (4) read a pointer at a pointer index, where the pointer index is the sum of the base address 42, the code word offset 46a and the 4-bit offset 54.
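
As a rough sketch of the four accesses listed above, the following C function indexes placeholder arrays standing in for the cache-resident structures. The array names, sizes and the code word layout are assumptions; only the index arithmetic (base address plus code word offset plus 4-bit map table offset) follows the text.

```c
/* Schematic of the four accesses above.  The arrays are zero-filled
 * placeholders; their names, sizes and the code word layout are assumptions. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint16_t value10; uint8_t offset6; } code_word_t;

static code_word_t code_words[4096];      /* one code word per 16-bit bit mask    */
static uint16_t    base_indices[1024];    /* one base index per four code words   */
static uint8_t     map_table[1024][16];   /* 4-bit offsets; row chosen by value10 */
static uint32_t    pointers[65536];       /* route indexes or further indexes     */

static uint32_t select_pointer(uint32_t ip)
{
    code_word_t cw   = code_words[ip >> 20];        /* (1) first 12 bits of ip      */
    uint16_t    base = base_indices[ip >> 22];      /* (2) first 10 bits of ip      */
    uint8_t     off4 = map_table[cw.value10]        /* (3) row: the 10-bit value,   */
                                [(ip >> 16) & 0xF]; /*     column: bits 19:16 of ip */
    return pointers[base + cw.offset6 + off4];      /* (4) base + offsets           */
}

int main(void) {
    printf("%u\n", select_pointer(0x80200401u));    /* zero tables -> pointer 0 */
    return 0;
}
```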

The same memory accesses are required for each level of the binary tree. Thus, a search of three levels requires 12 memory accesses.

SUMMARY OF THE INVENTION

U.S. Patent Application Serial No. 09/733,627, filed on December 8, 2000, describes a method and apparatus for storing a route for an Internet Protocol ("IP") address in a multi-level lookup table. A multi-level search is performed, based on a single search request, to find a route index stored in a mapper in the lookup table which indexes a range of IP addresses associated with a range of leaves of a subtree.

As new routes are learned, the lookup table is updated to add new route indexes.

The multi-level lookup table includes a plurality of fixed size memories, with each memory associated with one level of the tree. The binary tree is mapped into the fixed size memories such that each node (route index or subtree index) is mapped into only one memory. The multi-level search performs a longest prefix search for a search key in the plurality of memories starting at the first memory and successively searching a next memory based on the result of the search of the previous level memory and a next portion of the search key. By providing one memory per level, multiple searches for different search keys can be performed in parallel. To avoid a memory access conflict, predecessor and successor nodes are not mapped into the same memory and nodes for each level are stored in the respective level memory.

Some of the levels of the binary tree may be sparsely populated and other levels may be densely populated resulting in uneven distribution of routes stored in the table. For example, in existing Internet Routing Tables, the majority of routes for a 32-bit IP address have a longest match prefix address of 24 bits resulting in the majority of routes being stored in one of the memories. If the fixed size memory associated with a level is full, no further routes associated with the level can be stored in the lookup table.

The number of routes stored in a multi-level lookup table having a plurality of memories is increased by storing the route in a memory associated with a lower level of the binary tree.

A multi-level lookup table includes a plurality of memories storing a binary tree representation of a routing table. Each memory is associated with one level of the binary tree. The multi-level lookup table also includes logic which allows a subtree associated with a densely populated level of the binary tree to be stored in a lower level memory associated with a lower level of the binary tree to increase the number of locations for storing routes for the densely populated level.

A skip indicator is stored with a subtree index to the subtree in memory associated with a higher level of the binary tree. The skip indicator provides an indication to the logic whether the subtree is stored in the memory associated with the densely populated level. The skip indicator can be a single bit or a plurality of bits stored in a mapper entry.

The subtree associated with a densely populated level of the binary tree may be stored in the memory associated with the lower level in the binary tree, upon detecting that the number of routes stored in the memory associated with the densely populated level is greater than a predetermined threshold.

The subtree associated with a densely populated level of the binary tree stored in a lower level memory associated with the lower level of the binary tree may be moved to the memory associated with the densely populated level, upon inserting a subtree index to the subtree.

BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.

Fig. 1A illustrates a prior art 64K (65536) bit map representing the first level of a binary tree;
Fig. 1B illustrates a prior art lookup table implemented in cache memory;
Fig. 2 is a binary tree representation of a lookup table including routes and subtree indexes for a 48-bit search key;
Fig. 3 is a block diagram of a multi-level lookup table including a plurality of fixed size memories storing routes and subtrees shown in the binary tree representation in Fig. 2 according to the principles of the present invention;
Fig. 4 is a block diagram of the level 3 memory in the lookup table shown in Fig. 3;
Fig. 5 is a block diagram illustrating a level 4 subtree stored in level 5 memory; and
Fig. 6 illustrates parallel multi-level searches of the lookup table shown in Fig. 3.

DETAILED DESCRIPTION OF THE INVENTION

A description of preferred embodiments of the invention follows.

Fig. 2 is a binary tree representation of a multi-level lookup table having a 48-bit search key. The binary tree has five levels, with level 1 including the most significant 16 bits of the 48-bit search key and levels 2-5 each including a respective next 8 bits of the 48-bit search key. For illustrative purposes, only the first 5 bits of the 16 bits of level 1 and the first 3 bits of the 8 bits of levels 2-5 are shown.

As shown, level 1 of the binary tree has one route index (r1) and two subtree pointers (s0, s1) to respective subtrees (A, B). A search of level 1 based on the first 16 bits of the 48-bit search key results in route index r1 or subtree pointers s0, s1.

Level 2 has two route indexes (r2, r3) and four subtree pointers (s2, s3, s4, s5) to respective subtrees (A1, A2, B1, B2) in level 3. Level 3 has three route indexes (r4, r5, r6) and two subtree pointers (s6, s7) to respective subtrees (A21, B21) in level 4.

Level 4 has one route index r7 and one subtree pointer (s8) to subtree A31 in level 5.

Level 5 has one host node h0.

With 16 bits, level 1 of the binary tree can have 2¹⁶ (64K) possible route indexes or subtree pointers. The number of possible routes or subtree pointers increases in each subsequent lower level of the binary tree. For example, there are 2¹⁶ × 2⁸ = 2²⁴ (16M) possible route indexes or subtree pointers in level 2 and 2²⁴ × 2⁸ = 2³² (4G) possible route indexes or subtree pointers in level 3. An extremely large memory is required to provide storage for all possible route indexes or subtree indexes. However, only a small portion of the possible routes are stored.
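
A short program can make the key striding and the node counts above concrete. The sketch below is not part of the patent; the example key value and the concatenation order of the VPN and IPv4 address are assumptions. It slices a 48-bit key into the 16-bit level 1 portion and the 8-bit portions for levels 2-5 and prints the number of possible nodes at each level.

```c
/* Sketch of the 16/8/8/8/8 striding of the 48-bit key and of the node counts
 * quoted above.  Example key value and VPN/IPv4 ordering are assumptions. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t key = 0x123480200401ULL;     /* assumed: 16-bit VPN 0x1234 then IPv4 128.32.4.1 */
    unsigned stride[5] = { 16, 8, 8, 8, 8 };
    int shift = 48;
    uint64_t nodes = 1;
    for (int level = 1; level <= 5; level++) {
        unsigned s = stride[level - 1];
        shift -= (int)s;
        uint64_t bits = (key >> shift) & ((1u << s) - 1u);
        nodes <<= s;                      /* possible nodes at this level: 2^16, 2^24, ... */
        printf("level %d: key bits 0x%0*llx, %llu possible nodes\n",
               level, (int)(s / 4),
               (unsigned long long)bits, (unsigned long long)nodes);
    }
    return 0;
}
```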

Fig. 3 is a block diagram of a multi-level forwarding table including a plurality of level memories 512, 516¹-516⁴ storing the binary tree representation shown in Fig. 2 according to the principles of the present invention. Nodes in the binary tree are mapped to the level memories such that nodes in the same level of the binary tree are stored in the same level memory. This mapping scheme allows a plurality of multi-level searches to be performed in parallel in the lookup table.

Only the data path in the lookup table is shown. The lookup table also includes a control unit. The control unit issues memory control signals, for example, a read control signal. A multi-level search is performed in the lookup table to find a route index corresponding to a search key. The search begins in level 1 memory 512. The result of the search in each level memory indicates whether a further search is required in the next level memory. Each level memory 512, 516¹-516⁴ stores all route indexes and subtree pointers for the respective level of the binary tree.
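
The level-by-level control flow described above can be summarized behaviorally as a loop. The C sketch below is illustrative only; the mapper-entry encoding, the level_search() stub and the field names are invented for the example, and a real device performs the per-level searches in pipelined hardware rather than in a software loop.

```c
/* Behavioral sketch of the multi-level search above: each level either ends
 * the search with a route index or names a subtree for the next level to
 * search with the next key portion.  The encoding and the stub are invented. */
#include <stdint.h>
#include <stdio.h>

typedef enum { NO_ENTRY, ROUTE_INDEX, SUBTREE_POINTER } entry_type_t;

typedef struct {
    entry_type_t type;
    uint32_t     index;   /* route index, or pointer to a subtree in the next level */
} mapper_entry_t;

/* Stub standing in for one level memory read; always returns "no entry" here. */
static mapper_entry_t level_search(int level, uint32_t subtree, uint32_t key_bits)
{
    (void)level; (void)subtree; (void)key_bits;
    mapper_entry_t e = { NO_ENTRY, 0 };
    return e;
}

/* Walk levels 1..5; return the last route index seen, or 0 if none was found. */
static uint32_t multilevel_lookup(uint64_t key)
{
    uint32_t best = 0, subtree = 0;
    int shift = 48;
    for (int level = 1; level <= 5; level++) {
        int stride = (level == 1) ? 16 : 8;            /* 16 bits, then 8 bits per level */
        shift -= stride;
        uint32_t key_bits = (uint32_t)(key >> shift) & ((1u << stride) - 1u);
        mapper_entry_t e = level_search(level, subtree, key_bits);
        if (e.type == ROUTE_INDEX)
            best = e.index;                            /* longest match so far     */
        if (e.type != SUBTREE_POINTER)
            break;                                     /* nothing deeper to search */
        subtree = e.index;
    }
    return best;
}

int main(void) {
    printf("final route index: %u\n", multilevel_lookup(0x123480200401ULL));
    return 0;
}
```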

In one embodiment, the level 1 memory 512 includes a location for each of the 64K nodes in the first level of the binary tree. All of the other level memories 516¹-516⁴ provide storage for 16K subtree descriptors and 512K route indexes or subtree pointers. Thus, only 512K of the 4G possible route indexes or subtree pointers in level 3 of the binary tree can be stored in level 3 memory 516².

Level 1 memory 512 stores all route indexes (r1) and subtree pointers (s0, s1) corresponding to level 1 of the binary tree. Level 2 memory 516¹ stores all route indexes for subtrees A and B corresponding to level 2 of the binary tree. Level 3 memory 516² stores all route indexes and subtree pointers for subtrees A1, A2, B1 and B2 corresponding to level 3 of the binary tree.

As shown in Fig. 2, level 1 is the highest level of the binary tree and level 5 is the lowest level. If level 3 memory 516² is full, subtrees A1 and B1 cannot be stored in a higher level memory, for example, level 1 memory 512 or level 2 memory 516¹, because it would violate tree precedence constraints. Subtrees A1 and B1 do not require further search in a lower level because they do not include any subtree pointers. Thus, tree precedence constraints are not violated if subtrees A1 and B1 are stored in a lower level memory. Thus, subtrees A1 and B1 can be stored in the level 4 memory 516³. If level 4 memory 516³ is full, subtree B21 can be stored in level 5 memory 516⁴ because subtree B21 does not include any subtree pointers to subtrees in level 5 of the binary tree. By storing a subtree associated with a level in the binary tree in a memory associated with a lower level in the binary tree, the number of available locations for storing routes in a lookup table is increased.
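
The placement rule described above (push a subtree down one level memory only when it contains no subtree pointers and its own level memory is out of room) can be sketched as a small decision function. Everything below, including the structure fields, the fill threshold and the capacities in main, is hypothetical; it only illustrates the conditions named in the text.

```c
/* Hypothetical sketch of the push-down rule above: a subtree may be stored in
 * the next level's memory only if it holds no subtree pointers and its own
 * level's memory is full or above a fill threshold. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int      level;              /* level of the binary tree the subtree belongs to */
    unsigned route_count;        /* route indexes stored in the subtree             */
    unsigned subtree_ptr_count;  /* pointers to subtrees at the next level          */
} subtree_t;

typedef struct {
    unsigned used;               /* mapper entries currently in use */
    unsigned capacity;           /* fixed size of this level memory */
} level_memory_t;

/* Return the level whose memory should store the subtree. */
static int choose_storage_level(const subtree_t *st, const level_memory_t mem[],
                                int lowest_level, unsigned threshold)
{
    bool leaf_only  = (st->subtree_ptr_count == 0);  /* precedence-safe to move down */
    bool home_full  = (mem[st->level].used + st->route_count > mem[st->level].capacity)
                   || (mem[st->level].used > threshold);
    bool lower_fits = (st->level < lowest_level)
                   && (mem[st->level + 1].used + st->route_count <= mem[st->level + 1].capacity);

    if (leaf_only && home_full && lower_fits)
        return st->level + 1;    /* store the subtree in the lower level's memory */
    return st->level;            /* default: store it with its own level          */
}

int main(void) {
    /* mem[1]..mem[5]: invented fill levels; the level 4 memory is nearly full. */
    level_memory_t mem[6] = { {0,0}, {100,65536}, {1000,4096}, {3000,4096}, {2040,2048}, {50,2048} };
    subtree_t b21 = { 4, 20, 0 };   /* a leaf-only level 4 subtree with 20 routes */
    printf("store subtree at level %d\n", choose_storage_level(&b21, mem, 5, 2000));
    return 0;
}
```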

If subtree B21 is stored in level 5 memory 516⁴, a skip indicator stored with subtree pointer s7 in level 3 memory 516² indicates that the subtree pointer points to a subtree stored in level 5 memory. Level 4 memory is not to be searched, i.e., a search in level 4 memory is skipped because subtree B21 is stored in level 5 memory 516⁴ instead of level 4 memory 516³. Thus, no search is performed in the level 4 memory search cycle for the search key. Instead, the level 4 search is performed in the level 5 search cycle. Tree precedence constraints are not violated because level 5 memory 516⁴ is searched after level 4 memory 516³ and subtree B21 does not have any subtree pointers requiring a further level 5 search cycle for the search key.

The multi-level lookup table 500 provides a final route index 502 for a key 504. In the embodiment shown, the key 504 is 48 bits wide and includes a 32-bit Internet Protocol Version 4 ("IPv4") address and a 16-bit route table index (VPN).

The first 16 bits of the 48-bit key 504 are coupled to the L1 mapper 512 to search for a route index corresponding to the first 16 bits of the 48-bit key 504 or a pointer to a subtree stored in the next level mapper 516¹ to continue the search down the tree.

A search is performed in each level memory 516¹-516⁴ for a route index or subtree index corresponding to the result of the search of the previous respective mapper level 514¹-514⁴ and a next 8 bits of the 48-bit key 518¹-518⁴. The result of the search of the respective mapper level 514¹-514⁵ is forwarded to a pipeline 520. The result of the search of the multi-level lookup table 500 for a route corresponding to the 48-bit key 504 is output as the final index 502.

Each level memory 516¹-516⁴ includes a respective subtree memory 522¹-522⁴, a mapper 512²-512⁵ and an Arithmetic Logical Unit ("ALU") 524¹-524⁴.

The subtree memory 522¹-522⁴ stores a subtree descriptor per subtree stored in the level. The mapper 512²-512⁵ stores route indexes and subtree indexes for nodes in subtrees stored in the respective subtree memory 522¹-522⁴. The ALU generates a mapper index dependent on the result of the search of the upper level 514¹-514⁴, the next 8 bits of the key 518¹-518⁴ and the selected subtree descriptor 528¹-528⁴.

Each subtree memory 522'-522"can store dense subtree descriptors and sparse subtree descriptors. If the subtree has less than sixteen routes or subtree indexes, a sparse subtree descriptor is stored for a subtree. If the subtree has at least 16 routes or subtree indexes, a dense subtree descriptor is stored for a subtree.
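
The descriptor-type rule above reduces to a single comparison; the tiny sketch below just restates it in C, with an invented enum and function name.

```c
/* Restating the sparse/dense rule above; the enum and function are invented. */
#include <stdio.h>

typedef enum { SPARSE_DESCRIPTOR, DENSE_DESCRIPTOR } descriptor_type_t;

static descriptor_type_t descriptor_type(unsigned entries_in_subtree)
{
    /* fewer than 16 routes or subtree indexes -> sparse, otherwise dense */
    return (entries_in_subtree < 16) ? SPARSE_DESCRIPTOR : DENSE_DESCRIPTOR;
}

int main(void) {
    printf("%d %d\n", descriptor_type(5), descriptor_type(40));  /* prints: 0 1 */
    return 0;
}
```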

Dense subtree descriptors and sparse subtree descriptors are described in co-pending U.S. Patent Application No. 09/733,627, filed December 8, 2000, entitled "Method And Apparatus For Longest Match Address Lookup" by David A. Brown, the contents of which are incorporated herein by reference in its entirety.

Fig. 4 is a block diagram of the level 3 memory 516² in the multi-level lookup table 500 shown in Fig. 3. The L3 memory 516² includes L3 subtree memory 522² and an L3 mapper 512³. The width of each entry in the L3 mapper 512³ is 21 bits.

The L3 mapper 512³ stores mapper entries 610¹-610⁴ corresponding to nodes in a subtree identified by a subtree descriptor 602 stored in the L3 subtree memory 522².

A mapper entry can store a no-entry 610¹, a route index 610² or a subtree index 610³. A route index is a pointer to a location in another memory storing the route.

In an alternative embodiment, the actual route can be stored in the mapper entry instead of a route index pointing to the route. A subtree index is a pointer to a subtree descriptor stored in subtree memory in the next level mapper.

If the selected mapper entry 610 in the L3 mapper 512³ stores a subtree index 610³, 610⁴, the data stored in the mapper entry is forwarded as the result of the search of level 3 to the level 4 memory 516³ and to the pipeline 520 (Fig. 3). If the selected mapper entry 610 stores a route index 610², the data stored in the mapper entry is an index for the last memory mapper (L6). The L6 mapper stores 32-bit associated data to be returned for the route index.

Each mapper entry 610 includes a skip indicator 604, the state of which indicates whether the next lower level memory should be searched in the next level search cycle for a route corresponding to the key 504. In one embodiment, the skip indicator is one bit which is set to '1' to skip a search of the next level memory.

Instead of performing the next level search in the next level memory, the search of the next level memory is skipped and the multi-level search for a route corresponding to the key continues in the next lower level search cycle in the next lower level memory.
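
The patent fixes only a 21-bit mapper entry containing a one-bit skip indicator; the bit position and the use of the remaining 20 bits as an index in the sketch below are assumptions made to illustrate testing the skip bit during a lookup.

```c
/* Assumed layout of a 21-bit mapper entry: the width and the one-bit skip
 * indicator come from the text, but the bit positions are illustrative guesses. */
#include <stdint.h>
#include <stdio.h>

#define ENTRY_SKIP_BIT   (1u << 20)      /* assumed position of the skip indicator */
#define ENTRY_INDEX_MASK 0x000FFFFFu     /* assumed 20-bit route/subtree index     */

static int skip_next_level(uint32_t mapper_entry)
{
    return (mapper_entry & ENTRY_SKIP_BIT) != 0;   /* '1' => skip the next level memory */
}

int main(void) {
    uint32_t s7_entry = ENTRY_SKIP_BIT | 0x17u;    /* a subtree index with skip set */
    printf("skip next level: %d, index: 0x%05x\n",
           skip_next_level(s7_entry), (unsigned)(s7_entry & ENTRY_INDEX_MASK));
    return 0;
}
```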

The subtree index forwarded from the second level memory 516¹ selects a subtree entry 602 in the subtree memory 522². The subtree entry 602 includes subtree data 606, subtree pointers 608 and a default index 612. The next 8 bits of the search key [23:16] 614 and a portion of the subtree index from the level 2 memory 516¹, together with the subtree entry 602, select a mapper entry 610 in the L3 mapper memory 512³ corresponding to the selected node in the selected subtree in level 3 as shown in Fig. 2.

The skip indicator 604 in the mapper entry 610 allows distribution of subtrees among memory associated with different levels of the binary tree, increasing the number of routes stored in the lookup table without violating tree precedence constraints.

Returning to Fig. 3, routes corresponding to the longest prefix searches for keys mapping to the level 4 memory 516³ can be distributed between level 4 memory 516³ and level 5 memory 516⁴. Two multiplexers 550¹, 550² allow L4 subtrees and associated routes to be stored in either the level 4 memory 516³ or the level 5 memory 516⁴. When a subtree is stored in the level 5 memory 516⁴, the level 4 memory 516³ is skipped and the subtree index stored in the L3 mapper 512³ is forwarded directly to the L5 subtree memory 522⁴. The skip indicator from the selected mapper entry in the L3 mapper memory 512³ controls the multiplexers 550¹, 550².

To skip the search of the L4 memory 516³, the skip indicator 604 in the L3 mapper entry is set to '1' and the 8 bits of the next portion of the search key are directed to the level 5 memory 516⁴ instead of the level 4 memory 516³. The subtree index stored in the L3 mapper entry is a pointer to a subtree stored in the L5 memory and is forwarded directly to the level 5 memory. Thus, a subtree with no subtree pointers associated with level 4 of the binary tree can be stored in the level 5 memory 516⁴ and accessed as if it were stored in the level 4 memory 516³ without violating the tree precedence constraints. All routes for a particular subtree are stored in the same level memory. Thus, subtrees associated with level 4 with no subtree pointers can be stored in the level 5 memory 516⁴ instead of the level 4 memory 516³. However, only subtrees that do not require a further search in L5 memory can be moved to L5 memory; that is, subtrees with no subtree pointers to subtrees in a lower level of the binary tree.

Fig. 5 is a block diagram illustrating the level 4 subtree B21 stored in level 5 memory 516⁴. Fig. 5 is described in conjunction with Fig. 3. As shown in Fig. 2, route r6 and subtree pointer s7 are nodes in subtree B2 in level 3 of the binary tree.

Returning to Fig. 5, level 3 memory 516² stores a subtree descriptor 530 for subtree B2 in L3 subtree memory 522². The subtree descriptor 530 together with the next portion of the search key IP[23:16] selects subtree pointer s7 to subtree B21 in level 4 of the binary tree or the route index r6 stored in L3 mapper memory 512³.

The skip indicator 536 is set to '1' in the mapper entry for subtree pointer s7, indicating that a search of level 4 memory 516³ is to be skipped because the subtree descriptor 533 for B21 is stored in level 5 memory 516⁴. The L3 mapper entry data including the L3 skip indicator is stored in latch 540 for one level search cycle; that is, the search cycle for the L4 level memory. In the level 5 search cycle, the L3 mapper entry data for the search of L5 memory is provided at the input to L5 memory by multiplexer 538. The output of the L3 memory is stored for one level search cycle so that the subtree pointer from the L3 memory 516² and the key to be searched are provided to the input of the L5 memory 516⁴ in the L5 search cycle for the search key. The stored L3 skip indicator is coupled to 2:1 multiplexer 536 to select the next portion of the key to be forwarded to level 5 memory 516⁴.

If the skip indicator is '1', IP[15:8] is selected as the next portion of the key; that is, the portion associated with level 4 of the binary tree. If the skip indicator is '0', IP[7:0] is selected as the next portion of the key; that is, the portion associated with level 5 of the binary tree.
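
The Fig. 5 datapath described above can be sketched behaviorally: when the stored L3 skip indicator is set, the held L3 subtree pointer and the level 4 key bits IP[15:8] are presented to the level 5 memory; otherwise the L4 result and IP[7:0] are used. The structure and signal names in the sketch are illustrative, not the patent's.

```c
/* Behavioral sketch of the skip path around the level 4 memory. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t subtree_ptr;   /* subtree pointer held from the L3 search */
    int      skip;          /* stored L3 skip indicator                */
} l3_result_t;

/* Select the key bits and subtree pointer presented to the level 5 memory. */
static void l5_inputs(l3_result_t held_l3, uint32_t l4_subtree_ptr, uint32_t ip,
                      uint32_t *l5_key_bits, uint32_t *l5_subtree_ptr)
{
    if (held_l3.skip) {                          /* the L4 search was skipped          */
        *l5_key_bits    = (ip >> 8) & 0xFFu;     /* IP[15:8]: the level 4 key portion  */
        *l5_subtree_ptr = held_l3.subtree_ptr;   /* pointer held for one search cycle  */
    } else {
        *l5_key_bits    = ip & 0xFFu;            /* IP[7:0]: the level 5 key portion   */
        *l5_subtree_ptr = l4_subtree_ptr;        /* pointer from the L4 search         */
    }
}

int main(void) {
    l3_result_t held = { 0x17u, 1 };             /* e.g. s7 with the skip indicator set */
    uint32_t key_bits, ptr;
    l5_inputs(held, 0, 0x80200401u, &key_bits, &ptr);
    printf("L5 searches subtree 0x%x with key bits 0x%02x\n",
           (unsigned)ptr, (unsigned)key_bits);
    return 0;
}
```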

Multiplexer 538 forwards the subtree pointer to select the subtree descriptor dependent on the state of the L3 skip indicator. The subtree pointer is the subtree pointer stored in L3 memory 516² or stored in L4 memory 516³, dependent on the state of the L3 skip indicator. The L3 skip indicator is coupled to controller 546.

Controller 546 controls memory access to the level 4 memory 516³. Upon detecting the L3 skip indicator set to '1', indicating a search of L4 memory 516³ is to be skipped, the controller 546 does not issue a memory read command on the memory control signals 548 for the level 4 search cycle. Thus, the search of L4 memory 516³ is skipped.

Fig. 6 illustrates parallel multi-level searches of the lookup table shown in Fig. 3. A search for a longest prefix match for a search key commences in level 1 memory 512 and continues in the level 2 memory 516¹ based on the result of the level 1 search and the next portion of the search key 504. Subsequent searches in level 3 memory 516², level 4 memory 516³ and level 5 memory 516⁴ are performed based on the result of the previous level search and the next portion of the search key. A multi-level search for another search key can begin each time period t. As shown, a search for key 1 commences in level 1 memory 512 in time period t1, and a search for key 2 in level 1 memory 512 begins in time period t2. Thus, multiple multi-level searches can be performed in parallel in the lookup table.
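
The pipeline timing described above (a new key enters level 1 memory every time period while earlier keys occupy the lower level memories) can be reproduced with a toy schedule; the program below simply prints which key occupies which level memory in each period and is not a hardware model.

```c
/* Toy schedule, not a hardware model: one key enters level 1 memory per time
 * period, so up to five searches are in flight at once. */
#include <stdio.h>

int main(void) {
    const int levels = 5, keys = 4;
    for (int t = 1; t <= keys + levels - 1; t++) {
        printf("t%-2d:", t);
        for (int level = 1; level <= levels; level++) {
            int key = t - level + 1;           /* key occupying this level memory now */
            if (key >= 1 && key <= keys)
                printf("  L%d<-key%d", level, key);
        }
        printf("\n");
    }
    return 0;
}
```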

In the embodiment shown, a search of level 4 of the tree is skipped if the subtree is stored in L5. The skip indicator allows L4 subtrees to be stored in both the level 4 memory and the level 5 memory to increase the number of available locations for storing routes associated with level 4. However, the invention is not limited to distributing routes for a particular level between two level memories. For example, routes for a particular level can be distributed among a plurality of lower levels by increasing the number of bits in the skip indicator in the mapper entry.

A subtree can be moved to a memory associated with a lower level of the tree only if all routes for the subtree can be stored in the memory; that is, there can be no further search in a level memory associated with a lower level in the tree. Thus, only subtrees including final routes can be moved to a lower level memory. The subtree cannot include a pointer to a subtree. Routes can be redistributed by moving subtrees to the lower level memory to provide locations associated with each level for storing new routes. In one embodiment, redistribution of subtrees is performed when adding a route to a subtree in a level by replacing a pointer to another subtree storing the route. In another embodiment, subtrees are redistributed when the number of available locations falls below a predetermined threshold. In yet another embodiment, subtrees are redistributed only upon detecting no available location for storing a new route in the memory associated with the level of the route.
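
The three redistribution triggers described in this paragraph can be captured in a small policy check. The sketch below is hypothetical: the policy enum, the move_subtree_down() stub and the parameters are invented to illustrate the alternatives, not to reproduce the patent's update logic.

```c
/* Hypothetical policy check for when to push leaf-only subtrees down a level. */
#include <stdbool.h>
#include <stdio.h>

typedef enum {
    REDISTRIBUTE_ON_ROUTE_ADD,       /* when adding a route to a subtree in the level   */
    REDISTRIBUTE_BELOW_THRESHOLD,    /* when free locations fall below a threshold      */
    REDISTRIBUTE_ON_FULL             /* only when no location is left for the new route */
} policy_t;

/* Hypothetical helper: move one leaf-only subtree of 'level' into the next
 * lower level memory; stubbed to always succeed in this sketch. */
static bool move_subtree_down(int level) { (void)level; return true; }

/* Decide whether to redistribute subtrees of 'level' before inserting a route. */
static bool maybe_redistribute(policy_t policy, int level,
                               unsigned free_entries, unsigned threshold,
                               bool adding_route)
{
    switch (policy) {
    case REDISTRIBUTE_ON_ROUTE_ADD:
        return adding_route && move_subtree_down(level);
    case REDISTRIBUTE_BELOW_THRESHOLD:
        return free_entries < threshold && move_subtree_down(level);
    case REDISTRIBUTE_ON_FULL:
        return free_entries == 0 && move_subtree_down(level);
    }
    return false;
}

int main(void) {
    printf("%d\n", maybe_redistribute(REDISTRIBUTE_BELOW_THRESHOLD, 4, 10, 64, true));
    return 0;
}
```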

While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.