

Title:
SIMULTANEOUS PROCESSING OF FLOW TABLES
Document Type and Number:
WIPO Patent Application WO/2017/018989
Kind Code:
A1
Abstract:
An example method is described in which a data packet is received at a network switch. Operations are then performed simultaneously on a plurality of flow tables stored in the network switch. The operations include attempting to match a field of the data packet to a field of an entry in each of the flow tables. An action to be performed on the data packet is identified based on these operations.

Inventors:
VALVERDE GARRO DIEGO (CR)
VIQUEZ CALDERON CLAUDIO ENRIQUE (CR)
Application Number:
PCT/US2015/041980
Publication Date:
February 02, 2017
Filing Date:
July 24, 2015
Assignee:
HEWLETT PACKARD ENTPR DEV LP (US)
International Classes:
H04L47/41
Domestic Patent References:
WO2011148306A1 (2011-12-01)
Foreign References:
US20130170495A1 (2013-07-04)
US20140146674A1 (2014-05-29)
US20120099591A1 (2012-04-26)
EP2667553A1 (2013-11-27)
Attorney, Agent or Firm:
TONG, Kin-Wah et al. (US)
Claims:
What is claimed is:

1. A method, comprising:

obtaining a data packet at a network switch;

performing operations on a plurality of flow tables stored in the network switch, wherein the operations include attempting to match a field of the data packet to a field of an entry in each of the plurality of flow tables, and wherein the operations are performed on all of the plurality of flow tables simultaneously; and

identifying an action to be performed on the data packet, based on the operations.

2. The method of claim 1, wherein the operations comprise:

performing a plurality of lookups to each of the plurality of flow tables simultaneously, wherein the plurality of lookups results in a plurality of actions being stored in a data structure, the plurality of actions including each action associated with each entry in each flow table of the plurality of flow tables; and

comparing the field of the data packet to fields of each entry in each flow table of the plurality of flow tables.

3. The method of claim 2, wherein the identifying comprises:

removing from the data structure any action in the plurality of actions that corresponds to an entry in a flow table of the plurality of flow tables for which a field cannot be matched to the field of the data packet.

4. The method of claim 3, further comprising:

replacing an action that is removed from the data structure with a new action to be applied in an event that no field of a flow table of the plurality of flow tables matches the field of the data packet.

5. The method of claim 1, further comprising:

performing the action on the data packet.

6. The method of claim 1, wherein the network switch stores n flow tables, and the plurality of flow tables includes a number of flow tables between two and n.

7. The method of claim 6, wherein the number is configurable.

8. The method of claim 1, wherein the operations are performed without knowing whether a path exists to each flow table in the plurality of flow tables.

9. A non-transitory machine-readable storage medium encoded with instructions executable by a processor, the machine-readable storage medium comprising:

instructions to perform operations on a plurality of flow tables stored in a network switch, wherein the operations include attempting to match a field of an incoming data packet to a field of an entry in each of the plurality of flow tables, and wherein the operations are performed on all of the plurality of flow tables simultaneously; and

instructions to identify an action to be performed on the incoming data packet, based on the operations.

10. The non-transitory machine-readable storage medium of claim 9, wherein the instructions to perform operations comprise:

instructions to perform a plurality of lookups to each of the plurality of flow tables simultaneously, wherein the plurality of lookups results in a plurality of actions being stored in a data structure, the plurality of actions including each action associated with each entry in each flow table of the plurality of flow tables; and

instructions to compare the field of the incoming data packet to fields of each entry in each flow table of the plurality of flow tables.

11. The non-transitory machine-readable storage medium of claim 10, wherein the instructions to identify comprise:

instructions to remove from the data structure any action in the plurality of actions that corresponds to an entry in a flow table of the plurality of flow tables for which a field cannot be matched to the field of the incoming data packet.

12. The non-transitory machine-readable storage medium of claim 11, wherein the instructions to identify further comprise:

instructions to replace an action that is removed from the data structure with a new action to be applied in an event that no field of a flow table of the plurality of flow tables matches the field of the data packet.

13. The non-transitory machine-readable storage medium of claim 9, wherein the operations are performed without knowing whether a path exists to each flow table in the plurality of flow tables.

14. A computer system comprising:

a switch storing a plurality of flow tables; and

an instruction set to cooperate with the switch to:

perform operations on at least two of the plurality of flow tables stored in the switch, wherein the operations include attempting to match a field of an incoming data packet to a field of an entry in each of the at least two of the plurality of flow tables, and wherein the operations are performed on all of the at least two of the plurality of flow tables simultaneously; and

identify an action to be performed on the incoming data packet, based on the operations.

15. The computer system of claim 14, wherein the operations comprise:

performing a plurality of lookups to each of the at least two of the plurality of flow tables simultaneously, wherein the plurality of lookups results in a plurality of actions being stored in a data structure, the plurality of actions including each action associated with each entry in each flow table of the at least two of the plurality of flow tables;

comparing the field of the incoming data packet to fields of each entry in each flow table of the at least two of the plurality of flow tables; and

removing from the data structure any action in the plurality of actions that corresponds to an entry in a flow table of the at least two of the plurality of flow tables for which a field cannot be matched to the field of the incoming data packet.

Description:
SIMULTANEOUS PROCESSING OF FLOW TABLES

BACKGROUND

[0001] OpenFlow is a communication protocol that separates the data path and the control path functions. The data path resides on a switch, which is responsible for packet forwarding. The control path, however, resides on a separate controller, which is responsible for high-level routing decisions. The switch and the controller communicate using the OpenFlow protocol.

[0002] The switch maintains a set of flow tables, configured by the controller. Each entry in a flow table defines a "flow" or set of characteristics and a corresponding action to be taken on any data packet which matches the flow. The set of flow tables is numbered, and incoming packets are processed against the flow tables in order, one-by-one, to locate matching flows.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] FIG. 1 is a block diagram of an example system of the present disclosure;

[0004] FIG. 2 illustrates a flowchart of an example method for simultaneous processing of flow tables;

[0005] FIG. 3 illustrates a flowchart of another example method for simultaneous processing of flow tables; and

[0006] FIG. 4 depicts an example high-level block diagram of a computer that can be transformed into a machine that is dedicated to perform the functions described herein.

DETAILED DESCRIPTION

[0007] The present disclosure broadly discloses a method, computer system, and non-transitory computer-readable medium for simultaneous processing of flow tables. When a data packet is received by a switch that supports the OpenFlow protocol, the data packet is processed against the switch's flow tables one-by-one, in a serial fashion. For instance, the data packet is first processed against the first flow table. If a match is found in the first flow table, and the match included an instruction to process the data packet against the second flow table, then the data packet is next processed against the second flow table. The process continues like this until the data packet has been processed against each of the flow tables, some of which may match and some of which may not match. Thus, for a set of n flow tables, it takes an average of O(n) table operations to complete processing of the data packet.
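The serial, one-by-one processing described above can be sketched as follows. This is an illustrative Python model only, not the patent's implementation: each flow table is reduced to a dictionary mapping a match field to an action string, which is an assumption made for brevity.

```python
def process_serial(packet_field, flow_tables):
    """Match a packet field against each flow table in order.

    Each table is modeled as a dict mapping a match field to an
    action string. Returns the actions accumulated across all tables.
    """
    actions = []
    for table in flow_tables:          # one table per iteration: O(n) operations
        action = table.get(packet_field)
        if action is not None:         # a "hit": record the matching action
            actions.append(action)
    return actions

tables = [
    {"10.0.0.1": "forward:port1"},
    {"10.0.0.1": "forward:port2"},
    {"10.0.0.9": "drop"},              # this table will miss for the packet below
]
print(process_serial("10.0.0.1", tables))  # ['forward:port1', 'forward:port2']
```

Because each table is consulted only after the previous one completes, the total work grows linearly with the number of tables, which is the cost the speculative approach below avoids.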

[0008] Examples of the present disclosure provide a novel method for accelerating the processing of flow tables using speculative execution. For instance, when a data packet is received by a switch in a software-defined networking (SDN) network, examples of the present disclosure perform operations simultaneously on a plurality of flow tables stored in the switch. Actions associated with the flow tables are automatically retrieved and accumulated in a stack. If any of the flow tables are determined not to match the data packet, then the corresponding actions are removed from the stack or are replaced with the correspondent actions for a "miss" case. Thus, the total time taken to process a packet against a plurality of flow tables can be reduced dramatically, e.g., as compared to methods that perform operations on the flow tables one-by-one. Examples of the present disclosure can be implemented directly in hardware, e.g., using an application specific integrated circuit (ASIC), or can be implemented in software, e.g., where a separate thread runs for each table element.

[0009] Although the present disclosure is described within the context of the OpenFlow communication protocol, the OpenFlow protocol is considered to be only one specific example in which the present disclosure is applicable. For instance, examples of the present disclosure could be advantageously deployed to accelerate processing of any software-defined networking (SDN) pipeline. Moreover, examples of the disclosed techniques assume that there are no dependencies between flow tables.

[0010] FIG. 1 is a block diagram of an example system 100 of the present disclosure. In one example, the system 100 generally comprises a switch 102 and a controller 104 that communicate over a secure connection 106, such as a secure sockets layer (SSL) connection.

[0011] The switch 102 stores a plurality of flow tables 108₁-108ₙ, hereinafter collectively referred to as "flow tables 108." Each of the flow tables 108 comprises at least one entry. In one example, each entry includes at least three fields: (1) a rule (R) or set of data packet characteristics that defines a "flow," for use in classifying incoming data packets; (2) an action (A) that defines how data packets that are part of the corresponding flow should be processed (e.g., forward to a specific port, encapsulate and forward to the controller 104, or drop); and (3) statistics (S) that track the number of data packets and/or bytes for the corresponding flow and the time elapsed since an incoming data packet was last matched to the flow.
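As an illustration only, the three fields of a flow table entry might be modeled as follows; the field names and types are hypothetical, since the disclosure does not prescribe a data layout.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    rule: str                  # (1) rule R: characteristics defining the "flow"
    action: str                # (2) action A: e.g. forward, send to controller, drop
    packet_count: int = 0      # (3) statistics S: packets matched so far
    byte_count: int = 0        #     bytes matched so far
    last_matched: float = 0.0  #     time since the flow was last matched

entry = FlowEntry(rule="dst=10.0.0.1", action="forward:port1")
print(entry.action)  # forward:port1
```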

[0012] The controller 104 provides commands to the switch 102 over the secure link 106, including commands to add or delete entries in the switch's flow tables 108. The switch 102 may also forward data packets to the controller 104 over the secure link 106, for example when the data packets do not match any of the flows in the switch's flow tables 108. The switch 102 and the controller 104 may communicate according to the OpenFlow communication protocol or other protocols compatible with SDN.

[0013] When an incoming data packet is received by the switch 102, the switch 102 processes the incoming data packet against the flow tables 108 in order to determine what action or actions to take on the data packet. For instance, if the incoming data packet is matched to a flow in one of the flow tables 108, then the switch 102 will process the incoming data packet according to the actions that correspond to the flow. This may involve forwarding the incoming data packet to a particular port so that it can be delivered to one of the endpoint devices 110₁-110ₘ, hereinafter collectively referred to as "endpoint devices 110." In one example discussed in further detail in connection with FIGs. 2 and 3, the switch 102 processes the incoming data packet against a plurality of the flow tables 108 simultaneously, using a speculative execution technique.

[0014] Actions to be performed on the incoming data packet are stored by the switch 102 in a stack 112. The stack 112 is a data structure that includes a plurality of slots 114₁-114ₙ, hereinafter referred to as "slots 114," for storing the actions. In one example, the stack 112 includes one slot 114 for each of the flow tables 108 stored in the switch 102.
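A minimal sketch of such a stack, assuming one slot per flow table and representing an empty slot as None (class and method names are illustrative):

```python
class ActionStack:
    """Data structure with one slot per flow table (cf. stack 112, slots 114)."""

    def __init__(self, num_tables):
        self.slots = [None] * num_tables  # one slot per flow table

    def store(self, table_index, action):
        self.slots[table_index] = action  # speculatively record a table's action

    def remove(self, table_index, miss_action=None):
        # On a miss, drop the action or substitute the miss-case action.
        self.slots[table_index] = miss_action

stack = ActionStack(3)
stack.store(0, "forward:port1")
stack.store(1, "drop")
stack.remove(1)  # table 1 turned out not to match the packet
print(stack.slots)  # ['forward:port1', None, None]
```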

[0015] FIG. 2 illustrates a flowchart of an example method 200 for simultaneous processing of flow tables. In one example, the method 200 may be performed by the switch 102 illustrated in FIG. 1 or by a computer as illustrated in FIG. 4 and discussed below. For simplicity of explanation, the method 200 is described as being performed by the switch 102 of FIG. 1. Reference is made in the discussion of the method 200 to elements of FIG. 1 and FIG. 2.

[0016] At block 202, the method 200 begins. At block 204, the switch 102 obtains an incoming data packet.

[0017] At block 206, the switch 102 performs operations on a plurality of flow tables simultaneously. In one example, the operations performed on the plurality of flow tables include attempting to match a field of the incoming data packet to a field of an entry in each of the flow tables 108. Thus, the method 200 performs operations on flow tables even when there is no guarantee that those flow tables will match the incoming data packet, or even that a path to the flow tables will exist based on matches in other flow tables.

[0018] At block 208, the switch 102 identifies at least one action to be performed on the incoming data packet, based on the operations performed in block 206.

[0019] The method 200 ends in block 210.

[0020] FIG. 3 illustrates a flowchart of another example method 300 for simultaneous processing of flow tables. In one example, the method 300 may be performed by the switch 102 illustrated in FIG. 1 or by a computer as illustrated in FIG. 4 and discussed below. For simplicity of explanation, the method 300 is described as being performed by the switch 102 of FIG. 1. Reference is made in the discussion of the method 300 to elements of FIG. 1 and FIG. 3.

[0021] At block 302, the method 300 begins. At block 304, the switch 102 obtains an incoming data packet.

[0022] At block 306, the switch 102 retrieves a plurality of flow tables. In one example, the plurality of flow tables comprises a subset of all of the flow tables 108 stored by the switch. This subset is herein referred to as a "speculation window." The size of the speculation window, i.e., the number of flow tables in the speculation window, may be configurable based on hardware and/or software constraints or requirements. For instance, referring to the example illustrated in FIG. 1 , the speculation window may include anywhere between two and n flow tables.

[0023] At block 308, the switch 102 performs lookups to each of the flow tables in the speculation window simultaneously. In one example, performing a lookup to a flow table involves retrieving the actions associated with each of the entries in the flow table and storing those actions in the stack 112. Thus, the method 300 assumes, speculatively, that each flow table in the speculation window matches the incoming data packet.

[0024] At block 310, the switch 102 simultaneously compares the incoming data packet to the entries in each of the flow tables in the speculation window to determine which, if any, of the flow tables matches the incoming data packet. In one example, the comparisons are performed for each of the flow tables in the speculation window simultaneously. In one example, the comparison involves removing from the stack 112 any actions corresponding to flow tables that do not match the incoming data packet. In another example, the removed actions are replaced in the stack 112 with actions to be applied in cases of flow table misses (i.e., no matches).

[0025] At block 312, the switch 102 confirms that there are no flow tables remaining to be processed. If there are flow tables stored in the switch 102 for which operations have not been performed, then the method 300 returns to block 306 and retrieves a plurality of the unprocessed flow tables. The method 300 then proceeds as described above to perform simultaneous lookups and comparisons for the retrieved plurality of unprocessed flow tables.

[0026] Alternatively, if operations have been performed for all of the flow tables stored in the switch 102, then the method 300 proceeds to block 314. In block 314, the switch 102 performs the actions stored in the stack 112.

[0027] The method 300 ends in block 316.
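The blocks of the method 300 can be sketched as follows. This is an illustrative Python model only: each flow table is reduced to a single rule/action pair, and the sequential inner loops stand in for the lookups and comparisons that blocks 308 and 310 perform simultaneously in hardware.

```python
def process_speculative(packet_field, flow_tables, window_size, miss_action=None):
    """Model of method 300: process flow tables in speculation windows.

    Each table is modeled as {"rule": ..., "action": ...}. Returns the
    accumulated stack of actions, with misses replaced by miss_action.
    """
    stack = []
    # Blocks 306/312: take the next speculation window of unprocessed tables.
    for start in range(0, len(flow_tables), window_size):
        window = flow_tables[start:start + window_size]
        # Block 308: speculatively store every window table's action.
        slots = [table["action"] for table in window]
        # Block 310: compare the packet against each table; replace misses.
        for i, table in enumerate(window):
            if table["rule"] != packet_field:
                slots[i] = miss_action
        stack.extend(slots)
    # Block 314: the caller would now perform the actions remaining in the stack.
    return stack

tables = [
    {"rule": "10.0.0.1", "action": "forward:port1"},
    {"rule": "10.0.0.9", "action": "drop"},           # will miss
    {"rule": "10.0.0.1", "action": "forward:port2"},
]
print(process_speculative("10.0.0.1", tables, window_size=2))
# ['forward:port1', None, 'forward:port2']
```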

[0028] The methods 200 and 300 can decrease the processing time for an SDN pipeline. For example, assume that it takes one clock cycle to perform a table lookup and one clock cycle to accumulate actions in the stack. Further assume that the SDN pipeline includes fifteen flow tables, i.e., n = 15. If one were to perform lookups to the fifteen flow tables one-by-one, and if every flow table matched an incoming data packet, then two clock cycles would be consumed in the processing of each flow table: one clock cycle to perform the lookup, and one clock cycle to store the associated actions. Thus, a total of thirty clock cycles would be consumed. If, instead, lookups and comparisons are performed for each of the fifteen flow tables simultaneously as described herein, then only two clock cycles are consumed: one clock cycle to perform all of the lookups simultaneously, and one clock cycle to accumulate the actions from the matching flow tables. It is noted that this example represents a best case scenario in which every flow table matches the incoming data packet. For any flow tables that do not match the incoming data packet, one additional clock cycle will be consumed to remove the corresponding actions from the stack.
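The arithmetic in the best-case scenario above (all fifteen tables matching, under the assumed per-operation costs) can be checked directly:

```python
n = 15                     # flow tables in the SDN pipeline
lookup, accumulate = 1, 1  # assumed clock cycles per operation

serial_cycles = n * (lookup + accumulate)  # one lookup + one store per table
speculative_cycles = lookup + accumulate   # all lookups at once, one accumulate

print(serial_cycles, speculative_cycles)  # 30 2
```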

[0029] It should be noted that although not explicitly specified, at least one block, function, or operation of the method 200 or the method 300 described above may include a storing, displaying and/or outputting operation depending upon the particular application. In other words, any data, records, fields, and/or intermediate results discussed in the methods can be stored, displayed, and/or outputted to another device depending upon the particular application. Furthermore, blocks, functions, or operations in FIG. 2 and FIG. 3 that recite a determining operation, or involve a decision, do not necessarily imply that both branches of the determining operation be practiced. In other words, one of the branches of the determining operation can be deemed as an optional step.

[0030] FIG. 4 depicts an example high-level block diagram of a computer that can be transformed into a machine that is dedicated to perform the functions described herein. Notably, no computer or machine currently exists that performs the functions as described herein. As a result, the examples of the present disclosure improve the operation and functioning of the computer to simultaneously process a plurality of flow tables, as disclosed herein.

[0031] As depicted in FIG. 4, the computer 400 comprises a hardware processor element 402, e.g., a central processing unit (CPU), a microprocessor, or a multi-core processor, a memory 404, e.g., random access memory (RAM) and/or read only memory (ROM), a module 405 for processing flow tables, and various input/output devices 406, e.g., storage devices, including but not limited to, a tape drive, a floppy drive, a hard disk drive or a compact disk drive, a receiver, a transmitter, a speaker, a display, a speech synthesizer, an output port, an input port and a user input device, such as a keyboard, a keypad, a mouse, a microphone, and the like. Although only one processor element is shown, it should be noted that the computer may employ a plurality of processor elements. Furthermore, although one computer is shown in the figure, if the method(s) as discussed above is implemented in a distributed or parallel manner for a particular illustrative example, i.e., the blocks of the above method(s) or the entire method(s) are implemented across multiple or parallel computers, then the computer of this figure is intended to represent each of those multiple computers. Furthermore, at least one hardware processor can be utilized in supporting a virtualized or shared computing environment. The virtualized computing environment may support one or more virtual machines representing computers, servers, or other computing devices. In such virtual machines, hardware components such as hardware processors and computer-readable storage devices may be virtualized or logically represented.

[0032] It should be noted that the present disclosure can be implemented by machine readable instructions and/or in a combination of machine readable instructions and hardware, e.g., using application specific integrated circuits (ASIC), a programmable logic array (PLA), including a field-programmable gate array (FPGA), or a state machine deployed on a hardware device, a computer or any other hardware equivalents, e.g., computer readable instructions pertaining to the method(s) discussed above can be used to configure a hardware processor to perform the blocks, functions and/or operations of the above disclosed methods. In one example, instructions and data for the present module or process 405 for simultaneous processing of flow tables, e.g., machine readable instructions can be loaded into memory 404 and executed by hardware processor element 402 to implement the blocks, functions or operations as discussed above in connection with the exemplary method 200 or method 300. Furthermore, when a hardware processor executes instructions to perform "operations", this could include the hardware processor performing the operations directly and/or facilitating, directing, or cooperating with another hardware device or component, e.g., a co-processor and the like, to perform the operations.

[0033] For instance, the module 405 may include a plurality of programming code components, including a lookup component 408, a comparison component 410, and a stack builder component 412. These programming code components may be included in a computing system that includes a switch and a controller that cooperate to perform packet forwarding, such as the system 100.

[0034] The lookup component 408 may be configured to simultaneously retrieve the actions associated with each of the entries in a plurality of flow tables and to send those actions to a stack or similar data structure.

[0035] The comparison component 410 may be configured to simultaneously compare an incoming data packet to the entries in each of a plurality of flow tables and to determine which, if any, of the flow tables matches the incoming data packet. The comparison component 410 may be further configured to issue a command to remove any actions corresponding to flow tables that do not match the incoming data packet from a stack or similar data structure.

[0036] The stack builder component 412 may be configured to generate a stack of actions to be performed on an incoming data packet, based on commands received from the lookup component 408 and/or the comparison component 410.

[0037] The processor executing the machine readable instructions relating to the above described method(s) can be perceived as a programmed processor or a specialized processor. As such, the present module 405 for simultaneous processing of flow tables, including associated data structures, of the present disclosure can be stored on a tangible or physical (broadly non-transitory) computer-readable storage device or medium, e.g., volatile memory, nonvolatile memory, ROM memory, RAM memory, magnetic or optical drive, device or diskette and the like. More specifically, the computer-readable storage device may comprise any physical devices that provide the ability to store information such as data and/or instructions to be accessed by a processor or a computing device such as a computer or an application server.

[0038] It will be appreciated that variants of the above-disclosed and other features and functions, or alternatives thereof, may be combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.




 