

Title:
TECHNOLOGIES FOR NETWORK APPLICATION PROGRAMMING WITH FIELD-PROGRAMMABLE GATE ARRAYS
Document Type and Number:
WIPO Patent Application WO/2019/009980
Kind Code:
A1
Abstract:
Technologies for network application programming include a computing device that analyzes a network application source program. The source program includes a declarative description of a network application in a domain-specific language, such as P4. The computing device translates the declarative description of the network application into a register-transfer level (RTL) description, and then compiles the RTL description into a bitstream definition that is targeted to an FPGA. For example, the computing device may generate a parse graph based on the network application source program, and then generate an RTL TCAM-SRAM structure for each node of the parse graph. The computing device may optimize the RTL description, for example by simplifying RTL structures or removing unused logic. The computing device may program an FPGA with the bitstream definition. Other embodiments are described and claimed.

Inventors:
DALY DANIEL (US)
WILLIS THOMAS (US)
WANG PAT (US)
ANAND VISHAL (US)
NGUYEN HUNG (US)
APTE VARSHA (US)
Application Number:
PCT/US2018/036449
Publication Date:
January 10, 2019
Filing Date:
June 07, 2018
Assignee:
INTEL CORP (US)
International Classes:
G06F13/10
Foreign References:
US20130263082A12013-10-03
US20170068751A12017-03-09
US9223921B12015-12-29
US20040143801A12004-07-22
Other References:
WESLEY G. PECK: "Hardware/Software Co-Design via Specification Refinement", PHD DISSERTATION, 31 December 2011 (2011-12-31), University of Kansas, pages 1 - 173, XP055564532, Retrieved from the Internet
Attorney, Agent or Firm:
KELLETT, Glen, M. (US)
Claims:
WHAT IS CLAIMED IS:

1. A computing device for network application programming, the computing device comprising:

source analyzer circuitry to analyze a network application source program, wherein the network application source program comprises a declarative description of a network application in a domain-specific language;

translator circuitry to translate the declarative description of the network application into a register-transfer level description of the network application; and

compiler circuitry to compile the register-transfer level description of the network application into a bitstream definition of the network application, wherein the bitstream is targeted to a field-programmable gate array.

2. The computing device of claim 1, further comprising programmer circuitry to program the field-programmable gate array with the bitstream definition in response to compilation of the register-transfer level description.

3. The computing device of claim 1, further comprising programmer circuitry to partially reconfigure the field-programmable gate array with the bitstream definition in response to compilation of the register-transfer level description.

4. The computing device of claim 3, further comprising:

programmer circuitry to switch from an active block of the field-programmable gate array to a backup block of the field-programmable gate array in response to partial reconfiguration of the field-programmable gate array;

wherein to partially reconfigure the field-programmable gate array comprises to program the backup block with the bitstream definition.

5. The computing device of claim 1, wherein to compile the register-transfer level description of the network application into a bitstream definition comprises to optimize the register-transfer level description of the network application.

6. The computing device of any of claims 1-5, wherein:

to analyze the network application source program comprises to generate a parse graph based on the network application source program, wherein the parse graph comprises a plurality of nodes; and

to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate a plurality of TCAM-SRAM structures, wherein each TCAM-SRAM structure corresponds to a node of the parse graph.

7. The computing device of claim 6, wherein to compile the register-transfer level description of the network application into the bitstream definition comprises to optimize the plurality of TCAM-SRAM structures to generate corresponding logic gates.

8. The computing device of any of claims 1-5, wherein:

to analyze the network application source program comprises to determine a control flow of the network application, wherein the control flow is indicative of an order of match-action tables; and

to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate multiplexer logic based on the control flow.

9. The computing device of any of claims 1-5, wherein:

to analyze the network application source program comprises to determine a control flow of the network application, wherein the control flow is indicative of an order of match-action tables; and

to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate congestion management logic based on the control flow.

10. The computing device of any of claims 1-5, wherein:

to analyze the network application source program comprises to analyze a definition of a match-action table; and

to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate a lookup block to access the match-action table in an external memory.

11. The computing device of claim 10, wherein to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate a modify block to apply one or more actions from the match-action table to a network packet.

12. A method for network application programming, the method comprising:

analyzing, by a computing device, a network application source program, wherein the network application source program comprises a declarative description of a network application in a domain-specific language;

translating, by the computing device, the declarative description of the network application into a register-transfer level description of the network application; and

compiling, by the computing device, the register-transfer level description of the network application into a bitstream definition of the network application, wherein the bitstream is targeted to a field-programmable gate array.

13. The method of claim 12, further comprising:

programming, by the computing device, the field-programmable gate array with the bitstream definition in response to compiling the register-transfer level description.

14. The method of claim 12, further comprising partially reconfiguring, by the computing device, the field-programmable gate array with the bitstream definition in response to compiling the register-transfer level description.

15. The method of claim 14, further comprising:

switching, by the computing device, from an active block of the field-programmable gate array to a backup block of the field-programmable gate array in response to partially reconfiguring the field-programmable gate array;

wherein partially reconfiguring the field-programmable gate array comprises programming the backup block with the bitstream definition.

16. The method of claim 12, wherein compiling the register-transfer level description of the network application into a bitstream definition comprises optimizing the register-transfer level description of the network application.

17. The method of claim 12, wherein:

analyzing the network application source program comprises generating a parse graph based on the network application source program, wherein the parse graph comprises a plurality of nodes; and

translating the declarative description of the network application into the register-transfer level description of the network application comprises generating a plurality of TCAM-SRAM structures, wherein each TCAM-SRAM structure corresponds to a node of the parse graph.

18. The method of claim 17, wherein compiling the register-transfer level description of the network application into the bitstream definition comprises optimizing the plurality of TCAM-SRAM structures to generate corresponding logic gates.

19. The method of claim 12, wherein:

analyzing the network application source program comprises determining a control flow of the network application, wherein the control flow is indicative of an order of match-action tables; and

translating the declarative description of the network application into the register-transfer level description of the network application comprises generating multiplexer logic based on the control flow.

20. The method of claim 12, wherein:

analyzing the network application source program comprises determining a control flow of the network application, wherein the control flow is indicative of an order of match-action tables; and

translating the declarative description of the network application into the register-transfer level description of the network application comprises generating congestion management logic based on the control flow.

21. The method of claim 12, wherein:

analyzing the network application source program comprises analyzing a definition of a match-action table; and

translating the declarative description of the network application into the register-transfer level description of the network application comprises generating a lookup block to access the match-action table in an external memory.

22. The method of claim 21, wherein translating the declarative description of the network application into the register-transfer level description of the network application comprises generating a modify block to apply one or more actions from the match-action table to a network packet.

23. A computing device comprising:

a processor; and

a memory having stored therein a plurality of instructions that when executed by the processor cause the computing device to perform the method of any of claims 12-22.

24. One or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device performing the method of any of claims 12-22.

25. A computing device comprising means for performing the method of any of claims 12-22.

Description:
TECHNOLOGIES FOR NETWORK APPLICATION PROGRAMMING WITH FIELD-PROGRAMMABLE GATE ARRAYS

CROSS-REFERENCE TO RELATED U.S. PATENT APPLICATION

[0001] The present application claims priority to U.S. Utility Patent Application Serial No. 15/644,150, entitled "TECHNOLOGIES FOR NETWORK APPLICATION PROGRAMMING WITH FIELD-PROGRAMMABLE GATE ARRAYS," which was filed on July 07, 2017.

BACKGROUND

[0002] Modern computing devices may include general-purpose processor cores as well as a variety of hardware accelerators for performing specialized tasks. Certain computing devices may include one or more field-programmable gate arrays (FPGAs), which may include programmable digital logic resources that may be configured by the end user or system integrator. In some computing devices, an FPGA may be used to perform network packet processing tasks instead of using general-purpose compute cores.

[0003] P4 is a declarative programming language that may be used to specify how a network switch processes packets. P4 programs have been targeted to execution on FPGAs by building a fully flexible, programmable packet processing pipeline in gates on the FPGA and then compiling the P4 program to be executed by that packet processing pipeline.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.

[0005] FIG. 1 is a simplified block diagram of at least one embodiment of a computing device for network programming with a field-programmable gate array;

[0006] FIG. 2 is a simplified block diagram of at least one embodiment of an environment of the computing device of FIG. 1;

[0007] FIG. 3 is a simplified flow diagram of at least one embodiment of a method for network programming that may be executed by the computing device of FIGS. 1 and 2; and

[0008] FIG. 4 is a simplified schematic diagram of at least one embodiment of an RTL FPGA application that may be generated by the computing device of FIGS. 1-2.

DETAILED DESCRIPTION OF THE DRAWINGS

[0009] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.

[0010] References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).

[0011] The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).

[0012] In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.

[0013] Referring now to FIG. 1, an illustrative computing device 100 for network programming includes a processor 120 and a field-programmable gate array (FPGA) 130. In use, as described below, the computing device 100 analyzes a source program that includes a network application written in a domain-specific, declarative language such as P4. The computing device 100 translates the declarative program into a register transfer level (RTL) description of the application for the FPGA 130. The computing device 100 compiles the RTL description into a bitstream for the FPGA 130 and then may program the FPGA 130 with the bitstream. The computing device 100 may optimize the RTL description during compilation, which may produce an optimized bitstream with reduced resource usage and/or improved performance. Thus, the computing device 100 may allow for network application development using a flexible, high-productivity declarative language such as P4, while still allowing the application to be run efficiently on FPGA hardware. Accordingly, the computing device 100 may perform network packet processing using the FPGA 130 with improved clock speeds and/or reduced physical resource usage. Additionally or alternatively, although illustrated as including an FPGA 130, it should be understood that in some embodiments the computing device 100 may translate and compile the source program into a bitstream targeted to an FPGA 130 of a different computing device 100.

[0014] The processor 120 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 120 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. Similarly, the memory 124 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 124 may store various data and software used during operation of the computing device 100 such as operating systems, applications, programs, libraries, and drivers. The memory 124 is communicatively coupled to the processor 120 via the I/O subsystem 122, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 120, the memory 124, and other components of the computing device 100. For example, the I/O subsystem 122 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, sensor hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 122 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 120, the memory 124, and other components of the computing device 100, on a single integrated circuit chip.

[0015] The data storage device 126 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, non-volatile flash memory, or other data storage devices. The computing device 100 may also include a communications subsystem 128, which may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the computing device 100 and other remote devices over a computer network (not shown). For example, the communications subsystem 128 may be embodied as or otherwise include a network interface controller (NIC) for sending and/or receiving network data with remote devices. The communications subsystem 128 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, InfiniBand®, Bluetooth®, Wi-Fi®, WiMAX, 3G, 4G LTE, etc.) to effect such communication.

[0016] As shown in FIG. 1, the computing device 100 includes a field-programmable gate array (FPGA) 130. The FPGA 130 may be embodied as an integrated circuit including programmable digital logic resources that may be configured after manufacture. The FPGA 130 may include, for example, a configurable array of logic blocks in communication over a configurable data interchange. The FPGA 130 may be coupled to the processor 120 via a high-speed connection interface such as a peripheral bus (e.g., a PCI Express bus) or an inter-processor interconnect (e.g., an in-die interconnect (IDI) or QuickPath Interconnect (QPI)), via a fabric interconnect such as Intel® Omni-Path Architecture, or via any other appropriate interconnect. Additionally, although illustrated in FIG. 1 as a discrete component separate from the processor 120 and/or the I/O subsystem 122, it should be understood that in some embodiments the FPGA 130, the processor 120, the I/O subsystem 122, and/or the memory 124 may be incorporated in the same package and/or in the same computer chip, for example in the same SoC. In some embodiments, the FPGA 130 may be incorporated as part of a network interface controller (NIC) of the computing device 100 and/or included in the same multi-chip package as the NIC. In some embodiments, the FPGA 130 may be integrated with the NIC on the same die (e.g., SoC).

[0017] The computing device 100 may further include one or more peripheral devices 132. The peripheral devices 132 may include any number of additional input/output devices, interface devices, and/or other peripheral devices. For example, in some embodiments, the peripheral devices 132 may include a touch screen, graphics circuitry, a graphical processing unit (GPU) and/or processor graphics, an audio device, a microphone, a camera, a keyboard, a mouse, a network interface, and/or other input/output devices, interface devices, and/or peripheral devices.

[0018] Referring now to FIG. 2, in an illustrative embodiment, the computing device 100 establishes an environment 200 during operation. The illustrative environment 200 includes a source analyzer 202, a translator 204, a compiler 206, and a programmer 208. The various components of the environment 200 may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment 200 may be embodied as circuitry or a collection of electrical devices (e.g., source analyzer circuitry 202, translator circuitry 204, compiler circuitry 206, and/or programmer circuitry 208). It should be appreciated that, in such embodiments, one or more of the source analyzer circuitry 202, the translator circuitry 204, the compiler circuitry 206, and/or the programmer circuitry 208 may form a portion of the processor 120, the I/O subsystem 122, and/or other components of the computing device 100. Additionally, in some embodiments, one or more of the illustrative components may form a portion of another component and/or one or more of the illustrative components may be independent of one another.

[0019] The source analyzer 202 is configured to analyze a network application source program 210. The network application source program 210 may be embodied as a source code file or other computer program that includes a declarative description of a network application in a domain-specific language, such as the P4 language. Analyzing the network application source program 210 may include generating a parse graph including multiple nodes, determining a control flow that is indicative of an order of match-action tables, and/or analyzing definitions of the match-action tables.

[0020] The translator 204 is configured to translate the declarative description of the network application into a register-transfer level (RTL) description 212 of the network application. The RTL description 212 may be embodied as any computer file or other data that includes an RTL description of the network application. For example, the RTL description 212 may include a description of the network application in an RTL hardware description language such as Verilog or VHDL. Translating the declarative description may include generating multiple TCAM-SRAM structures, with each TCAM-SRAM structure corresponding to a node of the parse graph. Translating the declarative description may also include generating multiplexer logic and/or congestion management logic based on the control flow. Translating the declarative description may also include generating one or more lookup blocks to access the match-action tables in a memory external to the FPGA 130 (e.g., DRAM). Translating the declarative description may include generating a modify block to apply one or more actions from the match-action table to a network packet.

[0021] The compiler 206 is configured to compile the RTL description 212 of the network application into a bitstream definition 214 of the network application. The bitstream definition 214 is targeted to the FPGA 130. Compiling the RTL description 212 of the network application into the bitstream definition 214 may include optimizing the RTL description 212.

[0022] The programmer 208 is configured to program the FPGA 130 with the bitstream definition 214 in response to compilation. In some embodiments, the programmer 208 may be configured to partially reconfigure the FPGA 130 with the bitstream definition 214. Additionally or alternatively, in some embodiments partially reconfiguring the FPGA 130 may include programming a backup block of the FPGA 130 with the bitstream definition 214. In those embodiments, the programmer 208 may be configured to switch from an active block of the FPGA 130 to the backup block in response to the partial reconfiguration.

[0023] Referring now to FIG. 3, in use, the computing device 100 may execute a method 300 for network application programming with the FPGA 130. It should be appreciated that, in some embodiments, the operations of the method 300 may be performed by one or more components of the environment 200 of the computing device 100 as shown in FIG. 2. The method 300 begins in block 302, in which the computing device 100 analyzes a network application program written in a domain-specific language. For example, the network application may be defined by a source program 210 that includes a P4 program. The network application may define a packet processing pipeline. For example, in some embodiments, the network application may implement an exact match cache (EMC) that may be used to offload packet processing tasks for an Open vSwitch programmable virtual switch. In some embodiments, in block 304 the computing device 100 may generate a parse graph based on a parser definition of the source program 210. The parser definition identifies one or more fields (e.g., header, data, or other fields) that are to be extracted from an incoming packet. In some embodiments, in block 306, the computing device 100 may determine control flow based on a control flow definition of the source program. The control flow determines which match-action tables are looked up and in what order.

[0024] In block 308, the computing device 100 translates the domain-specific application definition into a register-transfer level (RTL) description 212 for the FPGA 130. In some embodiments, in block 310, the computing device 100 may generate an RTL TCAM+SRAM structure for each node of the parse graph. In some embodiments, in block 312, the computing device 100 may generate RTL multiplexer logic based on the control flow definition. The multiplexer logic may concatenate multiple fields identified by the parser into a match key, which is used to look up an entry in a match-action table. The control flow multiplexer logic also defines what actions are taken if a match-action table misses, and how the results are interpreted if there is a hit. In some embodiments, in block 314, the computing device 100 may generate RTL congestion management logic based on the control flow definition. The congestion management logic may be generated as state machine logic at the end of the packet processing pipeline, for example comparing a set of watermarks against the current utilization of packet memory.
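
By way of illustration only, the per-node translation of block 310 might be expressed in Verilog roughly as follows. The module names, stage count, and signal widths are assumptions of this sketch, not details from the application; the parser_stage module is one hypothetical form of the stage 406 described below with FIG. 4.

module parser_pipeline #(
    parameter NUM_STAGES = 4,   // deepest path of the inferred parse graph
    parameter STATE_W    = 64   // parser state carried between stages
) (
    input  wire               clk,
    input  wire [127:0]       pkt_data,   // window of packet bytes
    input  wire [STATE_W-1:0] state_in,
    output wire [STATE_W-1:0] state_out
);
    // One TCAM-SRAM stage per node of the parse graph, chained in order.
    wire [STATE_W-1:0] state [0:NUM_STAGES];
    assign state[0]  = state_in;
    assign state_out = state[NUM_STAGES];

    genvar i;
    generate
        for (i = 0; i < NUM_STAGES; i = i + 1) begin : stages
            parser_stage #(.STATE_W(STATE_W)) stage_i (
                .clk       (clk),
                .pkt_data  (pkt_data),
                .state_in  (state[i]),
                .state_out (state[i+1])
            );
        end
    endgenerate
endmodule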

[0025] Referring now to FIG. 4, diagram 400 illustrates one potential embodiment of an RTL description 212 that may be generated by the computing device 100. As described above, the RTL description 212 may be generated by the computing device 100 based on a source program in a domain-specific declarative programming language such as P4. As shown, the RTL description 212 may include an ingress block 402, a parser 404, a control block 410, a memory lookup block 412, a packet modification block 414, a congestion management block 424, and an egress block 426. Those components may operate as a fixed-function packet processing pipeline, based on the P4 definition of the network application program.

[0026] The ingress block 402 receives network packets from the communication subsystem 128. For example, the ingress block 402 may receive network packets from one or more MACs or other network port logic. Incoming network packet data may be placed on a ring, bus, or otherwise communicated to other blocks of the FPGA 130, such as the parser 404.

[0027] The parser 404 parses an incoming packet to identify one or more headers within the packet. The parser may receive packet data from the ingress block 402 in an interleaved manner. After parsing, the parser 404 may forward packet data, lookup results, and other packet information (such as first and last cell address) to packet memory. As shown, the parser 404 may include multiple stages 406, each of which corresponds to a node of the parse graph. Thus, the number of stages 406 may be determined by the deepest path of the parse graph inferred from the source program 210.

[0028] Each stage 406 includes a TCAM-SRAM block 408. The maximum number of fields that will be looked at in parallel is inferred from the source program 210. During the parsing of the packet, a header offset, header type, and field offsets may be maintained. The header type indicates which type of header is currently being parsed, the header offset points to the start of this header in the packet, and field offsets relative to the header offset point to the fields that should be extracted for lookup. The header type and packet data from those field offsets may be used to form a TCAM lookup key. This lookup key is matched against the static entries for that particular stage 406. The SRAM entry corresponding to the matching TCAM entry indicates how all the fields representing the state should be updated. In addition to updating the state, the parsing stage 406 also populates the lookup fields specified in the source program 210. The SRAM also contains an entry to indicate when parsing should end. The parser 404 drives the lookup results to the control block 410 when the end of packet (EOP) for the packet is received. Parsing results may be transferred, for example, using a buffer of packet lookup descriptor entries.
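
A minimal Verilog sketch of one such stage follows. The state layout, widths, entry count, and the single extracted field byte are assumptions chosen for the sketch; a real stage would extract several fields in parallel and index them relative to the maintained header offset.

module parser_stage #(
    parameter STATE_W = 64,
    parameter KEY_W   = 24,  // {header type, one extracted field byte}
    parameter ENTRIES = 8
) (
    input  wire               clk,
    input  wire [127:0]       pkt_data,
    input  wire [STATE_W-1:0] state_in,
    output reg  [STATE_W-1:0] state_out
);
    // Hypothetical state layout: current header type in the low bits.
    wire [15:0] header_type = state_in[15:0];

    // Packet byte at one compile-time field offset (illustrative only).
    localparam FIELD_OFFSET = 12;
    wire [7:0] field_byte = pkt_data[8*FIELD_OFFSET +: 8];

    // TCAM side: static value/mask entries, fixed at compile time from
    // the parser definition in the source program.
    reg [KEY_W-1:0] tcam_value [0:ENTRIES-1];
    reg [KEY_W-1:0] tcam_mask  [0:ENTRIES-1];
    // SRAM side: per-entry results (next header type/offset, done flag).
    reg [STATE_W-1:0] sram_result [0:ENTRIES-1];

    wire [KEY_W-1:0] key = {header_type, field_byte};

    integer j;
    always @(posedge clk) begin
        state_out <= state_in;  // default: no entry matched, carry state
        // Walk entries high-to-low so the lowest-numbered match wins.
        for (j = ENTRIES - 1; j >= 0; j = j - 1)
            if ((key & tcam_mask[j]) == (tcam_value[j] & tcam_mask[j]))
                state_out <= sram_result[j];  // SRAM result updates state
    end
endmodule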

[0029] The control block 410 determines which match-action tables are looked up and in what order. The control block 410 implements the control flow defined by the source program 210. As described above, the control block 410 includes multiplexer logic to concatenate multiple fields identified by the parser 404 into a match key. The match key is provided to the memory lookup block 412 to look up an entry in a match-action table. The control block 410 multiplexer logic also defines what actions are taken if a match-action table misses, and how the results are interpreted if there is a hit. At the end of the control flow, results from the table lookups (i.e., modification requests) are queued up and sent to the packet modification block 414.
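
As a concrete illustration, this key-formation step might reduce to a concatenation plus a hit/miss select, as in the following sketch; the field names, widths, and miss-action encoding are hypothetical rather than taken from the application.

module control_key_mux (
    input  wire [31:0] ipv4_src,    // fields populated by the parser 404
    input  wire [31:0] ipv4_dst,
    input  wire [15:0] l4_dport,
    input  wire        table_hit,   // from the memory lookup block 412
    input  wire [7:0]  hit_action,  // action id stored in the table entry
    output wire [79:0] match_key,   // key sent to the memory lookup block
    output wire [7:0]  action
);
    localparam [7:0] MISS_ACTION = 8'h00;  // e.g., punt to a slow path

    // Concatenate parser-extracted fields into the match key.
    assign match_key = {ipv4_dst, ipv4_src, l4_dport};
    // Control-flow multiplexer: interpret the table result on hit/miss.
    assign action = table_hit ? hit_action : MISS_ACTION;
endmodule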

[0030] The memory lookup block 412 accesses external memory (e.g., DDR memory) that contains the match-action tables. For example, the memory lookup block 412 may look up match-action tables in the main memory 124 or in a DDR memory coupled to the FPGA 130. The memory lookup block 412 may receive a lookup key (e.g., a concatenation of various packet header fields) from the control block 410, and then look up an entry matching the lookup key in the match-action table. The key definition, result definition, and the number of times the memory is accessed are defined in the control flow of the source program 210. For each match, the memory lookup block 412 retrieves a set of bytes that are interpreted by the fixed-function pipeline of the FPGA 130. Every entry in the match-action table may be counted (e.g., both the number of packets hitting the entry and the number of bytes hitting the entry). This allows the network application to determine the size of the lookup performed, which is directly related to the number of packets per second supported by the pipeline. The match-action tables may be populated by an administrator or by a software pipeline, for example by an Open vSwitch (OVS) control plane. For example, an OVS software pipeline may translate one or more rules into match-action table entries on the first packet of every new flow, and then install those entries into the appropriate match-action table. The memory lookup block 412 may return one or more commands, such as packet modification commands, to be performed from the match-action tables. The commands may be transferred to the packet modification block 414 using a buffer of packet transmit descriptors.
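
One possible shape for the lookup block's memory handshake is sketched below. The XOR-fold hash, the flat {stored key, result} entry layout, and the single-outstanding-request interface are simplifying assumptions for illustration; a real DDR controller interface would be wider and pipelined.

module mem_lookup #(
    parameter KEY_W  = 80,
    parameter DATA_W = 64,
    parameter ADDR_W = 16
) (
    input  wire                    clk,
    input  wire                    req_valid,
    input  wire [KEY_W-1:0]        req_key,
    // Toward the external memory controller (e.g., DDR).
    output wire [ADDR_W-1:0]       mem_addr,
    input  wire                    mem_rvalid,
    input  wire [KEY_W+DATA_W-1:0] mem_rdata,  // {stored_key, result}
    // Toward the control block / packet modification block.
    output reg                     hit,
    output reg  [DATA_W-1:0]       result
);
    // Remember the outstanding key so the returned entry can be checked.
    reg [KEY_W-1:0] key_q;
    always @(posedge clk)
        if (req_valid) key_q <= req_key;

    // Simple XOR-fold of the key into a table address (illustrative only).
    assign mem_addr = req_key[ADDR_W-1:0] ^ req_key[2*ADDR_W-1:ADDR_W];

    always @(posedge clk)
        if (mem_rvalid) begin
            hit    <= (mem_rdata[KEY_W+DATA_W-1:DATA_W] == key_q);
            result <= mem_rdata[DATA_W-1:0];
        end
endmodule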

[0031] The packet modification block 414 performs the actions that were specified by the match-action tables. The packet modification block 414 performs any header modification before sending the packet to the destination port. The packet modification block 414 receives the packet data from the pipeline and the modification commands from the control flow. The packet modification block 414 may be capable of performing any packet encapsulation, de-encapsulation, or modification that has been specified in the network application of the source program 210. At egress, the packet modification block 414 may maintain a per-port packet realign block 422 that re-aligns the packet data to 64 bytes before transferring the packet data on a ring or other internal data interconnect. The packet modification block 414 may also maintain a per-port context that contains the current state of the header, including partial checksums, number of bytes remaining in the packet, total incoming packet length, first segment address, last segment address, and/or other state information. The checksum may be a domain-specific function to correctly calculate IP and L4 checksums in each header. The packet modification block 414 may be capable of supporting packet segmentation. The incoming packet may contain a segmentation enable and a maximum length for each segment. The control block 410 may still schedule the packet to the packet modification block 414 as a single packet; however, the packet modification block 414 may chop the packet into smaller packets. The headers that need to be added for every segment are stored when the first packet is received. The control block 410 may also schedule the packet to the packet modification block 414 with a list of pointers, where each pointer points to a fragment of the packet. The fragments in this list may be concatenated back together before the other modification operations are applied.

[0032] The packet modification block 414 may include one or more sub-blocks to perform the requested commands, such as a strip block 416, a modify block 418, and/or an insert block 420. The packet modification block 414 may receive the starting offset for each header from the parser 404. Header strip commands may be received in the form of an n-bit vector. The value of n may be inferred from the source program 210. Each bit in the vector represents a valid header in the packet. The strip block 416 may remove any header whose corresponding bit has a value of 0. Packet modify commands may be in the format {header number, field offset, field length, source of new value}. The header number indicates which header the command is applicable to, starting from number 0 at the MSB of the packet. For example, a packet with Ethernet, VLAN, IPv4, and TCP headers will have header 0 for Ethernet, header 1 for VLAN, header 2 for IPv4, and header 3 for TCP. The source field may represent either a static value, a value contained in a packet transmit descriptor (received from the memory lookup block 412), an index into a memory, or an offset into the current packet. Insert commands may be in the format {header number, source of header}. The header number indicates where the header should be inserted. The source of the header may be a memory.
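
The packed modify command described above might be decoded as in the following sketch; the field widths and the two-bit source encoding are assumptions chosen for illustration.

module modify_cmd_decode (
    input  wire [21:0] cmd,          // {header number, field offset,
                                     //  field length, source of new value}
    output wire [3:0]  header_num,   // header 0 is at the MSB of the packet
    output wire [7:0]  field_offset, // byte offset within that header
    output wire [7:0]  field_length, // number of bytes to overwrite
    output wire [1:0]  src_sel       // 0=static value, 1=transmit
                                     // descriptor, 2=memory, 3=packet
);
    assign {header_num, field_offset, field_length, src_sel} = cmd;
endmodule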

[0033] The congestion management block 424 may perform congestion management operations, including dropping and/or delaying transmission of the packet after packet modifications are completed. The congestion management block 424 may be generated based on one or more control flow definitions of the source program 210. The congestion management block 424 is emitted as state machine logic at the end of the pipeline. Congestion management involves comparing a set of watermarks against the current utilization of the packet memory. The congestion management logic may apply flow control or drop the packet if the packet buffer is congested.
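
A minimal sketch of that end-of-pipeline check, assuming two hypothetical watermark levels and widths, is:

module congestion_mgmt #(
    parameter UTIL_W = 16,
    parameter [UTIL_W-1:0] PAUSE_WM = 16'hC000,  // start flow control
    parameter [UTIL_W-1:0] DROP_WM  = 16'hE000   // start dropping
) (
    input  wire              clk,
    input  wire [UTIL_W-1:0] mem_util,   // current packet memory utilization
    output reg               flow_ctrl,  // assert flow control upstream
    output reg               drop        // drop the packet
);
    // Compare the watermarks against the current utilization each cycle.
    always @(posedge clk) begin
        flow_ctrl <= (mem_util >= PAUSE_WM);
        drop      <= (mem_util >= DROP_WM);
    end
endmodule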

[0034] The egress block 426 outputs the packet to an appropriate port of the communication subsystem 128. The egress block 426 may, for example, pull packets after modification from a packet memory and then output them to the appropriate port of the communication subsystem 128.

[0035] Referring back to FIG. 3, after generating the RTL description 212, in block 316 the computing device 100 compiles the RTL description 212 into the bitstream definition 214 for the FPGA 130. The computing device 100 may use an RTL toolchain to synthesize, place and route, optimize, and otherwise compile the RTL description 212 into the bitstream definition 214. Optimizing the RTL description 212 may improve the space, time, power, or other efficiency of the generated bitstream definition 214, but may not necessarily generate the mathematically optimal or otherwise most efficient bitstream definition 214 possible. Thus, after compilation and optimization, the bitstream definition 214 may include a reduced (or minimal) number of logic gates or other resources required to perform the network program. In some embodiments, in block 318, the computing device 100 may optimize the bitstream definition 214 by replacing RTL constructs with logic gates. In some embodiments, in block 320, the computing device 100 may optimize the bitstream definition 214 by replacing the TCAM+SRAM structure for each parse graph node with equivalent logic gates. Because most rules in a P4 source program 210 do not require the "ternary" match of the TCAM, most parser rules may be optimized into simple "if-then" state machine comparisons by the RTL compiler. This optimization may allow only the required parsing logic to be put into a fixed-function pipeline in the FPGA 130, removing all of the unused gates and logic. In some embodiments, in block 322, the computing device 100 may optimize the bitstream definition 214 by removing any remaining unused logic.
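
To illustrate the TCAM-to-gates optimization: an exact-match parser rule has an all-ones mask, so the masked ternary compare collapses to a plain equality test. The EtherType example below is hypothetical.

// Before optimization: a masked (ternary) compare against one TCAM row.
//   assign row_hit = ((key & row_mask) == (row_value & row_mask));
// After optimization: with row_mask all ones, the compare reduces to a
// simple if-then equality, e.g. recognizing the IPv4 EtherType.
module parse_rule_opt (
    input  wire [15:0] ethertype,
    output wire        is_ipv4
);
    assign is_ipv4 = (ethertype == 16'h0800);
endmodule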

[0036] In block 324, the computing device 100 determines whether to program the FPGA 130 with the bitstream definition 214. For example, a production computing device 100 may program the FPGA 130 in order to perform packet processing tasks. In particular, the FPGA 130 may be programmed to operate as an exact match cache (EMC) used with an Open vSwitch (OVS) software pipeline. After programming, the FPGA 130 could then be paired with a software control plane to have forwarding rules populated in the same fashion that an all-software pipeline would be populated. The FPGA 130 may fully mimic the behavior of the software pipeline it offloads. Of course, it should be understood that in some embodiments the bitstream definition 214 may be used to program a different computing device 100. If the computing device 100 determines not to program the FPGA 130, the method 300 loops back to block 302, in which the computing device 100 may analyze and compile another source program 210 (e.g., a different network application program, a new version of the application program, or other source program 210). If the computing device 100 determines to program the FPGA 130, the method 300 advances to block 326.

[0037] In block 326 the computing device 100 programs the FPGA 130 with the bitstream definition 214. The computing device 100 may use any appropriate technique to program or otherwise configure the logic blocks or other digital logic resources of the FPGA 130 with the bitstream definition 214. In some embodiments, in block 328, the computing device 100 may partially reconfigure an individual block of the FPGA 130 with a modified bitstream definition 214. For example, the computing device 100 may partially reconfigure only parts of the bitstream definition 214 that have changed between versions of the source program 210. In some embodiments, in block 330, the computing device 100 may program a backup block of the FPGA 130 and switch from an active block to the newly programmed backup block. In those embodiments, parts of the FPGA 130 may be reserved for backup blocks (e.g., a backup parser, a backup control block, etc.). The computing device 100 may program and switch to the backup blocks in order to support online reconfiguration. For example, referring again to FIG. 4, the illustrative FPGA 130 includes resources reserved for a backup parser 404b. In that example, the backup parser 404b may be reconfigured with a revised bitstream definition 214, and then the control block 410 may switch from the original parser 404 to the newly-programmed parser 404b, allowing for fast reconfiguration. Referring back to FIG. 3, after programming the FPGA 130, the method 300 loops back to block 302, in which the computing device 100 may analyze and compile another source program 210 (e.g., a different network application program, a new version of the application program, or other source program 210).

[0038] It should be appreciated that, in some embodiments, the method 300 may be embodied as various instructions stored on a computer-readable media, which may be executed by the processor 120, the FPGA 130, and/or other components of the computing device 100 to cause the computing device 100 to perform the method 300. The computer-readable media may be embodied as any type of media capable of being read by the computing device 100 including, but not limited to, the memory 124, the data storage device 126, firmware devices, other memory or data storage devices of the computing device 100, portable media readable by a peripheral device 132 of the computing device 100, and/or other media.

EXAMPLES

[0039] Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.

[0040] Example 1 includes a computing device for network application programming, the computing device comprising: one or more processors; and one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the computing device to: analyze a network application source program, wherein the network application source program comprises a declarative description of a network application in a domain-specific language; translate the declarative description of the network application into a register-transfer level description of the network application; and compile the register-transfer level description of the network application into a bitstream definition of the network application, wherein the bitstream is targeted to a field-programmable gate array.

[0041] Example 2 includes the subject matter of Example 1, and wherein the plurality of instructions, when executed, further cause the computing device to program the field-programmable gate array with the bitstream definition in response to compilation of the register-transfer level description.

[0042] Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the plurality of instructions, when executed, further cause the computing device to partially reconfigure the field-programmable gate array with the bitstream definition in response to compilation of the register-transfer level description.

[0043] Example 4 includes the subject matter of any of Examples 1-3, and wherein: the plurality of instructions, when executed, further cause the computing device to switch from an active block of the field-programmable gate array to a backup block of the field-programmable gate array in response to partial reconfiguration of the field-programmable gate array; and to partially reconfigure the field-programmable gate array comprises to program the backup block with the bitstream definition.

[0044] Example 5 includes the subject matter of any of Examples 1-4, and wherein to compile the register-transfer level description of the network application into a bitstream definition comprises to optimize the register-transfer level description of the network application.

[0045] Example 6 includes the subject matter of any of Examples 1-5, and wherein: to analyze the network application source program comprises to generate a parse graph based on the network application source program, wherein the parse graph comprises a plurality of nodes; and to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate a plurality of TCAM-SRAM structures, wherein each TCAM-SRAM structure corresponds to a node of the parse graph.

[0046] Example 7 includes the subject matter of any of Examples 1-6, and wherein to compile the register-transfer level description of the network application into the bitstream definition comprises to optimize the plurality of TCAM-SRAM structures to generate corresponding logic gates.

[0047] Example 8 includes the subject matter of any of Examples 1-7, and wherein: to analyze the network application source program comprises to determine a control flow of the network application, wherein the control flow is indicative of an order of match-action tables; and to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate multiplexer logic based on the control flow.

[0048] Example 9 includes the subject matter of any of Examples 1-8, and wherein: to analyze the network application source program comprises to determine a control flow of the network application, wherein the control flow is indicative of an order of match-action tables; and to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate congestion management logic based on the control flow.

[0049] Example 10 includes the subject matter of any of Examples 1-9, and wherein: to analyze the network application source program comprises to analyze a definition of a match-action table; and to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate a lookup block to access the match-action table in an external memory.

[0050] Example 11 includes the subject matter of any of Examples 1-10, and wherein to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate a modify block to apply one or more actions from the match-action table to a network packet.

[0051] Example 12 includes the subject matter of any of Examples 1-11, and wherein the computing device further comprises a network interface controller, and wherein the network interface controller comprises the field-programmable gate array.

[0052] Example 13 includes the subject matter of any of Examples 1-12, and wherein the computing device further comprises a multi-chip package, wherein the multi-chip package comprises a network interface controller and the field-programmable gate array.

[0053] Example 14 includes the subject matter of any of Examples 1-13, and wherein the computing device comprises a system-on-a-chip, wherein the system-on-a-chip comprises the one or more processors, the field-programmable gate array, and a network interface controller.

[0054] Example 15 includes a method for network application programming, the method comprising: analyzing, by a computing device, a network application source program, wherein the network application source program comprises a declarative description of a network application in a domain-specific language; translating, by the computing device, the declarative description of the network application into a register-transfer level description of the network application; and compiling, by the computing device, the register-transfer level description of the network application into a bitstream definition of the network application, wherein the bitstream is targeted to a field-programmable gate array.

[0055] Example 16 includes the subject matter of Example 15, and further comprising: programming, by the computing device, the field-programmable gate array with the bitstream definition in response to compiling the register-transfer level description.

[0056] Example 17 includes the subject matter of any of Examples 15 and 16, and further comprising partially reconfiguring, by the computing device, the field-programmable gate array with the bitstream definition in response to compiling the register-transfer level description.

[0057] Example 18 includes the subject matter of any of Examples 15-17, and further comprising: switching, by the computing device, from an active block of the field-programmable gate array to a backup block of the field-programmable gate array in response to partially reconfiguring the field-programmable gate array; wherein partially reconfiguring the field-programmable gate array comprises programming the backup block with the bitstream definition.

[0058] Example 19 includes the subject matter of any of Examples 15-18, and wherein compiling the register-transfer level description of the network application into a bitstream definition comprises optimizing the register-transfer level description of the network application.

[0059] Example 20 includes the subject matter of any of Examples 15-19, and wherein: analyzing the network application source program comprises generating a parse graph based on the network application source program, wherein the parse graph comprises a plurality of nodes; and translating the declarative description of the network application into the register-transfer level description of the network application comprises generating a plurality of TCAM-SRAM structures, wherein each TCAM-SRAM structure corresponds to a node of the parse graph.

[0060] Example 21 includes the subject matter of any of Examples 15-20, and wherein compiling the register-transfer level description of the network application into the bitstream definition comprises optimizing the plurality of TCAM-SRAM structures to generate corresponding logic gates.

[0061] Example 22 includes the subject matter of any of Examples 15-21, and wherein: analyzing the network application source program comprises determining a control flow of the network application, wherein the control flow is indicative of an order of match-action tables; and translating the declarative description of the network application into the register-transfer level description of the network application comprises generating multiplexer logic based on the control flow.

[0062] Example 23 includes the subject matter of any of Examples 15-22, and wherein: analyzing the network application source program comprises determining a control flow of the network application, wherein the control flow is indicative of an order of match-action tables; and translating the declarative description of the network application into the register-transfer level description of the network application comprises generating congestion management logic based on the control flow.

[0063] Example 24 includes the subject matter of any of Examples 15-23, and wherein: analyzing the network application source program comprises analyzing a definition of a match-action table; and translating the declarative description of the network application into the register-transfer level description of the network application comprises generating a lookup block to access the match-action table in an external memory.

[0064] Example 25 includes the subject matter of any of Examples 15-24, and wherein translating the declarative description of the network application into the register-transfer level description of the network application comprises generating a modify block to apply one or more actions from the match-action table to a network packet.

[0065] Example 26 includes the subject matter of any of Examples 15-25, and wherein the computing device further comprises a network interface controller, and wherein the network interface controller comprises the field-programmable gate array.

[0066] Example 27 includes the subject matter of any of Examples 15-26, and wherein the computing device further comprises a multi-chip package, wherein the multi-chip package comprises a network interface controller and the field-programmable gate array.

[0067] Example 28 includes the subject matter of any of Examples 15-27, and wherein the computing device comprises a system-on-a-chip, wherein the system-on-a-chip comprises the one or more processors, the field-programmable gate array, and a network interface controller.

[0068] Example 29 includes a computing device comprising: a processor; and a memory having stored therein a plurality of instructions that when executed by the processor cause the computing device to perform the method of any of Examples 15-28.

[0069] Example 30 includes one or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device performing the method of any of Examples 15-28.

[0070] Example 31 includes a computing device comprising means for performing the method of any of Examples 15-28.

[0071] Example 32 includes a computing device for network application programming, the computing device comprising: source analyzer circuitry to analyze a network application source program, wherein the network application source program comprises a declarative description of a network application in a domain-specific language; translator circuitry to translate the declarative description of the network application into a register-transfer level description of the network application; and compiler circuitry to compile the register-transfer level description of the network application into a bitstream definition of the network application, wherein the bitstream is targeted to a field-programmable gate array.

[0072] Example 33 includes the subject matter of Example 32, and further comprising programmer circuitry to program the field-programmable gate array with the bitstream definition in response to compilation of the register-transfer level description.

[0073] Example 34 includes the subject matter of any of Examples 32 and 33, and further comprising programmer circuitry to partially reconfigure the field-programmable gate array with the bitstream definition in response to compilation of the register-transfer level description.

[0074] Example 35 includes the subject matter of any of Examples 32-34, and further comprising: programmer circuitry to switch from an active block of the field-programmable gate array to a backup block of the field-programmable gate array in response to partial reconfiguration of the field-programmable gate array; wherein to partially reconfigure the field-programmable gate array comprises to program the backup block with the bitstream definition.

[0075] Example 36 includes the subject matter of any of Examples 32-35, and wherein to compile the register-transfer level description of the network application into a bitstream definition comprises to optimize the register-transfer level description of the network application.

[0076] Example 37 includes the subject matter of any of Examples 32-36, and wherein: to analyze the network application source program comprises to generate a parse graph based on the network application source program, wherein the parse graph comprises a plurality of nodes; and to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate a plurality of TCAM-SRAM structures, wherein each TCAM-SRAM structure corresponds to a node of the parse graph.

[0077] Example 38 includes the subject matter of any of Examples 32-37, and wherein to compile the register-transfer level description of the network application into the bitstream definition comprises to optimize the plurality of TCAM-SRAM structures to generate corresponding logic gates.
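
A minimal sketch of the optimization in Example 38 follows: because the parse graph is fixed at compile time, each TCAM's contents are compile-time constants, so the compiler may replace the memory with plain combinational logic. The emitted Verilog-style text below is illustrative only; a real optimizer would hand equivalent netlist structures to synthesis.

```python
# Fold a constant TCAM into combinational match logic (illustrative output).
def fold_tcam_to_gates(entries):
    """entries: list of (value, mask, next_state_id); returns Verilog-style text."""
    branches = [
        f"if ((field & 16'h{mask:04X}) == 16'h{value:04X}) next_state = {nxt};"
        for value, mask, nxt in entries
    ]
    chain = "\n        else ".join(branches)
    return (
        "always @(*) begin\n"
        "        next_state = STATE_DONE;  // default: no entry matched\n"
        f"        {chain}\n"
        "end"
    )

print(fold_tcam_to_gates([(0x0800, 0xFFFF, 1), (0x86DD, 0xFFFF, 2)]))
```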

[0078] Example 39 includes the subject matter of any of Examples 32-38, and wherein: to analyze the network application source program comprises to determine a control flow of the network application, wherein the control flow is indicative of an order of match-action tables; and to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate multiplexer logic based on the control flow.
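
For illustration (signal and table names invented), the multiplexer generation of Example 39 can be sketched as emitting one 2:1 mux per branch point in the control flow, steering the packet bus between match-action stages:

```python
# Emit one 2:1 multiplexer per control-flow branch point (names invented).
def emit_stage_muxes(flow):
    """flow: list of (condition, table_if_true, table_if_false) tuples."""
    return "\n".join(
        f"assign stage{i}_pkt = {cond} ? {t}_out : {f}_out;  // 2:1 packet mux"
        for i, (cond, t, f) in enumerate(flow)
    )

print(emit_stage_muxes([
    ("hdr_ipv4_valid", "ipv4_lpm", "ipv6_lpm"),
    ("acl_hit",        "acl",      "bypass"),
]))
```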

[0079] Example 40 includes the subject matter of any of Examples 32-39, and wherein: to analyze the network application source program comprises to determine a control flow of the network application, wherein the control flow is indicative of an order of match-action tables; and to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate congestion management logic based on the control flow.
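
One plausible form of the congestion management logic in Example 40, modeled here purely as an assumption for illustration, is a credit counter at each stage boundary that asserts backpressure when the downstream match-action stage cannot accept another packet:

```python
# Software model of credit-based backpressure between pipeline stages.
class CreditGate:
    def __init__(self, credits: int):
        self.credits = credits  # free slots in the downstream stage's FIFO

    def try_send(self) -> bool:
        """Upstream may forward a packet only while credits remain."""
        if self.credits == 0:
            return False         # assert backpressure: stall the stage
        self.credits -= 1
        return True

    def on_downstream_drain(self) -> None:
        self.credits += 1        # downstream consumed a packet; return credit

gate = CreditGate(credits=2)
assert gate.try_send() and gate.try_send()
assert not gate.try_send()       # third packet is held back
gate.on_downstream_drain()
assert gate.try_send()           # credit returned, flow resumes
```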

[0080] Example 41 includes the subject matter of any of Examples 32-40, and wherein: to analyze the network application source program comprises to analyze a definition of a match-action table; and to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate a lookup block to access the match-action table in an external memory.
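
The lookup block of Example 41 can be sketched as follows; the hashing scheme, bucket layout, and memory interface are hypothetical stand-ins for the generated RTL's behavior when a match-action table is kept in external memory rather than on-chip:

```python
# Model of a lookup block addressing a match-action table in external memory.
import hashlib

class ExternalTableLookup:
    def __init__(self, mem: dict, buckets: int):
        self.mem = mem        # models external DRAM holding (key, action) rows
        self.buckets = buckets

    def _address(self, key: bytes) -> int:
        """Hash the match key down to a bucket address in external memory."""
        return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % self.buckets

    def lookup(self, key: bytes):
        """Issue the external read, then verify the stored key (collisions)."""
        row = self.mem.get(self._address(key))
        if row is not None and row[0] == key:
            return row[1]     # action data handed to the modify block
        return None           # miss: fall through to the table's default action

table = ExternalTableLookup(mem={}, buckets=1024)
key = b"\x0a\x00\x00\x01"                       # e.g., destination IP 10.0.0.1
table.mem[table._address(key)] = (key, "forward_port_3")
assert table.lookup(key) == "forward_port_3"
```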

[0081] Example 42 includes the subject matter of any of Examples 32-41, and wherein to translate the declarative description of the network application into the register-transfer level description of the network application comprises to generate a modify block to apply one or more actions from the match-action table to a network packet.
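
A minimal sketch of the modify block in Example 42 is shown below; the action vocabulary (set_field, dec_ttl) is invented for illustration, standing in for whatever actions the match-action table returns:

```python
# Apply the action list returned by a table lookup to the packet's headers.
def apply_actions(packet: dict, actions: list) -> dict:
    """Apply each (op, *args) action to the packet's header fields in order."""
    for op, *args in actions:
        if op == "set_field":
            field_name, value = args
            packet[field_name] = value     # e.g., rewrite the destination MAC
        elif op == "dec_ttl":
            packet["ttl"] -= 1             # standard IPv4 forwarding step
        else:
            raise ValueError(f"unknown action {op!r}")
    return packet

pkt = {"dst_mac": "aa:bb:cc:dd:ee:ff", "ttl": 64}
apply_actions(pkt, [("set_field", "dst_mac", "00:11:22:33:44:55"), ("dec_ttl",)])
assert pkt == {"dst_mac": "00:11:22:33:44:55", "ttl": 63}
```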

[0082] Example 43 includes the subject matter of any of Examples 32-42, and wherein the computing device further comprises a network interface controller, and wherein the network interface controller comprises the field-programmable gate array.

[0083] Example 44 includes the subject matter of any of Examples 32-43, and wherein the computing device further comprises a multi-chip package, wherein the multi-chip package comprises a network interface controller and the field-programmable gate array.

[0084] Example 45 includes the subject matter of any of Examples 32-44, and wherein the computing device comprises a system-on-a-chip, wherein the system-on-a-chip comprises the one or more processors, the field-programmable gate array, and a network interface controller.

[0085] Example 46 includes a computing device for network application programming, the computing device comprising: means for analyzing a network application source program, wherein the network application source program comprises a declarative description of a network application in a domain-specific language; means for translating the declarative description of the network application into a register-transfer level description of the network application; and means for compiling the register-transfer level description of the network application into a bitstream definition of the network application, wherein the bitstream is targeted to a field-programmable gate array.

[0086] Example 47 includes the subject matter of Example 46, and further comprising: means for programming the field-programmable gate array with the bitstream definition in response to compiling the register-transfer level description.

[0087] Example 48 includes the subject matter of any of Examples 46 and 47, and further comprising means for partially reconfiguring the field-programmable gate array with the bitstream definition in response to compiling the register-transfer level description.

[0088] Example 49 includes the subject matter of any of Examples 46-48, and further comprising: means for switching from an active block of the field-programmable gate array to a backup block of the field-programmable gate array in response to partially reconfiguring the field-programmable gate array; wherein the means for partially reconfiguring the field-programmable gate array comprises means for programming the backup block with the bitstream definition.

[0089] Example 50 includes the subject matter of any of Examples 46-49, and wherein the means for compiling the register-transfer level description of the network application into a bitstream definition comprises means for optimizing the register-transfer level description of the network application.

[0090] Example 51 includes the subject matter of any of Examples 46-50, and wherein: the means for analyzing the network application source program comprises means for generating a parse graph based on the network application source program, wherein the parse graph comprises a plurality of nodes; and the means for translating the declarative description of the network application into the register-transfer level description of the network application comprises means for generating a plurality of TCAM-SRAM structures, wherein each TCAM-SRAM structure corresponds to a node of the parse graph.

[0091] Example 52 includes the subject matter of any of Examples 46-51, and wherein the means for compiling the register-transfer level description of the network application into the bitstream definition comprises means for optimizing the plurality of TCAM-SRAM structures to generate corresponding logic gates.

[0092] Example 53 includes the subject matter of any of Examples 46-52, and wherein: the means for analyzing the network application source program comprises means for determining a control flow of the network application, wherein the control flow is indicative of an order of match-action tables; and the means for translating the declarative description of the network application into the register-transfer level description of the network application comprises means for generating multiplexer logic based on the control flow.

[0093] Example 54 includes the subject matter of any of Examples 46-53, and wherein: the means for analyzing the network application source program comprises means for determining a control flow of the network application, wherein the control flow is indicative of an order of match-action tables; and the means for translating the declarative description of the network application into the register-transfer level description of the network application comprises means for generating congestion management logic based on the control flow.

[0094] Example 55 includes the subject matter of any of Examples 46-54, and wherein: the means for analyzing the network application source program comprises means for analyzing a definition of a match-action table; and the means for translating the declarative description of the network application into the register-transfer level description of the network application comprises means for generating a lookup block to access the match-action table in an external memory.

[0095] Example 56 includes the subject matter of any of Examples 46-55, and wherein the means for translating the declarative description of the network application into the register-transfer level description of the network application comprises means for generating a modify block to apply one or more actions from the match-action table to a network packet.

[0096] Example 57 includes the subject matter of any of Examples 46-56, and wherein the computing device further comprises a network interface controller, and wherein the network interface controller comprises the field-programmable gate array.

[0097] Example 58 includes the subject matter of any of Examples 46-57, and wherein the computing device further comprises a multi-chip package, wherein the multi-chip package comprises a network interface controller and the field-programmable gate array.

[0098] Example 59 includes the subject matter of any of Examples 46-58, and wherein the computing device comprises a system-on-a-chip, wherein the system-on-a-chip comprises the one or more processors, the field-programmable gate array, and a network interface controller.