


Title:
HARDWARE-BASED TRANSACTION EXCHANGE
Document Type and Number:
WIPO Patent Application WO/2021/252423
Kind Code:
A1
Abstract:
A system may include a field programmable gate array (FPGA) based gateway comprising: a network interface configured to receive data packets containing proposed transactions, and validation logic circuitry configured to validate one or more headers or application-layer of the data packets in accordance with filter rules. The system may also include an FPGA based router comprising: a network interface configured to receive the data packets from the gateway, and parsing and lookup circuitry configured to compare the header field or application-layer field values in the data packets to those in a forwarding table. The system may also include an FPGA based matching engine comprising: a network interface configured to receive the data packets from the router, transaction validation circuitry configured to validate the proposed transactions based on information from state memory and policies, and matching algorithm circuitry configured to match pairs of proposed transactions according to pre-determined criteria.

Inventors:
FRIEDMAN SETH (JP)
GRYTA ALEXIS NICOLAS (JP)
GIBRALTA THIERRY (JP)
Application Number:
PCT/US2021/036299
Publication Date:
December 16, 2021
Filing Date:
June 08, 2021
Assignee:
LIQUID MARKETS HOLDINGS INCORPORATED (US)
International Classes:
H04L45/74
Foreign References:
US20170249608A12017-08-31
US20180019943A12018-01-18
US20190268141A12019-08-29
US20170365002A12017-12-21
JP2007048280A2007-02-22
Attorney, Agent or Firm:
BORELLA, Michael S. (US)
Claims:
CLAIMS What is claimed is: 1. A system comprising: a field programmable gate array (FPGA) based gateway comprising: (i) a first network interface configured to receive data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects, (ii) validation filter memory configured to store filter rules, and (iii) a sequence of validation logic circuitry configured to validate one or more headers or application-layer of the data packets in accordance with the filter rules, wherein at least one of the sequence of validation logic circuitry is configured to validate a header of a respective data packet while another portion of the respective data packet is being received by the first network interface, and wherein the validation logic circuitry is further configured to discard any data packets that do not conform to the filter rules; an FPGA based router comprising: (i) a second network interface configured to receive the data packets that were validated from the FPGA based gateway, (ii) a forwarding table with entries mapping header field or application-layer field values to destination addresses, and (iii) parsing and lookup circuitry configured to compare the header field or application-layer field values in the data packets to those in the forwarding table and determine one of the destination addresses for each of the data packets; and an FPGA based matching engine comprising: (i) a third network interface configured to receive the data packets from the FPGA based router, (ii) state memory containing information related to the proposed transactions, (iii) policy memory containing policies to be applied to the proposed transactions, (iv) transaction validation circuitry configured to validate the proposed transactions based on their related information from the state memory and the policies, wherein the transaction validation circuitry is further configured to discard any data packets containing proposed transactions that do not conform to the policies, (v) pending transaction memory configured to store proposed transactions that were validated, and (vi) matching algorithm circuitry configured to match pairs of pending transactions according to pre-determined criteria. 2. The system of claim 1, wherein the sequence of validation logic circuitry is configured to validate Ethernet headers Internet Protocol (IP) headers Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) headers, and the application-layer of the data packets as bits from these headers arrive at the FPGA based gateway. 3. The system of claim 1, wherein the filter rules identify field values of the one or more headers or application-layer of the data packets that are blacklisted, and wherein the sequence of validation logic circuitry is configured to discard the data packets that match the field values that are blacklisted. 4. The system of claim 1, wherein the filter rules identify field values of the one or more headers or application-layer of the data packets that are whitelisted, and wherein the sequence of validation logic circuitry is configured to discard the data packets that do not match the field values that are whitelisted. 5. The system of claim 1, wherein the FPGA based gateway also includes log memory, and wherein the sequence of validation logic circuitry is configured to write information from the data packets that fail validation into the log memory. 6. 
The system of claim 1, wherein the FPGA based gateway is further configured to replace Ethernet headers of the data packets that were validated with new Ethernet headers containing destination Ethernet addresses specifying the FPGA based router. 7. The system of claim 1, wherein the header field or application-layer field values in the forwarding table indicate the transaction subjects of the data packets, and wherein the parsing and lookup circuitry is also configured to compare the transaction subjects to those in the forwarding table. 8. The system of claim 1, wherein the header field or application-layer field values in the forwarding table indicate a combination of a source identifier and the transaction subjects of the data packets, and wherein the parsing and lookup circuitry is also configured to compare the combination of the source identifier and transaction subjects to those in the forwarding table.

9. The system of claim 1, wherein the FPGA based router includes a bus connection to a host system, and wherein the FPGA based router is also configured to receive updates to the forwarding table by way of the bus connection. 10. The system of claim 1, wherein the FPGA based router is further configured to replace the destination addresses of the data packets that were determined by way of the forwarding table with new destination Ethernet addresses specifying the FPGA based matching engine. 11. The system of claim 1, wherein the state memory contains information related to previously fulfilled transactions involving transaction subjects or source identifiers in common with those of at least some of the data packets, and wherein the source identifiers specify entities originating the proposed transactions. 12. The system of claim 11, wherein the policies to be applied to the proposed transactions are formulated as rules, and wherein the rules specify limits to total amounts of certain transaction subjects that can be fulfilled within a predetermined period of time. 13. The system of claim 11, wherein the policies to be applied to the proposed transactions are formulated as rules, and wherein the rules specify limits to total amounts of certain transaction subjects per particular instances of the source identifiers that can be fulfilled within a predetermined period of time. 14. The system of claim 1, wherein the FPGA based matching engine also includes log memory, and wherein the transaction validation circuitry is configured to write information from the data packets that fail validation into the log memory. 15. The system of claim 1, wherein parts of the FPGA based matching engine support virtualization into slices, wherein the pending transaction memory stores proposed transactions that were validated with their associated slices, and wherein the matching algorithm circuitry is further configured to only match pairs of pending transactions with a common associated slice.

16. The system of claim 15, wherein the FPGA based matching engine also includes lookup table memory configured with mappings of the transaction subjects to the slices or with combinations of transaction subjects and source identifiers to the slices, and wherein the source identifiers specify entities originating the proposed transactions. 17. The system of claim 16, wherein the FPGA based matching engine also includes lookup circuitry configured to retrieve the slices associated with data packets from the lookup table memory and provide the slices retrieved to the pending transaction memory or the matching algorithm circuitry. 18. The system of claim 1, wherein the FPGA based matching engine is also configured to transmit receipt confirmations for the proposed transactions that were validated but have not yet been matched. 19. The system of claim 1, wherein the FPGA based matching engine is also configured to transmit fulfilment confirmations for the proposed transactions that were matched. 20. A method comprising: receiving, by way of a first network interface of a field programmable gate array (FPGA) based gateway, data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects; validating, by way of a sequence of validation logic circuitry of the FPGA based gateway, one or more headers or application-layer of the data packets in accordance with filter rules, wherein at least one of the sequence of validation logic circuitry is configured to validate a header of a respective data packet while another portion of the respective data packet is being received by the first network interface, and wherein the validation logic circuitry is further configured to discard any data packets that do not conform to the filter rules; receiving, by way of a second network interface of an FPGA based router, the data packets that were validated from the FPGA based gateway; comparing, by way of a parsing and lookup circuitry of the FPGA based router, header field or application-layer field values in the data packets to those in a forwarding table addresses, wherein the comparing determines one of the destination addresses for each of the data packets; receiving, by way of a third network interface of an FPGA based matching engine, the data packets from the FPGA based router; validating, by way of transaction validation circuitry of the FPGA based matching engine, the proposed transactions based on their related information from state memory and policies to be applied to the proposed transactions, wherein the transaction validation circuitry is further configured to discard any data packets containing proposed transactions that do not conform to the policies; and matching, by way of matching algorithm circuitry of the FPGA based matching engine, pairs of pending transactions according to pre-determined criteria, wherein the pending transactions are proposed transactions that were validated stored in pending transaction memory. 21. The method of claim 20, wherein the sequence of validation logic circuitry is configured to validate Ethernet headers, Internet Protocol (IP) headers, Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) headers, and the application-layer of the data packets as bits from these headers arrive at the FPGA based gateway. 22. 
The method of claim 20, wherein the filter rules identify field values of the one or more headers or application-layer of the data packets that are blacklisted, and wherein the sequence of validation logic circuitry is configured to discard the data packets that match the field values that are blacklisted. 23. The method of claim 20, wherein the filter rules identify field values of the one or more headers or application-layer of the data packets that are whitelisted, and wherein the sequence of validation logic circuitry is configured to discard the data packets that do not match the field values that are whitelisted. 24. The method of claim 20, wherein the FPGA based gateway also includes log memory, and wherein the sequence of validation logic circuitry is configured to write information from the data packets that fail validation into the log memory.

25. The method of claim 20, wherein the FPGA based gateway is further configured to replace Ethernet headers of the data packets that were validated with new Ethernet headers containing destination Ethernet addresses specifying the FPGA based router. 26. The method of claim 20, wherein the header field or application-layer field values in the forwarding table indicate the transaction subjects of the data packets, and wherein the parsing and lookup circuitry is also configured to compare the transaction subjects to those in the forwarding table. 27. The method of claim 20, wherein the header field or application-layer field values in the forwarding table indicate a combination of a source identifier and the transaction subjects of the data packets, and wherein the parsing and lookup circuitry is also configured to compare the combination of the source identifier and transaction subjects to those in the forwarding table. 28. The method of claim 20, wherein the FPGA based router includes a bus connection to a host system, and wherein the FPGA based router is also configured to receive updates to the forwarding table by way of the bus connection. 29. The method of claim 20, wherein the FPGA based router is further configured to replace the destination addresses of the data packets that were determined by way of the forwarding table with new destination Ethernet addresses specifying the FPGA based matching engine. 30. The method of claim 20, wherein the state memory contains information related previously fulfilled transactions involving transaction subjects or source identifiers in common with those of at least some of the data packets, and wherein the source identifiers specify entities originating the proposed transactions. 31. The method of claim 30, wherein the policies to be applied to the proposed transactions are formulated as rules, and wherein the rules specify limits to total amounts of certain transaction subjects that can be fulfilled within a predetermined period of time

32. The method of claim 30, wherein the policies to be applied to the proposed transactions are formulated as rules, and wherein the rules specify limits to total amounts of certain transaction subjects per particular instances of the source identifiers that can be fulfilled within a predetermined period of time. 33. The method of claim 20, wherein the FPGA based matching engine also includes log memory, and wherein the transaction validation circuitry is configured to write information from the data packets that fail validation into the log memory. 34. The method of claim 20, wherein parts of the FPGA based matching engine support virtualization into slices, wherein the pending transaction memory stores proposed transactions that were validated with their associated slices, and wherein the matching algorithm circuitry is further configured to only match pairs of pending transactions with a common associated slice. 35. The method of claim 34, wherein the FPGA based matching engine also includes lookup table memory configured with mappings of the transaction subjects to the slices or with combinations of transaction subjects and source identifiers to the slices, and wherein the source identifiers specify entities originating the proposed transactions. 36. The method of claim 35, wherein the FPGA based matching engine also includes lookup circuitry configured to retrieve the slices associated with data packets from the lookup table memory and provide the slices retrieved to the pending transaction memory or the matching algorithm circuitry. 37. The method of claim 20, wherein the FPGA based matching engine is also configured to transmit receipt confirmations for the proposed transactions that were validated but have not yet been matched. 38. The method of claim 20, wherein the FPGA based matching engine is also configured to transmit fulfilment confirmations for the proposed transactions that were matched. 39. A field programmable gate array (FPGA) based gateway comprising: a network interface configured to receive data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects; validation filter memory configured to store filter rules; and a sequence of validation logic circuitry configured to validate one or more headers or application-layer of the data packets in accordance with the filter rules, wherein at least one of the sequence of validation logic circuitry is configured to validate a header of a respective data packet while another portion of the respective data packet is being received by the network interface, and wherein the validation logic circuitry is further configured to discard any data packets that do not conform to the filter rules. 40. The FPGA based gateway of claim 39, wherein the sequence of validation logic circuitry is configured to validate Ethernet headers, Internet Protocol (IP) headers, Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) headers, and the application-layer of the data packets as bits from these headers arrive at the FPGA based gateway. 41. The FPGA based gateway of claim 39, wherein the filter rules identify field values of the one or more headers or application-layer of the data packets that are blacklisted, and wherein the sequence of validation logic circuitry is configured to discard the data packets that match the field values that are blacklisted. 42. 
The FPGA based gateway of claim 39, wherein the filter rules identify field values of the one or more headers or application-layer of the data packets that are whitelisted, and wherein the sequence of validation logic circuitry is configured to discard the data packets that do not match the field values that are whitelisted. 43. The FPGA based gateway of claim 39, wherein the FPGA based gateway also includes log memory, and wherein the sequence of validation logic circuitry is configured to write information from the data packets that fail validation into the log memory. 44. The FPGA based gateway of claim 39, wherein the FPGA based gateway is further configured to replace Ethernet headers of the data packets that were validated with new Ethernet headers containing destination Ethernet addresses specifying an FPGA based router and to transmit these data packets to the FPGA based router. 45. The FPGA based gateway of claim 44, wherein the FPGA based router is configured to route at least some of the data packets that it receives to an FPGA based matching engine. 46. A method comprising: receiving, by way of a network interface of a field programmable gate array (FPGA) based gateway, data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects; and validating, by way of a sequence of validation logic circuitry of the FPGA based gateway, one or more headers or application-layer of the data packets in accordance with filter rules, wherein at least one of the sequence of validation logic circuitry is configured to validate a header of a respective data packet while another portion of the respective data packet is being received by the network interface, and wherein the validation logic circuitry is further configured to discard any data packets that do not conform to the filter rules. 47. The method of claim 46, wherein the sequence of validation logic circuitry is configured to validate Ethernet headers, Internet Protocol (IP) headers, Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) headers, and the application-layer of the data packets as bits from these headers arrive at the FPGA based gateway. 48. The method of claim 46, wherein the filter rules identify field values of the one or more headers or application-layer of the data packets that are blacklisted, and wherein the sequence of validation logic circuitry is configured to discard the data packets that match the field values that are blacklisted. 49. The method of claim 46, wherein the filter rules identify field values of the one or more headers or application-layer of the data packets that are whitelisted, and wherein the sequence of validation logic circuitry is configured to discard the data packets that do not match the field values that are whitelisted.

50. The method of claim 46, wherein the FPGA based gateway also includes log memory, and wherein the sequence of validation logic circuitry is configured to write information from the data packets that fail validation into the log memory. 51. The method of claim 46, wherein the FPGA based gateway is further configured to replace Ethernet headers of the data packets that were validated with new Ethernet headers containing destination Ethernet addresses specifying an FPGA based router and to transmit these data packets to the FPGA based router. 52. A field programmable gate array (FPGA) based router comprising: a network interface configured to receive data packets that were validated by an FPGA based gateway; a forwarding table with entries mapping header field or application-layer field values to destination addresses; and parsing and lookup circuitry configured to compare the header field or application- layer field values in the data packets to those in the forwarding table and determine one of the destination addresses for each of the data packets. 53. The FPGA based router of claim 52, wherein the header field or application- layer field values in the forwarding table indicate the transaction subjects of the data packets, and wherein the parsing and lookup circuitry is also configured to compare the transaction subjects to those in the forwarding table. 54. The FPGA based router of claim 52, wherein the header field or application- layer field values in the forwarding table indicate a combination of a source identifier and the transaction subjects of the data packets, and wherein the parsing and lookup circuitry is also configured to compare the combination of the source identifier and transaction subjects to those in the forwarding table. 55. The FPGA based router of claim 52, wherein the FPGA based router includes a bus connection to a host system, and wherein the FPGA based router is also configured to receive updates to the forwarding table by way of the bus connection.

56. The FPGA based router of claim 52, wherein the FPGA based router is further configured to replace the destination addresses of the data packets that were determined by way of the forwarding table with new destination Ethernet addresses specifying an FPGA based matching engine and to and to transmit these data packets to the FPGA based matching engine. 57. A method comprising: receiving, by way of a network interface of a field programmable gate array (FPGA) based router, data packets that were validated by an FPGA based gateway; and comparing, by way of a parsing and lookup circuitry of the FPGA based router, header field or application-layer field values in the data packets to those in a forwarding table with entries mapping the header field or application-layer field values to destination addresses, wherein the comparing determines one of the destination addresses for each of the data packets. 58. The method of claim 57, wherein the header field or application-layer field values in the forwarding table indicate the transaction subjects of the data packets, and wherein the parsing and lookup circuitry is also configured to compare the transaction subjects to those in the forwarding table. 59. The method of claim 57, wherein the header field or application-layer field values in the forwarding table indicate a combination of a source identifier and the transaction subjects of the data packets, and wherein the parsing and lookup circuitry is also configured to compare the combination of the source identifier and transaction subjects to those in the forwarding table. 60. The method of claim 57, wherein the FPGA based router includes a bus connection to a host system, and wherein the FPGA based router is also configured to receive updates to the forwarding table by way of the bus connection. 61. The method of claim 57, wherein the FPGA based router is further configured to replace the destination addresses of the data packets that were determined by way of the forwarding table with new destination Ethernet addresses specifying an FPGA based matching engine and to and to transmit these data packets to the FPGA based matching engine. 62. A field programmable gate array (FPGA) based matching engine comprising: a network interface configured to receive data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects; state memory containing information related to the proposed transactions; policy memory containing policies to be applied to the proposed transactions; transaction validation circuitry configured to validate the proposed transactions based on their related information from the state memory and the policies, wherein the transaction validation circuitry is further configured to discard any data packets containing proposed transactions that do not conform to the policies; pending transaction memory configured to store proposed transactions that were validated; and matching algorithm circuitry configured to match pairs of pending transactions according to pre-determined criteria. 63. The FPGA based matching engine of claim 62, wherein the state memory contains information related to previously fulfilled transactions involving transaction subjects or source identifiers in common with those of at least some of the data packets, and wherein the source identifiers specify entities originating the proposed transactions. 64. 
The FPGA based matching engine of claim 62, wherein the policies to be applied to the proposed transactions are formulated as rules, and wherein the rules specify limits to total amounts of certain transaction subjects that can be fulfilled within a predetermined period of time. 65. The FPGA based matching engine of claim 62, wherein the policies to be applied to the proposed transactions are formulated as rules, and wherein the rules specify limits to total amounts of certain transaction subjects per particular instances of the source identifiers that can be fulfilled within a predetermined period of time. 66. The FPGA based matching engine of claim 62, further comprising: log memory, wherein the transaction validation circuitry is configured to write information from the data packets that fail validation into the log memory. 67. The FPGA based matching engine of claim 62, wherein parts of the FPGA based matching engine support virtualization into slices, wherein the pending transaction memory stores proposed transactions that were validated with their associated slices, and wherein the matching algorithm circuitry is further configured to only match pairs of pending transactions with a common associated slice. 68. The FPGA based matching engine of claim 67, further comprising: lookup table memory configured with mappings of the transaction subjects to the slices or with combinations of transaction subjects and source identifiers to the slices, wherein the source identifiers specify entities originating the proposed transactions. 69. The FPGA based matching engine of claim 68, further comprising: lookup circuitry configured to retrieve the slices associated with data packets from the lookup table memory and provide the slices retrieved to the pending transaction memory or the matching algorithm circuitry. 70. The FPGA based matching engine of claim 62, wherein the FPGA based matching engine is also configured to transmit receipt confirmations for the proposed transactions that were validated but have not yet been matched. 71. The FPGA based matching engine of claim 62, wherein the FPGA based matching engine is also configured to transmit fulfilment confirmations for the proposed transactions that were matched. 72. A method comprising: receiving, by way of a network interface of a field programmable gate array (FPGA) based matching engine, data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects; validating, by way of transaction validation circuitry of the FPGA based matching engine, the proposed transactions based on their related information from state memory and policies to be applied to the proposed transactions, wherein the transaction validation circuitry is further configured to discard any data packets containing proposed transactions that do not conform to the policies; and matching, by way of matching algorithm circuitry of the FPGA based matching engine, pairs of pending transactions according to pre-determined criteria, wherein the pending transactions are proposed transactions that were validated and stored in pending transaction memory. 73. The method of claim 72, wherein the state memory contains information related to previously fulfilled transactions involving transaction subjects or source identifiers in common with those of at least some of the data packets, and wherein the source identifiers specify entities originating the proposed transactions. 74. 
The method of claim 72, wherein the policies to be applied to the proposed transactions are formulated as rules, and wherein the rules specify limits to total amounts of certain transaction subjects that can be fulfilled within a predetermined period of time. 75. The method of claim 72, wherein the policies to be applied to the proposed transactions are formulated as rules, and wherein the rules specify limits to total amounts of certain transaction subjects per particular instances of the source identifiers that can be fulfilled within a predetermined period of time. 76. The method of claim 72, wherein the FPGA based matching engine also contains log memory, and wherein the transaction validation circuitry is configured to write information from the data packets that fail validation into the log memory. 77. The method of claim 72, wherein parts of the FPGA based matching engine support virtualization into slices, wherein the pending transaction memory stores proposed transactions that were validated with their associated slices, and wherein the matching algorithm circuitry is further configured to only match pairs of pending transactions with a common associated slice. 78. The method of claim 77, wherein the FPGA based matching engine also contains lookup table memory configured with mappings of the transaction subjects to the slices or with combinations of transaction subjects and source identifiers to the slices, wherein the source identifiers specify entities originating the proposed transactions. 79. The method of claim 78, wherein the FPGA based matching engine also contains lookup circuitry configured to retrieve the slices associated with data packets from the lookup table memory and provide the slices retrieved to the pending transaction memory or the matching algorithm circuitry. 80. The method of claim 72, wherein the FPGA based matching engine is also configured to transmit receipt confirmations for the proposed transactions that were validated but have not yet been matched. 81. The method of claim 72, wherein the FPGA based matching engine is also configured to transmit fulfilment confirmations for the proposed transactions that were matched.

Description:
Hardware-Based Transaction Exchange CROSS-REFERENCE TO RELATED APPLICATIONS [001] This application claims priority to U.S. provisional patent application no. 63/035,993, filed June 8, 2020, which is hereby incorporated by reference in its entirety. BACKGROUND [002] Transaction exchanges include small or large computing systems that can be used to conduct transactions between two or more entities. These entities may transmit proposed transactions to an exchange, and the exchange may match the proposed transactions based on various criteria. Matched transactions may be carried out or otherwise fulfilled. A goal for an exchange is to be able to accurately carry out a large number of transactions per second. [003] Conventional and current exchanges operate in software. In previous years, when network speeds were slow, the networks were the bottleneck to attaining this goal. But with the deployment of gigabit and 10 gigabit Ethernet, as well as routers and switches that can operate at commensurate line speeds, the exchange software is now the bottleneck. The discrepancy in network versus exchange software speeds has become so great that proposals have been made to artificially slow down the network equipment (with what are colloquially referred to as “speed bumps”) in order to prevent software exchanges from being overloaded.

SUMMARY [004] The embodiments herein overcome these and potentially other deficiencies with a custom hardware architecture that performs exchange functions that were previously performed in software. In order to scale the exchange to a point where it can keep up with the offered volume of transactions arriving via networks, custom field programmable gate arrays (FPGAs) are used to process data packets at high speed. [005] Particularly, exchange functionality is distributed onto three different types of components. A gateway component serves as ingress to the exchange and performs network protocol and application layer validation of proposed transactions. Any data packets containing invalid headers or proposed transactions are discarded. Data packets containing valid headers and proposed transactions are passed on to a routing component that directs the data packets, based on their transaction data, to one of several possible matching engine components. Each matching engine is dedicated to transactions of certain types or characteristics, and matches incoming transactions (e.g., in pairs) according to various algorithms. Matched transactions are fulfilled and confirmations are provided to the appropriate entities. [006] Advantageously, each of these components operates on an FPGA with purpose-built logic in order to perform its operations at line speed (e.g., up to 1-10 gigabits per second, or higher in some cases). Doing so eliminates the x86-64-based software bottleneck in today’s exchanges. Also, removing the traditional software network stack processing in the exchange results in the exchange having fewer security vulnerabilities and makes any remaining security vulnerabilities much more difficult to exploit. [007] Accordingly, a first example embodiment may involve an FPGA based gateway comprising: (i) a first network interface configured to receive data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects, (ii) validation filter memory configured to store filter rules, and (iii) a sequence of validation logic circuitry configured to validate one or more headers or application-layer of the data packets in accordance with the filter rules, wherein at least one of the sequence of validation logic circuitry is configured to validate a header of a respective data packet while another portion of the respective data packet is being received by the first network interface, and wherein the validation logic circuitry is further configured to discard any data packets that do not conform to the filter rules. The first example embodiment may also involve an FPGA based router comprising: (i) a second network interface configured to receive the data packets that were validated from the FPGA based gateway, (ii) a forwarding table with entries mapping header field or application-layer field values to destination addresses, and (iii) parsing and lookup circuitry configured to compare the header field or application-layer field values in the data packets to those in the forwarding table and determine one of the destination addresses for each of the data packets. 
The first example embodiment may also involve an FPGA based matching engine comprising: (i) a third network interface configured to receive the data packets from the FPGA based router, (ii) state memory containing information related to the proposed transactions, (iii) policy memory containing policies to be applied to the proposed transactions, (iv) transaction validation circuitry configured to validate the proposed transactions based on their related information from the state memory and the policies, wherein the transaction validation circuitry is further configured to discard any data packets containing proposed transactions that do not conform to the policies, (v) pending transaction memory configured to store proposed transactions that were validated, and (vi) matching algorithm circuitry configured to match pairs of pending transactions according to pre-determined criteria. [008] A second example embodiment may involve receiving, by way of a first network interface of an FPGA based gateway, data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects. The second example embodiment may further involve validating, by way of a sequence of validation logic circuitry of the FPGA based gateway, one or more headers or application-layer of the data packets in accordance with filter rules, wherein at least one of the sequence of validation logic circuitry is configured to validate a header of a respective data packet while another portion of the respective data packet is being received by the first network interface, and wherein the validation logic circuitry is further configured to discard any data packets that do not conform to the filter rules. The second example embodiment may further involve receiving, by way of a second network interface of an FPGA based router, the data packets that were validated from the FPGA based gateway. The second example embodiment may further involve comparing, by way of parsing and lookup circuitry of the FPGA based router, header field or application-layer field values in the data packets to those in a forwarding table with entries mapping the header field or application-layer field values to destination addresses, wherein the comparing determines one of the destination addresses for each of the data packets. The second example embodiment may further involve receiving, by way of a third network interface of an FPGA based matching engine, the data packets from the FPGA based router. The second example embodiment may further involve validating, by way of transaction validation circuitry of the FPGA based matching engine, the proposed transactions based on their related information from state memory and policies to be applied to the proposed transactions, wherein the transaction validation circuitry is further configured to discard any data packets containing proposed transactions that do not conform to the policies. The second example embodiment may further involve matching, by way of matching algorithm circuitry of the FPGA based matching engine, pairs of pending transactions according to pre-determined criteria, wherein the pending transactions are proposed transactions that were validated and stored in pending transaction memory. 
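To make the cut-through validation recited in the first and second example embodiments concrete, the sketch below models, in software, how header fields can be checked as soon as their final bits arrive rather than after the whole packet has been buffered. It is only an illustrative model: the field offsets assume an untagged IPv4-over-Ethernet frame with offsets relative to the destination address field, the rule names are hypothetical, and the embodiments themselves implement the equivalent checks in FPGA logic circuitry.

```python
# Illustrative software model of cut-through header validation.
# Offsets assume an untagged IPv4-over-Ethernet frame; rule names are hypothetical.

FIELDS = [  # (name, offset in bytes, length in bytes, check)
    ("eth_dst", 0, 6, lambda v, r: v == r["gateway_mac"]),
    ("eth_src", 6, 6, lambda v, r: v in r["source_whitelist"]),
    ("ethertype", 12, 2, lambda v, r: v == b"\x08\x00"),  # IPv4 only
]

def cut_through_validate(byte_stream, rules):
    """Check each field as soon as its last byte arrives.

    Returns the buffered frame if all checks pass, or None the moment a
    check fails (a hardware gateway would simply stop forwarding the frame).
    """
    buf = bytearray()
    pending = list(FIELDS)
    for b in byte_stream:  # bytes arrive one at a time from the network
        buf.append(b)
        while pending and len(buf) >= pending[0][1] + pending[0][2]:
            _name, offset, length, check = pending.pop(0)
            if not check(bytes(buf[offset:offset + length]), rules):
                return None  # discard mid-reception; remaining bytes are ignored
    return bytes(buf) if not pending else None
```

In this model, a frame whose source address is not on the whitelist is rejected after only twelve bytes have been received, which mirrors the described behavior of validating one header while the rest of the data packet is still arriving.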
[009] A third example embodiment may involve an FPGA based gateway comprising: a network interface configured to receive data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects; validation filter memory configured to store filter rules; and a sequence of validation logic circuitry configured to validate one or more headers or application-layer of the data packets in accordance with the filter rules, wherein at least one of the sequence of validation logic circuitry is configured to validate a header of a respective data packet while another portion of the respective data packet is being received by the network interface, and wherein the validation logic circuitry is further configured to discard any data packets that do not conform to the filter rules. [010] A fourth example embodiment may involve receiving, by way of a network interface of an FPGA based gateway, data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects. The fourth example embodiment may further involve validating, by way of a sequence of validation logic circuitry of the FPGA based gateway, one or more headers or application-layer of the data packets in accordance with filter rules, wherein at least one of the sequence of validation logic circuitry is configured to validate a header of a respective data packet while another portion of the respective data packet is being received by the network interface, and wherein the validation logic circuitry is further configured to discard any data packets that do not conform to the filter rules. [011] A fifth example embodiment may involve an FPGA based router comprising: a network interface configured to receive data packets that were validated by an FPGA based gateway; a forwarding table with entries mapping header field or application-layer field values to destination addresses; and parsing and lookup circuitry configured to compare the header field or application-layer field values in the data packets to those in the forwarding table and determine one of the destination addresses for each of the data packets [012] A sixth example embodiment may involve receiving, by way of a network interface of an FPGA based router, data packets that were validated by an FPGA based gateway. The sixth example embodiment may also involve comparing, by way of a parsing and lookup circuitry of the FPGA based router, header field or application-layer field values in the data packets to those in a forwarding table with entries mapping the header field or application-layer field values to destination addresses, wherein the comparing determines one of the destination addresses for each of the data packets. 
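As a rough software analogue of the forwarding table described in the fifth and sixth example embodiments, the sketch below keys entries on a (source identifier, transaction subject) pair and falls back to a per-subject entry or a default destination. The key layout, entry names, and fallback order are assumptions for illustration; the router embodiments perform the equivalent lookup in parsing and lookup circuitry.

```python
# Hypothetical forwarding table mapping packet field values to destinations.
FORWARDING_TABLE = {
    ("broker-17", "XYZ"): "matching-engine-slice-2",  # e.g., a dark-pool slice
    (None, "XYZ"): "matching-engine-slice-1",         # any other source trading XYZ
}

def route(source_id, subject, default=None):
    """Return a destination for a packet, most-specific entry first."""
    for key in ((source_id, subject), (None, subject)):
        if key in FORWARDING_TABLE:
            return FORWARDING_TABLE[key]
    return default  # a None result could mean "discard the packet and log an error"
```

Keeping the table as a simple key-to-destination map is what allows the lookup to be performed in a single pass over the relevant header and application-layer fields.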
[013] A seventh example embodiment may involve an FPGA based matching engine comprising: a network interface configured to receive data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects; state memory containing information related to the proposed transactions; policy memory containing policies to be applied to the proposed transactions; transaction validation circuitry configured to validate the proposed transactions based on their related information from the state memory and the policies, wherein the transaction validation circuitry is further configured to discard any data packets containing proposed transactions that do not conform to the policies; pending transaction memory configured to store proposed transactions that were validated; and matching algorithm circuitry configured to match pairs of pending transactions according to pre-determined criteria. [014] An eighth example embodiment may involve receiving, by way of a network interface of an FPGA based matching engine, data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects. The eighth example embodiment may involve validating, by way of transaction validation circuitry of the FPGA based matching engine, the proposed transactions based on their related information from state memory and policies to be applied to the proposed transactions, wherein the transaction validation circuitry is further configured to discard any data packets containing proposed transactions that do not conform to the policies. The eighth example embodiment may involve matching, by way of matching algorithm circuitry of the FPGA based matching engine, pairs of pending transactions according to pre-determined criteria, wherein the pending transactions are proposed transactions that were validated and stored in pending transaction memory (a software sketch of this validation and matching step appears at the end of this summary). [015] In a ninth example embodiment, an article of manufacture may include a non-transitory computer-readable medium, having stored thereon program instructions that, upon execution by a computing system, cause the computing system to perform operations in accordance with any of the previous example embodiments. [016] In a tenth example embodiment, a computing system may include at least one processor, as well as memory and program instructions. The program instructions may be stored in the memory, and upon execution by the at least one processor, cause the computing system to perform operations in accordance with any of the previous example embodiments. [017] In an eleventh example embodiment, a system may include various means for carrying out each of the operations of any of the previous example embodiments. [018] These, as well as other embodiments, aspects, advantages, and alternatives, will become apparent to those of ordinary skill in the art by reading the following detailed description, with reference where appropriate to the accompanying drawings. Further, this summary and other descriptions and figures provided herein are intended to illustrate embodiments by way of example only and, as such, numerous variations are possible. For instance, structural elements and process steps can be rearranged, combined, distributed, eliminated, or otherwise changed, while remaining within the scope of the embodiments as claimed.
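As referenced in paragraph [014], the sketch below illustrates, in software, the two matching-engine steps described in the seventh and eighth example embodiments: a stateful policy check against information held for a source, followed by a simple price-crossing match against pending transactions. The order fields, the per-source value limit, and the first-match selection are assumptions made for illustration, not the matching engine's actual policies or algorithm.

```python
# Illustrative matching-engine step: a stateful policy check followed by a
# simple price-crossing match. Field names and the policy are assumptions.
from dataclasses import dataclass

@dataclass
class Order:
    source: str    # entity originating the proposed transaction
    subject: str   # transaction subject, e.g., a ticker symbol
    side: str      # "buy" or "sell"
    units: int
    price: float

def passes_policy(order, fulfilled_value, value_limit):
    """Reject an order that would push the source's fulfilled value over a limit."""
    running = fulfilled_value.get((order.source, order.subject), 0.0)
    return running + order.units * order.price <= value_limit

def try_match(incoming, pending):
    """Match an incoming order against pending orders of the opposite side.

    A buy and a sell match when the buy price meets or exceeds the sell price;
    otherwise the incoming order is stored as a pending transaction.
    """
    for resting in pending:
        if resting.subject != incoming.subject or resting.side == incoming.side:
            continue
        buy, sell = (incoming, resting) if incoming.side == "buy" else (resting, incoming)
        if buy.price >= sell.price:
            pending.remove(resting)
            return resting  # matched pair; confirmations would be sent here
    pending.append(incoming)
    return None
```

Orders that fail the policy check would be discarded and logged rather than appended to the pending transaction memory.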

BRIEF DESCRIPTION OF THE DRAWINGS [019] Figure 1 depicts a transaction exchange connected to a network and in communication with client devices, in accordance with example embodiments. [020] Figure 2 depicts an architecture for an example exchange, in accordance with example embodiments. [021] Figure 3A depicts a possible enclosure-based implementation of a transaction exchange, in accordance with example embodiments. [022] Figure 3B depicts another possible enclosure-based implementation of a transaction exchange, in accordance with example embodiments. [023] Figure 4 depicts an example FPGA-based gateway component, in accordance with example embodiments. [024] Figure 5 depicts an example FPGA-based router component, in accordance with example embodiments. [025] Figure 6 depicts an example FPGA-based matching engine component, in accordance with example embodiments. [026] Figure 7 is a flow chart, in accordance with example embodiments. [027] Figure 8 is a flow chart, in accordance with example embodiments. [028] Figure 9 is a flow chart, in accordance with example embodiments. [029] Figure 10 is a flow chart, in accordance with example embodiments.

DETAILED DESCRIPTION [030] Example methods, devices, and systems are described herein. It should be understood that the words “example” and “exemplary” are used herein to mean “serving as an example, instance, or illustration.” Any embodiment or feature described herein as being an “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or features unless stated as such. Thus, other embodiments can be utilized and other changes can be made without departing from the scope of the subject matter presented herein. [031] Accordingly, the example embodiments described herein are not meant to be limiting. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations. For example, the separation of features into “client” and “server” components may occur in a number of ways. [032] Further, unless context suggests otherwise, the features illustrated in each of the figures may be used in combination with one another. Thus, the figures should be generally viewed as component aspects of one or more overall embodiments, with the understanding that not all illustrated features are necessary for each embodiment. [033] Use of the term “or” may refer to one, the other, or both of the operands that the term joins. For example, “A or B” refers to just “A”, just “B”, or “A and B”. Use of the term “or” with three or more operands is to be interpreted similarly. [034] Additionally, any enumeration of elements, blocks, or steps in this specification or the claims is for purposes of clarity. Thus, such enumeration should not be interpreted to require or imply that these elements, blocks, or steps adhere to a particular arrangement or are carried out in a particular order. I. Network Environment [035] Figure 1 depicts an example system 100 that contains client devices 102, network 104, and transaction exchange 106. In various embodiments, more or fewer components may be present. [036] Client devices 102 may be any combination of hardware and/or software that can generate transaction proposals and/or receive confirmations, rejections, or errors. Thus, client devices 102 may include software applications operating on a standard personal computer or server, as well as platforms based on dedicated hardware. While three of client devices 102 are shown in Figure 1, more or fewer may be present. [037] Network 104 may be a packet-switched or circuit-switched data network, such as the Internet or another type of private or public Internet Protocol (IP) network. Thus, network 104 could include one or more local-area networks, wide-area networks, or some combination thereof. Network 104 connects client devices 102 to transaction exchange 106 such that transaction proposals from client devices 102 may be received by transaction exchange 106, and response messages from transaction exchange 106 (e.g., confirmations, rejections, errors) may be received by client devices 102. In some embodiments, network 104 may be configured to be able to sustain very high data rates between client devices 102 and transaction exchange 106, such as 1 gigabit per second, 10 gigabits per second, or even higher data rates. 
[038] Transaction exchange 106 (also referred to as “exchange 106” for purposes of simplicity) may include one or more computing devices or computing components arranged to carry out transactions proposed by client devices 102. Thus, in some embodiments, exchange 106 may receive transaction proposals from client devices 102 and match these proposals in a pairwise fashion to facilitate their completion. [039] For example, the transaction proposals may entail orders for traded securities. An order may include an identifier of the security (e.g., a ticker symbol), a side (e.g., buy long, buy cover, sell long, sell short), a number of units of the security (e.g., 100 shares), a type of order (e.g., market or limit), and a proposed price (e.g., $50.00), among other possible values. This information may be encoded in various ways, for example in text fields and/or binary fields. [040] Other types of transactions may be supported, such as task scheduling on distributed systems, auctions, exchange of cryptocurrencies for fiat currencies or other items of value, the sale and purchase of non-fungible tokens (NFTs), and so on. Any sort of market where buyers and sellers come together to place bids or offers (propose transactions) that can be fulfilled could be supported. [041] Exchange 106 matches transaction proposals according to a set of rules (e.g., algorithms) in order to carry out the transactions. Such rules can vary in scope and complexity. Generally, a buy order and a sell order can be matched when the price of the buy order meets or exceeds the price of the sell order. [042] As an example, price/time based algorithms match incoming transaction proposals in a first-in-first-out (FIFO) manner, where buy orders are matched to sell orders according to their price and time of arrival. Buy orders are given priority in decreasing order of price; when buy orders are for the same price, priority is given to these orders based on their times of arrival (e.g., orders that arrived earlier have higher relative priority). [043] As another example, pro-rata based algorithms also give priority to buy orders with the highest prices. However, buy orders with the same price are matched to sell orders in proportion to order size. Suppose, for instance, that buy orders for 300 shares and 100 shares of the same security have been received, and that a sell order for 300 shares of the same security arrives. A pro-rata algorithm could match 225 shares of the 300-share buy order and 75 shares of the 100-share buy order to the sell order. Thus, 75% of each buy order was fulfilled (a software sketch of this allocation appears at the end of this section). [044] Other matching algorithms of various levels of complexity may be supported by exchange 106. Further, different matching algorithms may be supported in parallel by exchange 106 for different securities, different exchange entities, or different transaction pools. For example, exchange 106 may be virtualized into multiple slices, each slice being a logical exchange. In some embodiments, various securities may be distributed across these slices so that each security can only be traded on one particular slice. In other embodiments, different slices may operate as independent exchanges under the control of a particular agency, broker, or unaffiliated electronic market. In still more embodiments, some slices may be so-called “dark pools”, or private exchanges only accessible to a limited number of institutional entities, while others may be “light pools”, or public exchanges available to virtually any entity. 
Each slice may operate its own matching engine that executes one or more matching algorithms independently of the other slices. [045] Given the complexity of handling multiple slices and various matching algorithms, exchanges have traditionally been implemented in software. As noted above, however, network data rates have grown by orders of magnitude in the past few decades. While computing hardware has also increased in speed and capacity, this increase has not, on a relative basis, kept pace with improvements in networking technology. Exchange x86-64-based software is still a bottleneck on overall exchange throughput. [046] Advantageously, implementing exchange 106 in hardware, such as FPGAs, allows exchange 106 to operate at network speed. Further, the purpose-built nature of the logic deployed to FPGAs facilitates more efficiencies, such as bypassing the overhead of data packets traversing a traditional network protocol stack in software, or hardware bus methods such as Peripheral Component Interconnect express (PCIe), which provide interconnectivity between the x86-64-based resources and the physical network interface. Additionally, this approach results in fewer security vulnerabilities, such as those based on quirks of certain operating systems, applications, processes, or libraries. II. Example Hardware Exchange Architecture [047] To that point, an example hardware-based architecture for exchange 106 is shown in Figure 2. Particularly, the functionality of exchange 106 is divided into three logical components – gateway 200, router 202, and matching engine 204. Also shown in Figure 2 are network 104 and market data repository 206. [048] To facilitate understanding of these embodiments, it is assumed that gateway 200 is configured to receive data packets in the form of Ethernet frames from network 104. These frames may contain 176 bits of Ethernet header, followed by 160 bits of IP header. The IP header may be followed by either 160 bits of TCP header, or 64 bits of UDP header. The TCP header or the UDP header may be followed by n bits of application-layer data, where n may vary based on the application and type of transaction. For example, 64 bits of application-layer data could be used for a binary or textual encoding of the identifier of the security, a side, a number of units of the security, a type of order, and a proposed price. These numbers assume that no virtual local area network (VLAN) tags, optional IP headers, or optional TCP headers are used, each of which could increase their respective Ethernet, IP, or TCP header lengths. Nonetheless, the embodiments herein could handle different header arrangements as well as different types of headers. [049] A high-level overview of the functionality of gateway 200, router 202, and matching engine 204 is provided immediately below. More technical detail regarding their implementation is provided later in this specification. In general, however, transaction proposals flow from left to right in Figure 2, i.e., from network 104 to gateway 200, router 202, and then matching engine 204. Conversely, confirmations, rejections, and/or errors related to transaction proposals flow from right to left in Figure 2, e.g., from any of gateway 200, router 202, or matching engine 204 to network 104. It is assumed but not shown in Figure 2 that gateway 200, router 202, and matching engine 204 are connected by way of one or more local area networks. A. Gateway [050] Gateway 200 serves as a form of application-specific firewall for exchange 106. 
Gateway 200 is configured to only permit certain types of valid proposed transactions into exchange 106. Gateway 200 is implemented using FPGA technology to receive data packets from network 104 as a bit stream, and to validate portions of the data packets in a cut-through or pipelined fashion.
[051] Thus, for example, gateway 200 may be configured to validate a standard Ethernet header as soon as it receives the first 176 bits of an Ethernet frame. Alternatively, gateway 200 may be configured to validate each field of the frame as it arrives (first validating that the destination Ethernet address is properly configured on gateway 200, then that the source Ethernet address is in a whitelist of addresses that gateway 200 is permitted to communicate with, and so on). Similar procedures can be carried out for IP, TCP, and/or UDP headers.
[052] For the application-layer data, gateway 200 checks its n bits to ensure that the transaction encoded therein appears to be valid. For example, this may involve validating that the security is known and supported, the side is properly specified, the number of units is within a reasonable range of values, the type of order is permitted, and the proposed price is within a reasonable range of values. In some cases, these checks may involve considering values of the application-layer data and one or more of the headers. For instance, a check could be configured such that a sell order from the client device at IP address 10.154.34.172 is permitted only for certain securities and/or with a volume of no more than 1000 units.
[053] If gateway 200 finds that all of the headers / fields that it checks are valid, then gateway 200 transmits the data packet contained within the Ethernet frame to router 202. If one or more of these checks fail, gateway 200 may stop further validation processing of the Ethernet frame, log the detected error to a log file or external destination (not shown), and discard the frame. In this way, only data packets containing proposed transactions that are highly likely to be valid are provided for further processing by exchange 106. Any non-conforming data packets are discarded upon ingress, reducing the processing required at exchange 106 while also increasing its security. As such, gateway 200 (and possibly one or more other gateways performing similar functions) forms a hardened security perimeter around the rest of exchange 106.
B. Router
[054] Router 202 receives incoming data packets from gateway 200 and forwards them on to matching engine 204. Particularly, router 202 may skip some or all header and/or application-layer validations that were performed by gateway 200 and instead identify information regarding the proposed transaction in the application-layer data. This information may be used to route the data packets to a particular slice of matching engine 204.
[055] For example, router 202 may perform cut-through validation of Ethernet header fields as they arrive in frames. If these validations pass, then router 202 may refrain from validating any IP, TCP, or UDP headers, as those were already validated by gateway 200. Likewise, router 202 may refrain from validating the application-layer data. But router 202 may use one or more fields of the headers or application-layer data to route a given data packet to an appropriate slice of matching engine 204.
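As a conceptual illustration of the cut-through validation described above for gateway 200, the following Python sketch models a pipeline that inspects each protocol layer as soon as enough bytes of the frame have arrived, rather than buffering the whole frame first. It is a behavioral model only; the byte thresholds follow the header sizes given in paragraph [048], and the stage names and placeholder checks are assumptions rather than the actual filter rules.

```python
def validate_ethernet(hdr):  # placeholder checks; real rules live in validation filter memory
    return len(hdr) >= 22

def validate_ip(hdr):
    return len(hdr) == 20

def validate_tcp(hdr):
    return len(hdr) == 20

# Stage boundaries in cumulative bytes, from paragraph [048]'s header sizes
# (176-bit Ethernet header, 160-bit IP header, 160-bit TCP header).
STAGES = [
    ("ethernet", 22, lambda data: validate_ethernet(data[:22])),
    ("ip",       42, lambda data: validate_ip(data[22:42])),
    ("tcp",      62, lambda data: validate_tcp(data[42:62])),
]

def cut_through(chunks):
    """Consume a frame as it arrives; run each stage as soon as its bytes are in.
    Returns False (and stops reading) the moment any stage fails."""
    received = bytearray()
    pending = list(STAGES)
    for chunk in chunks:               # chunks model bytes arriving on the wire
        received.extend(chunk)
        while pending and len(received) >= pending[0][1]:
            name, threshold, check = pending.pop(0)
            if not check(bytes(received)):
                return False           # discard: later bytes never need to arrive
    return not pending                 # True only if every stage ran and passed

frame = bytes(80)                      # dummy 80-byte frame fed in 8-byte chunks
print(cut_through(frame[i:i + 8] for i in range(0, len(frame), 8)))  # True
```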
[056] As one possible example, router 202 may consider the source IP address and identifier of the security to determine the slice. Router 202 may be configured to route transaction proposals from certain IP addresses and related to a particular security to a slice supporting a dark pool, and all other transaction proposals related to this security to a slice supporting a light pool.
[057] To facilitate this routing, router 202 may be configured with a forwarding table, each entry of which indicates values of information fields within incoming data packets and an associated slice. Once the information fields in an incoming data packet are read, router 202 may search for a matching entry in the forwarding table. If one is found, the data packet is routed to the appropriate slice. If there is no match, a default entry may be applied to route the data packet to a default slice, or the data packet may be discarded and an error logged.
C. Matching Engine
[058] As discussed, matching engine 204 matches transactions according to matching rules. These rules may be implemented independently per slice as algorithms with various levels of complexity.
[059] Not unlike router 202, matching engine 204 may skip some or all packet header validations that were performed by gateway 200. For example, matching engine 204 may perform cut-through validation of Ethernet header fields as they arrive in frames. If these validations pass, then matching engine 204 may refrain from validating any IP, TCP, or UDP headers, as those were already validated by gateway 200.
[060] To support matching across multiple slices, proposed transactions may be stored in memory (e.g., dynamic random access memory (DRAM)) in matching engine 204 until they are fulfilled or expire. Each proposed transaction as stored may include its information (an identifier of the security, a side, a number of units of the security, a type of order, and a proposed price) as well as an indicator of the slice to which the proposed transaction belongs. Proposed transactions may be stored in various ways, for instance in sorted or unsorted data structures or in tables organized per slice.
[061] Upon receiving a new incoming proposed transaction, matching engine 204 may validate the proposed transaction. This may involve a similar set of application-layer validations to those performed by gateway 200 (e.g., that the security is known and supported, the side is properly specified, the number of units is within a reasonable range of values, the type of order is permitted, and the proposed price is within a reasonable range of values), but may also include more stateful validations.
[062] For instance, exchange 106 may limit the total number of transactions per time period for a given client device, or may limit the total value of transactions per time period for the client device. As an example of the latter, exchange 106 may be configured to prevent sales of more than $100,000 of any one security during a day for a particular client device. Thus, matching engine 204 would need to maintain state, for the client device, relating to the total value of transactions performed involving the security. If a proposed transaction would exceed the $100,000 threshold, the proposed transaction would be rejected. Other examples of state-based validations may be possible.
[063] Once the proposed transaction is validated, matching engine 204 may execute the associated rules to determine whether there is a match for this proposed transaction. If there is, the transaction is carried out and confirmations are sent to the originating client devices. The transaction may be logged and the matched proposed transactions deleted from the slice. If there is not, the slice may queue the proposed transaction for a period of time so that it can be fulfilled later.
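The following Python sketch summarizes the match-or-queue flow of paragraph [063] for a single slice: a validated proposal is matched against resting proposals on the opposite side if prices cross, and otherwise queued. It is a simplified behavioral model under assumptions of its own (exact-size matches only), and names such as Book and submit are illustrative rather than elements of the embodiments.

```python
from collections import deque

class Book:
    """Minimal single-slice book: resting buys and sells, matched on price crossing."""
    def __init__(self):
        self.buys, self.sells = deque(), deque()

    def submit(self, side, price, qty):
        resting = self.sells if side == "buy" else self.buys
        for order in list(resting):
            crosses = price >= order["price"] if side == "buy" else price <= order["price"]
            if crosses and order["qty"] == qty:        # simplification: exact-size matches only
                resting.remove(order)
                return ("filled", order)               # confirmations would be sent here
        # No match: queue the proposal so it can be fulfilled later.
        (self.buys if side == "buy" else self.sells).append({"price": price, "qty": qty})
        return ("queued", None)

book = Book()
print(book.submit("sell", 50.00, 100))   # ('queued', None)
print(book.submit("buy", 50.25, 100))    # ('filled', {'price': 50.0, 'qty': 100})
```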
[064] If a pending transaction is queued for more than a predetermined duration (e.g., 1 minute, 10 minutes, 60 minutes, or until the end of the trading day), it may expire. Proposed transactions that expire may also be logged, and a rejection may be sent to the originating client device.
[065] Market data repository 206 may be one or more databases or data sources accessible to matching engine 204. Market data repository 206 may provide lists of supported or unsupported securities, updates to matching rules, up-to-date security pricing data, and other information usable by matching engine 204. One of the advantages of the hardware-based system described herein is that it can update market data repository 206 with information about fulfilled transactions in nanoseconds rather than milliseconds, as is the case for traditional techniques. Thus, market data can be distributed faster and more accurately.
III. Enclosure-Based Exchange Implementation
[066] In some embodiments, components of exchange 106 may be implemented on cards that can be plugged into computing enclosures, such as a 4U or 6U rack-mountable chassis. Such an enclosure may provide power, ventilation, backplane communication, and physical protection to one or more cards that can be placed therein. Each card may slide into a slot on the backplane and be connected to the enclosure’s power supply and communication bus by way of the slot.
[067] Figures 3A and 3B depict front views of enclosure-based arrangements of exchange 106. Each exchange component, gateway 200, router 202, and matching engine 204, may be implemented on a standalone card. Thus, various combinations of cards may be deployed in enclosures.
[068] As an example, Figure 3A depicts an enclosure 300 containing six cards: gateway (GW) 200A, gateway 200B, router (RTR) 202A, router 202B, matching engine (ME) 204A, and matching engine 204B. Each type of card may be deployed in pairs as shown so that they exhibit 1:1 redundancy. Thus, gateways 200A and 200B are a redundant pair, routers 202A and 202B are a redundant pair, and matching engines 204A and 204B are a redundant pair. Not shown in Figure 3A or 3B, for the sake of simplicity, is any cabling connecting the cards to one another or to other devices.
[069] Such a redundant pair may operate in an active-backup arrangement in which one card of the pair is actively performing operations and the other is ready to take over operations should the active card fail or be removed. For example, gateway 200A may be active and gateway 200B may be backup. Thus, gateway 200A may be receiving incoming data packets and performing the gateway functions described herein, including forwarding at least some of these data packets to the active router. Gateway 200B may also be receiving the same incoming data packets, e.g., by way of an Ethernet tap upstream of both gateway 200A and 200B. But gateway 200B might not fully process these data packets and ultimately will discard them, since it is in backup mode. Nonetheless, gateway 200B may maintain enough state so that it can take over for gateway 200A as the active gateway.
[070] Such a switchover may occur if gateway 200A fails (e.g., fails to reply to a keep-alive heartbeat by way of the communication bus or an Ethernet port) or may be triggered by manual override. Upon switching over, gateway 200B becomes the active gateway and begins fully processing incoming packets in accordance with gateway functions. Gateway 200A either becomes the backup gateway, operating in the fashion previously described for gateway 200B, or is placed in a failed state.
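The following Python sketch models the heartbeat-driven switchover of paragraphs [069] and [070]. It is only a behavioral illustration under assumed parameters (a hypothetical heartbeat interval and miss threshold); in the embodiments this logic is carried out in hardware over the enclosure's communication bus or an Ethernet port.

```python
HEARTBEAT_INTERVAL_S = 0.5      # assumed values for illustration only
MISSED_BEFORE_FAILOVER = 3

class RedundantPair:
    """Active-backup pair: the backup promotes itself after missed heartbeats."""
    def __init__(self):
        self.active, self.backup = "200A", "200B"
        self.missed = 0

    def heartbeat_received(self):
        self.missed = 0

    def heartbeat_timeout(self):
        self.missed += 1
        if self.missed >= MISSED_BEFORE_FAILOVER:
            self.switch_over(reason="heartbeat loss")

    def switch_over(self, reason):
        # The former backup starts fully processing packets; the former active
        # card becomes the backup or is placed in a failed state.
        self.active, self.backup = self.backup, self.active
        self.missed = 0
        print(f"switchover ({reason}): {self.active} is now active")

pair = RedundantPair()
for _ in range(MISSED_BEFORE_FAILOVER):
    pair.heartbeat_timeout()                 # after 3 misses, 200B becomes active
pair.switch_over(reason="manual override")   # operators can also force a switchover
```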
[071] In an analogous fashion, routers 202A and 202B may be a redundant pair, and matching engines 204A and 204B may also be a redundant pair. For example, router 202A may be active while router 202B is backup, and matching engine 204A may be active while matching engine 204B is backup.
[072] In some embodiments, gateway, router, and/or matching engine cards may be placed in N:1 redundant arrangements, where N is typically between 2 and 5, inclusive. In these cases, if any of the N active cards of a given type fails, a single backup card of the same type will become active and take over its operations.
[073] Further, any of the cards in enclosure 300 may be hot-swappable, in that a card can be removed from enclosure 300 without having to power off any other cards or enclosure 300 as a whole. In this way, a faulty card can be replaced without impacting operation of the other cards.
[074] Figure 3B depicts a dual-enclosure arrangement. In this case, enclosure 300 contains six cards as described above, and enclosure 302 contains six additional cards for matching engines 204C, 204D, 204E, 204F, 204G, and 204H. The matching engine cards of enclosure 302 may be arranged in a 1:1 redundant fashion as described above. For example, matching engines 204C and 204D may be a redundant pair, matching engines 204E and 204F may be a redundant pair, and matching engines 204G and 204H may be a redundant pair. Alternatively, N:1 redundancy may be applied. Again, the cabling between cards in enclosure 302, as well as any between enclosures 300 and 302, is omitted.
[075] Some further embodiments not shown involve gateway, router, and/or matching engine cards being deployed in computing enclosures with third-party cards. For example, a matching engine may be placed in a computing chassis that supports one or more general-purpose computing cards configured to execute applications atop a LINUX® or WINDOWS® operating system. Such an enclosure may also include one or more storage cards containing solid state drives or hard disk drives. These applications may communicate with or otherwise be integrated with the matching engine.
[076] Regardless, the modular components of exchange 106 allow the functions of the exchange to be independently scaled. For instance, matching engine cards may each support less throughput than a single gateway card or router card, so two, three, or more matching engine cards may be deployed for each gateway and router card. Other arrangements are possible.
IV. Example FPGA-Based Exchange Components
[077] An FPGA represents an arrangement of hardware logic disposed upon a printed circuit board (PCB), for example. In some embodiments, other types of integrated circuit technology, such as an application-specific integrated circuit (ASIC), could be used in place of an FPGA. Regardless, an FPGA may include a number of hardware logic blocks. Units of memory (e.g., DRAM or other types of volatile storage or non-volatile storage), one or more processors (e.g., general purpose processors, graphical processors, encryption processors, and/or network processors), clocks, and network interfaces may be placed on or may interact with an FPGA.
[078] As noted above, each of gateway 200, router 202, and matching engine 204 may be implemented using FPGA technology. In the following sections, the focus is on the hardware blocks that support the functionality being provided by gateway 200, router 202, and matching engine 204. Thus, for the sake of simplicity, some modules (e.g., clocks, redundancy-supporting circuitry, and certain types of memory) may be omitted.
A. Gateway FPGA
[079] Figure 4 depicts an example gateway FPGA 400, configured to perform operations of gateway 200. Gateway FPGA 400 is shown containing network interfaces, protocol validation logic, memory, and protocol processing logic. As discussed, however, gateway FPGA 400 may contain additional components. Further, Figure 4 depicts a logical schematic and may not be drawn to scale.
[080] The front of gateway FPGA 400 contains two network interfaces, network interface 404 and network interface 406. Each may include one or more physical ports (e.g., gigabit Ethernet, 10 gigabit Ethernet, or 25 gigabit Ethernet, among others), though only one port for each is shown. Network interface 404 connects to network 104 and from it receives incoming data packets in the form of Ethernet frames. Network interface 406 connects to network 402 and on it transmits incoming data packets in the form of Ethernet frames to a router or another exchange component. Thus, network 402 may be a network internal to exchange 106.
[081] Each of network interface 404 and network interface 406 may include a small form-factor pluggable (SFP) transceiver module, a physical layer (PHY) module, and a physical coding sublayer (PCS) module. In some embodiments, an SFP+, SFP28, QSFP+, or QSFP28 transceiver could be used in place of the SFP transceiver module.
[082] Data packets incoming to exchange 106 may traverse gateway FPGA 400 in accordance with the arrows of Figure 4. Notably, at least MAC validation module 408, IP validation module 410, TCP/UDP validation module 412, and application validation module 414 may operate on incoming data packets (arranged as Ethernet frames) as bitstreams. There may be little buffering of such bits in these modules, and an entire data packet need not be received before a module can process the parts of the data packet that have been received.
[083] In other words, MAC validation module 408 may receive the first 176 bits of an incoming Ethernet frame, which contain the Ethernet header. Based on fixed offsets within these 176 bits, MAC validation module 408 may be able to identify the 6-byte destination address, 6-byte source address, and 2-byte ethertype fields. If VLAN tags are used, then some parsing of these tags may be required to locate information that is not at fixed offsets.
[084] MAC validation module 408 may perform validations based on the content of these fields as soon as the first 176 bits of the Ethernet frame are received, or as each bit is received. Thus, processing of these 176 bits and reception of subsequent bits of the Ethernet frame may occur in parallel. The validations performed may be based on filters defined in validation filter memory 416. This unit of memory may be DRAM, for example, storing a set of MAC-layer filter rules.
[085] These filters may whitelist (allow) or blacklist (deny) various combinations of destination addresses, source addresses, and/or ethertype fields. In some embodiments, only whitelist rules are used, and any data packet with an Ethernet frame not matching a whitelist rule would be discarded. In other embodiments, only blacklist rules are used, and any data packet with an Ethernet frame matching a blacklist rule may be discarded.
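To illustrate the fixed-offset field extraction and whitelist filtering described in paragraphs [083] through [085], the following Python sketch parses the destination address, source address, and ethertype from an untagged Ethernet header and checks them against a rule set. The offsets reflect the conventional 14-byte header layout (without the preamble), and the addresses and rule structure are illustrative assumptions; in the embodiments such rules reside in validation filter memory 416.

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Extract fields at fixed offsets of an untagged Ethernet header:
    6-byte destination, 6-byte source, 2-byte ethertype."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return dst.hex(":"), src.hex(":"), ethertype

# Example whitelist (illustrative values): only frames to the gateway's own MAC,
# from approved client MACs, carrying IPv4 (ethertype 0x0800) are allowed.
WHITELIST = {
    "dst": {"40:74:e0:11:22:33"},
    "src": {"40:74:e0:44:55:66", "40:74:e0:77:88:99"},
    "ethertype": {0x0800},
}

def mac_filter(frame: bytes) -> bool:
    """Return True if the header matches the whitelist; non-matching frames are
    discarded (and, in the embodiments, logged to log memory 418)."""
    dst, src, ethertype = parse_ethernet_header(frame)
    return (dst in WHITELIST["dst"]
            and src in WHITELIST["src"]
            and ethertype in WHITELIST["ethertype"])

frame = bytes.fromhex("4074e0112233" "4074e0445566" "0800") + bytes(50)
print(mac_filter(frame))   # True: destination, source, and ethertype all whitelisted
```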
[086] As an example, a whitelist may be configured to only allow incoming data packets that are from a specific set of source addresses, that are transmitted to a particular destination address, and that have an ethertype indicating that the Ethernet frame encapsulates an IP packet. All other packets may be discarded. Other filtering possibilities exist.
[087] When a data packet is discarded, information related to the data packet (e.g., at least some of its header values) may be written to log memory 418. This unit of memory may also be DRAM, for example. Periodically or from time to time, log memory 418 may be rotated or flushed, and/or may operate in a first-in-first-out fashion where the oldest log entry is overwritten by the newest entry when log memory 418 is full. Further, a rejection or error message may be transmitted to the sending client device (not shown in Figure 4).
[088] MAC validation module 408 may serve to decapsulate incoming data packets by removing their Ethernet headers. However, the values of certain fields of these headers may be maintained in metadata that accompanies the data packet as it traverses other modules of gateway FPGA 400.
[089] After MAC validation module 408 processing, IP validation module 410 may receive the 160 bits of the incoming data packet containing the IP header. In some cases, IP validation module 410 may also receive the bits of the Ethernet header, or metadata representing these bits, as well.
[090] It is again assumed that no optional IP header sections are used, but if they are present, IP validation module 410 may be configured to parse and validate these as well. Based on fixed offsets within these 160 bits, IP validation module 410 may be able to identify the version, header length, type of service, total length, identification, flags, fragment offset, time to live, protocol, header checksum, source IP address, and destination IP address fields.
[091] IP validation module 410 may perform validations based on the content of these fields as soon as the 160 bits of the IP header are received. Thus, processing of these 160 bits and reception of subsequent bits of the data packet may occur in parallel. Not unlike MAC validation module 408, the validations performed may be based on filters defined in validation filter memory 416.
[092] These filters may whitelist (allow) or blacklist (deny) various combinations of Ethernet and/or IP header fields. As an example, a blacklist may be configured to discard all data packets from a particular set of source IP addresses. Alternatively or additionally, the blacklist may contain a rule to discard any data packets that are fragmented (as indicated by the flags and/or fragment offset fields). Additionally, IP validation module 410 may validate that the IP header checksum is correct. If it is not correct, the data packet may be discarded. Other filtering possibilities exist.
[093] As noted above, information related to discarded data packets may be written to log memory 418. Further, a rejection or error message may be transmitted to the sending client device (not shown in Figure 4).
[094] After IP validation module 410 processing, TCP/UDP validation module 412 may receive the 160 bits of the incoming data packet containing the TCP header or the 64 bits of the incoming data packet containing the UDP header.
In some cases, TCP/UDP validation module 412 may also receive the bits of the Ethernet and/or IP headers (or representative metadata) as well.
[095] TCP/UDP validation module 412 represents validation for both TCP and UDP headers, and may be implemented as two separate modules in practice. But it is shown as one in Figure 4 for purposes of simplicity. In the case of separate modules for TCP header and UDP header validation, one of these modules may be selected based on the value of the protocol field in the IP header. It is again assumed that no optional TCP header sections are used, but if they are present, TCP/UDP validation module 412 may be configured to parse and validate these as well.
[096] In the case of a TCP header, based on fixed offsets within its 160 bits, TCP/UDP validation module 412 may be able to identify the source port, destination port, sequence number, acknowledgement number, offset, reserved, flags, window size, checksum, and urgent pointer fields. In the case of a UDP header, based on fixed offsets within its 64 bits, TCP/UDP validation module 412 may be able to identify the source port, destination port, length, and checksum fields.
[097] TCP/UDP validation module 412 may perform validations based on the content of these fields as soon as the 160 bits of the TCP header or the 64 bits of the UDP header are received. Thus, processing of these bits and reception of subsequent bits of the data packet may occur in parallel. Not unlike MAC validation module 408 and IP validation module 410, the validations performed may be based on filters defined in validation filter memory 416.
[098] These filters may whitelist (allow) or blacklist (deny) various combinations of Ethernet, IP, or TCP/UDP header fields. As an example, a blacklist may be configured to discard all data packets that are not directed to a specific one of several destination IP address / destination port combinations used by exchange 106. Alternatively or additionally, the blacklist may contain a rule to discard any data packets with the TCP urgent pointer set to a value other than zero. Other filtering possibilities exist.
[099] As noted above, information related to discarded data packets may be written to log memory 418. Further, a rejection or error message may be transmitted to the sending client device (not shown in Figure 4).
[100] After TCP/UDP validation module 412 processing, application validation module 414 may receive the n bits of the incoming data packet containing the application data. In some cases, application validation module 414 may also receive the bits of the Ethernet, IP, and/or TCP/UDP headers as well.
[101] Depending on how the application data is arranged, application validation module 414 may either identify the fields therein based on fixed offsets or by parsing the application data in linear or semi-linear fashion. By way of these techniques, a proposed transaction (defined by, e.g., an identifier of the security, a side, a number of units of the security, a type of order, and a proposed price) can be determined.
Sy    Si (BL=00; BC=01; SL=10; SS=11)    Qt    Px.P    Px.L    Px.R
Table 1
[102] An example 64-bit binary encoding of application-layer data is shown in Table 1. The Sy field represents the security's symbol in a binary format. The Si field represents the transaction side.
The two bits of this field encode the transaction type, such as buy long (BL) when the transaction is a purchase for holding, buy cover (BC) when the transaction is a purchase to close or cover a previous short sale, sell long (SL) when the transaction is a sale of securities that are currently held, and sell short (SS), which creates an obligation to sell (cover) the security at a future time. The Qt field represents the quantity of the transaction as a count of units represented in binary. These units may be either individual securities or lots of two or more. The Px.P field indicates whether the transaction specifies a negative or positive price. Certain types of asset classes (e.g., futures) in certain exchanges may be specified by convention with a negative price. Thus, this field may represent an arithmetic sign, plus or minus. The Px.L field indicates the portion of the price of the security that is to the left of the decimal point, and the Px.R field indicates the portion of this price that is to the right of the decimal point.
[103] In alternative embodiments, different numbers of bits may be used to encode these fields. For example, 7 bits could be used to encode the Px.R field so that two digits to the right of the decimal can be represented. In some embodiments, one or more brokers may be identified in the binary representation.
[104] As a concrete instance, a transaction involving the long purchase of 100 units of the security ABC Corp., which has a price of $32.40 per unit, may be encoded as follows. The Sy field may encode the ticker symbol of ABC Corp. or some other representation of the security; with a 17-bit Sy field, 2^17 = 131,072 different securities can be referenced in this fashion. The Si field may take on a value of 00 (buy long). The Qt field may encode the value of 100 in binary. The Px.P field may have a value of 1, indicating a positive price. The Px.L field may encode 32 in binary, and the Px.R field may encode 4 in binary.
[105] Notably, the embodiment and example shown above are just one possibility. Other encoding arrangements may exist. For instance, different trading exchanges and different asset classes could have different encodings within the bits of the application-layer data.
[106] In an alternative embodiment, the proposed transaction could be encoded in one or more custom VLAN tags. Such an encoding may be based on or conform to the embodiments described in U.S. Patent No. 10,880,211, issued December 29, 2020, and hereby incorporated by reference in its entirety. The latter approach would result in the proposed transaction being provided to application validation module 414 during Ethernet header reception.
[107] Regardless, application validation module 414 may perform validations based on the content of these fields as soon as these bits are received. Not unlike MAC validation module 408, IP validation module 410, and TCP/UDP validation module 412, the validations performed may be based on filters defined in validation filter memory 416. These filters may whitelist (allow) or blacklist (deny) various combinations of Ethernet header, IP header, TCP/UDP header, and/or application-layer fields.
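The following Python sketch packs and unpacks a proposed transaction along the lines of the encoding in Table 1 and the ABC Corp. example of paragraph [104]. The exact bit widths (17 bits for Sy, 2 for Si, and so on) are assumptions chosen so that the fields sum to 64 bits, and the security number 5565 is a hypothetical symbol reference; the embodiments may allocate the bits differently.

```python
# Assumed field widths (must sum to 64); the document does not fix these exactly.
FIELDS = [("sy", 17), ("si", 2), ("qt", 20), ("px_p", 1), ("px_l", 20), ("px_r", 4)]
SIDES = {"BL": 0b00, "BC": 0b01, "SL": 0b10, "SS": 0b11}

def encode(sy, side, qty, price):
    """Pack a proposed transaction into a 64-bit integer, most significant field first."""
    px_l, px_r = int(abs(price)), round(abs(price) * 10) % 10   # one digit right of the decimal
    values = {"sy": sy, "si": SIDES[side], "qt": qty,
              "px_p": 1 if price >= 0 else 0, "px_l": px_l, "px_r": px_r}
    word = 0
    for name, width in FIELDS:
        word = (word << width) | (values[name] & ((1 << width) - 1))
    return word

def decode(word):
    out = {}
    for name, width in reversed(FIELDS):
        out[name] = word & ((1 << width) - 1)
        word >>= width
    return out

# Paragraph [104]: buy long 100 units of ABC Corp. (symbol number assumed 5565) at $32.40.
w = encode(sy=5565, side="BL", qty=100, price=32.40)
print(f"{w:064b}")
print(decode(w))   # {'px_r': 4, 'px_l': 32, 'px_p': 1, 'qt': 100, 'si': 0, 'sy': 5565}
```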
[108] Possible validations may include ensuring that each field has a valid value when viewed in isolation, as well as that various combinations of fields have valid values when viewed in aggregate. For example, application validation module 414 may ensure that the Sy field specifies a known symbol, and/or that the combination of the Sy and Qt fields does not specify a proposed transaction that exceeds certain predetermined quantity limits for the security specified by the symbol. Various formats of fields and orderings of fields may be supported. For example, three or four different types of messages may be supported, each with a different set of fields and ordering of fields. Application validation module 414 may discard any data packet that does not conform to one of these types.
[109] Further, the source IP address from the IP header may be used to identify the entity (e.g., client device) originating the proposed transaction, and further limits may be applied on an entity-by-entity basis. Moreover, application validation module 414 may track various connection states, such as TCP sequence and/or acknowledgement numbers. If these values are improper, this may also cause application validation module 414 to discard a data packet. If more than a threshold number of improper packets from a particular client device are received within a threshold amount of time (e.g., 10 in one second), application validation module 414 may terminate the TCP connection with the client device and/or otherwise block further communication with the client device. Other filtering possibilities exist.
[110] As noted above, information related to discarded data packets may be written to log memory 418. A rejection or error message may be transmitted to the sending client device (not shown in Figure 4).
[111] After application validation module 414 completes its operations on the bits of a data packet, the data packet may be passed to MAC processing module 420. This module determines the source and destination Ethernet addresses (and ethertype) for an Ethernet header, and then encapsulates the data packet in this new Ethernet header. The source Ethernet address may be that of gateway FPGA 400, and the destination Ethernet address may be that of router 202, for example. The resulting Ethernet frame is provided to network interface 406, which is configured to carry out transmission of the frame by way of network 402. In some cases, the same network interface may be used for both receiving and transmitting Ethernet frames.
[112] In some embodiments, once the Ethernet trailer of a frame is received, the frame check sequence (FCS) may be calculated for the frame (not shown). If the FCS check fails, then the data packet is discarded prior to forwarding, even if all other validation checks passed.
[113] The rear of gateway FPGA 400 contains an interface to PCIe bus 422, which may connect to a host system. Gateway FPGA 400 may obtain power from PCIe bus 422, and use signaling capabilities of PCIe bus 422 to manage redundancy and failure procedures. Gateway FPGA 400 may also receive firmware updates by way of PCIe bus 422.
B. Router FPGA
[114] Figure 5 depicts an example router FPGA 500, configured to perform operations of router 202. Router FPGA 500 is shown containing network interfaces, protocol validation logic, memory, lookup logic, and protocol processing logic. As discussed, however, router FPGA 500 may contain additional components. Further, Figure 5 depicts a logical schematic and may not be drawn to scale.
[115] The front of router FPGA 500 contains two network interfaces, network interface 504 and network interface 506. Each may include one or more physical ports (e.g., gigabit Ethernet, 10 gigabit Ethernet, or 25 gigabit Ethernet), though only one port for each is shown.
Network interface 504 connects to network 402 and from it receives incoming data packets in the form of Ethernet frames. These data packets may have been transmitted by gateway FPGA 400. Network interface 506 connects to network 502 and on it transmits incoming data packets in the form of Ethernet frames to a matching engine or another exchange component. Thus, network 402 and network 502 may both be internal to exchange 106. In some embodiments, just one network interface may be present.
[116] Each of network interface 504 and network interface 506 may include a small form-factor pluggable (SFP) transceiver module, a physical layer (PHY) module, and a physical coding sublayer (PCS) module. In some embodiments, an SFP+, SFP28, QSFP+, or QSFP28 transceiver could be used in place of the SFP transceiver module.
[117] Data packets incoming to exchange 106 may traverse router FPGA 500 in accordance with the arrows of Figure 5. Thus, MAC processing module 508 may act on the Ethernet headers of these data packets. Particularly, MAC processing module 508 may perform a simple validity check of these headers (e.g., that the destination address is one that is assigned to router FPGA 500, that the source address is not clearly improper (e.g., a source address of ff:ff:ff:ff:ff:ff), and that the ethertype is known). A data packet that fails these simple validity checks may be discarded. For data packets that pass the simple validity checks, the values of certain Ethernet header fields may be maintained in metadata that accompanies the data packet as it traverses other modules of router FPGA 500.
[118] As discussed above, MAC processing module 508 may validate portions of the data packets in a cut-through or pipelined fashion. For example, processing of the first 176 bits of an Ethernet frame and reception of subsequent bits of the Ethernet frame may occur in parallel.
[119] When a data packet is discarded, information related to the data packet (e.g., at least some of its header values) may be written to log memory 516. This unit of memory may also be DRAM, for example. Periodically or from time to time, log memory 516 may be rotated or flushed, and/or may operate in a first-in-first-out fashion where the oldest log entry is overwritten by the newest entry when log memory 516 is full. Further, a rejection or error message may be transmitted to the sending client device (not shown in Figure 5).
[120] Parsing and lookup module 510 may receive the data packet from MAC processing module 508 and parse the fields of the IP, TCP, and/or UDP headers, as well as the application-layer data, in order to identify the values of certain key fields. These values can be used to determine where to route the data packet. For example, parsing and lookup module 510 may be configured to identify the source IP address and the name of the security in each data packet, and then look up this tuple of values in forwarding table 512.
[121] Forwarding table 512 may include entries mapping the values of one or more key fields to a destination Ethernet address. When the key fields of a data packet match the values in such an entry, the data packet is routed to the specified destination Ethernet address.
Source IP address    Security    Destination Ethernet address
10.12.171.205        5565        40:74:e0:28:2c:d3
10.12.171.205        1704        40:74:e0:aa:f2:45
10.12.171.206        5565        40:74:e0:28:2c:d3
10.12.171.206        498         40:74:e0:5e:22:ea
(default)            (any)       40:74:e0:28:2c:d3
Table 2
[122] Table 2 provides an example of forwarding table 512.
In it, a first entry indicates that data packets with a source IP address of 10.12.171.205 and a proposed transaction involving a security numbered 5565 (a reference that maps to the security's symbol) should be routed to destination Ethernet address 40:74:e0:28:2c:d3. Likewise, a second entry indicates that data packets with a source IP address of 10.12.171.205 and a proposed transaction involving a security numbered 1704 should be routed to destination Ethernet address 40:74:e0:aa:f2:45. A third entry indicates that data packets with a source IP address of 10.12.171.206 and a proposed transaction involving a security numbered 5565 should be routed to destination Ethernet address 40:74:e0:28:2c:d3. A fourth entry indicates that data packets with a source IP address of 10.12.171.206 and a proposed transaction involving a security numbered 498 should be routed to destination Ethernet address 40:74:e0:5e:22:ea. A fifth entry is a default entry, indicating that any data packet with values of key fields not matching any other entry in the forwarding table should be routed to destination Ethernet address 40:74:e0:28:2c:d3. In some cases, a default entry might not be present, and data packets not matching any entry are discarded.
[123] As will be described below, this forwarding table structure allows various matching engines to be configured to support slices with arbitrary combinations of originating client devices and securities. Nonetheless, other key fields could be used. Thus, forwarding table 512 could be more or less complicated than what is shown in Table 2. In some embodiments, routing can be based on only the symbol, and therefore the forwarding table lookup could be based on a hash of the security number to the destination Ethernet address so that binary lookups are avoided. Other possibilities exist.
[124] In an alternative embodiment, the proposed transaction could be encoded in one or more custom VLAN tags. Such an encoding may be based on or conform to the embodiments described in U.S. Patent No. 10,880,211, issued December 29, 2020, and hereby incorporated by reference in its entirety. The latter approach would result in the transaction being provided to parsing and lookup module 510 during Ethernet header reception, so a routing decision can be made in parallel to the rest of the Ethernet frame being received. This can result in the routing process completing several dozen nanoseconds earlier than if the proposed transaction is read from the application-layer data.
[125] After parsing and lookup module 510 completes its operations on the bits of a data packet, the data packet may be passed to MAC processing module 514. This module determines the source and destination Ethernet addresses (and ethertype) for an Ethernet header, and then encapsulates the data packet in this new Ethernet header. The source Ethernet address may be that of router FPGA 500, and the destination Ethernet address may be provided by the result of looking up the key fields in forwarding table 512. The resulting Ethernet frame is provided to network interface 506, which is configured to carry out transmission of the frame by way of network 502. In some cases, the same network interface may be used for both receiving and transmitting Ethernet frames.
[126] In some embodiments, once the Ethernet trailer of a frame is received, the frame check sequence (FCS) may be calculated for the frame (not shown). If the FCS check fails, then the data packet is discarded prior to transmission.
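As an illustration of the key-field lookup described in paragraphs [120] through [123], the following Python sketch models forwarding table 512 using the example entries of Table 2, including the default entry. The dictionary representation is a modeling convenience; in the embodiments the table is held in FPGA memory and searched by parsing and lookup circuitry.

```python
# (source IP address, security number) -> destination Ethernet address, per Table 2.
FORWARDING_TABLE = {
    ("10.12.171.205", 5565): "40:74:e0:28:2c:d3",
    ("10.12.171.205", 1704): "40:74:e0:aa:f2:45",
    ("10.12.171.206", 5565): "40:74:e0:28:2c:d3",
    ("10.12.171.206", 498):  "40:74:e0:5e:22:ea",
}
DEFAULT_DESTINATION = "40:74:e0:28:2c:d3"   # the fifth (default) entry

def route(source_ip: str, security: int) -> str:
    """Return the destination Ethernet address for a packet's key fields.
    Falls back to the default entry when no specific entry matches."""
    return FORWARDING_TABLE.get((source_ip, security), DEFAULT_DESTINATION)

print(route("10.12.171.205", 1704))   # 40:74:e0:aa:f2:45
print(route("10.99.0.7", 5565))       # default entry: 40:74:e0:28:2c:d3
```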
[127] Advantageously, router FPGA 500 may refrain from performing validity checks on the IP, TCP, or UDP headers of data packets, as well as validity checks on the application-layer data of data packets. Those checks have already been performed by gateway 200, and thus the processing of router FPGA 500 can be simplified with their omission.
[128] The rear of router FPGA 500 contains an interface to PCIe bus 422, which may connect to a host system. Router FPGA 500 may obtain power from PCIe bus 422, and use signaling capabilities of PCIe bus 422 to manage redundancy and failure procedures. Router FPGA 500 may also receive firmware updates by way of PCIe bus 422, as well as updates to forwarding table 512.
C. Matching Engine FPGA
[129] Figure 6 depicts an example matching engine FPGA 600, configured to perform operations of matching engine 204. Matching engine FPGA 600 is shown containing network interfaces, protocol validation logic, memory, and protocol processing logic. As discussed, however, matching engine FPGA 600 may contain additional components. Further, Figure 6 depicts a logical schematic and may not be drawn to scale.
[130] The front of matching engine FPGA 600 contains network interface 602. This interface may include one or more physical ports (e.g., gigabit Ethernet, 10 gigabit Ethernet, or 25 gigabit Ethernet), though only one port is shown. Network interface 602 connects to network 502 and from it receives incoming data packets in the form of Ethernet frames. These data packets may have been transmitted by router FPGA 500.
[131] Network interface 602 may include a small form-factor pluggable (SFP) transceiver module, a physical layer (PHY) module, and a physical coding sublayer (PCS) module. In some embodiments, an SFP+, SFP28, QSFP+, or QSFP28 transceiver could be used in place of the SFP transceiver module.
[132] Data packets incoming to exchange 106 may traverse matching engine FPGA 600 in accordance with the arrows of Figure 6. Not unlike MAC processing module 508, MAC processing module 604 may act on the Ethernet headers of these data packets. Particularly, MAC processing module 604 may perform a simple validity check of these headers (e.g., that the destination address is one that is assigned to matching engine FPGA 600, that the source address is not clearly improper (e.g., a source address of ff:ff:ff:ff:ff:ff), and that the ethertype is known). A data packet that fails these simple validity checks may be discarded. For data packets that pass the simple validity checks, the values of certain Ethernet header fields may be maintained in metadata that accompanies the data packet as it traverses other modules of matching engine FPGA 600.
[133] As discussed above, MAC processing module 604 may validate portions of the data packets in a cut-through or pipelined fashion. For example, processing of the first 176 bits of an Ethernet frame and reception of subsequent bits of the Ethernet frame may occur in parallel.
[134] Unlike gateway FPGA 400, matching engine FPGA 600 may validate proposed transactions encoded in data packets not just based on the information in the data packets themselves, but also based on state maintained in state memory 608. This unit of memory may also be DRAM, for example, and may store information from past transactions indexed based on any combination of fields in the data packet. For example, state may be recorded per source IP address (i.e., per entity), per security, or some combination of both.
In some cases, the state may record, for one or more securities, the total monetary value of the proposed and/or fulfilled transactions in which the entity associated with a particular source IP address was involved.
[135] Policy memory 610 contains policies, possibly in the form of rules, that control how and whether transactions are fulfilled. This unit of memory may also be DRAM, for example, and may store a table of such rules. Example rules may indicate that a particular entity or entities can only be involved in transactions with a total per-hour value of no more than $100,000 for a given security and no more than $1,000,000 over all securities. Other example rules may indicate that the particular entity or entities can only be involved in transactions with a total per-day value of no more than $1,000,000 for the given security and no more than $5,000,000 over all securities.
[136] Transaction validation module 606 may receive the data packet from MAC processing module 604 and parse the fields of the IP, TCP, and/or UDP headers, as well as the application-layer data, in order to identify the values of certain key fields. These values can be used, possibly in conjunction with information in state memory 608 and policy memory 610, to validate the proposed transaction. To that point, transaction validation module 606 may be configured to apply one or more policies from policy memory 610. Continuing with the example above, these policies may cause transaction validation module 606 to identify the source IP address and the name of the security in each data packet. These policies may also cause transaction validation module 606 to obtain the current running totals of fulfilled transaction values associated with these IP addresses and securities from state memory 608. If a proposed transaction would cause a total specified in a policy to be exceeded, transaction validation module 606 may discard the data packet containing this proposed transaction.
[137] To be clear, transaction validation module 606 may include any necessary IP, TCP, and/or UDP protocol processing and validation checks of the associated headers. In some embodiments, Internet Control Message Protocol (ICMP), Internet Group Message Protocol (IGMP), and other protocol processing may be supported as well.
[138] When a data packet is discarded (e.g., by MAC processing module 604, transaction validation module 606, or some other module), information related to the data packet (e.g., at least some of its header values) may be written to log memory 612. This unit of memory may also be DRAM, for example. Periodically or from time to time, log memory 612 may be rotated or flushed, and/or may operate in a first-in-first-out fashion where the oldest log entry is overwritten by the newest entry when log memory 612 is full. Further, a rejection or error message may be transmitted to the sending client device (not shown in Figure 6).
[139] In an alternative embodiment, the application-layer data containing the proposed transaction may be fed bitwise through these components prior to any FCS validation. In this manner, the proposed transaction is received as quickly as possible (at line rate) by transaction validation module 606. Alternatively, the proposed transaction could be encoded in one or more custom VLAN tags. Such an encoding may be based on or conform to the embodiments described in U.S. Patent No. 10,880,211, issued December 29, 2020, and hereby incorporated by reference in its entirety. The latter approach would result in the transaction being provided to transaction validation module 606 during Ethernet header reception, so the proposed transaction can be validated in parallel to the rest of the Ethernet frame being received. This can result in the validation completing several dozen nanoseconds earlier than if the proposed transaction is read from the application-layer data.
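The following Python sketch illustrates the kind of stateful, policy-based check described in paragraphs [134] through [136]: a running total of fulfilled transaction value, kept per entity and security, is compared against a per-period cap before a proposal is accepted. The dollar figures mirror the per-hour example in paragraph [135], but the data layout is an assumption; the embodiments keep this information in state memory 608 and policy memory 610.

```python
from collections import defaultdict

# Per-hour caps from paragraph [135] (illustrative policy table).
POLICY = {
    "per_security_hourly_cap": 100_000,
    "all_securities_hourly_cap": 1_000_000,
}

# Running totals of fulfilled transaction value within the current hour,
# keyed the way state memory 608 might be indexed: per entity and per security.
totals_by_security = defaultdict(float)   # (source_ip, security) -> dollars
totals_all = defaultdict(float)           # source_ip -> dollars

def validate_proposal(source_ip, security, qty, price):
    """Return True if the proposal stays within policy; otherwise it would be discarded."""
    value = qty * price
    if totals_by_security[(source_ip, security)] + value > POLICY["per_security_hourly_cap"]:
        return False
    if totals_all[source_ip] + value > POLICY["all_securities_hourly_cap"]:
        return False
    return True

def record_fulfillment(source_ip, security, qty, price):
    """Update state after a transaction is fulfilled."""
    totals_by_security[(source_ip, security)] += qty * price
    totals_all[source_ip] += qty * price

record_fulfillment("10.154.34.172", 5565, 2500, 32.40)        # $81,000 already traded this hour
print(validate_proposal("10.154.34.172", 5565, 1000, 32.40))  # False: would exceed $100,000 cap
print(validate_proposal("10.154.34.172", 1704, 1000, 32.40))  # True: different security
```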
[140] As noted above, the processing of matching engine 204 may be virtualized into m logically distinct matching engines, referred to as slices. Each of these slices may execute a matching algorithm on a disjoint subset of the incoming proposed transactions. These subsets may be defined by the securities they involve, their source IP address (originating entity), the size of the proposed transaction in terms of number of securities or total value thereof, and/or combinations of these or other values. For instance, multiple matching engines, alternatively known as “Pools”, each potentially being private (“Private”) or public (“Public”), or dark (where newly entered bids or offers are not published via market data distribution mechanisms, hereinafter “Dark Pool”) or lit (where newly entered bids or offers are published via market data distribution mechanisms, hereinafter “Lit Pool”), may be implemented by way of these slices.
[141] This has the advantage of being able to implement multiple Pools in parallel. As a consequence, proposed transactions that have been pending in one Pool for some period of time can be moved from that Pool to another simply by updating pending transaction memory 620 in 1-2 clock cycles. Previous techniques would have required cancelling and resending the proposed transaction, or transmitting a representation of the transaction between physical devices or across networks.
[142] The embodiments shown in Figure 6 involve one or more slices being supported on the same matching engine card. But as noted above, exchange 106 may distribute slices across multiple matching engine cards as well as within such cards. To be clear, slices can be implemented on the card level (where each matching engine card is a separate slice), and/or multiple slices can be implemented on one or more of these cards.
Security     Slice
5565         1
1704         1
498          2
(default)    3
Table 3
[143] Regardless, security / slice table 616 contains mappings of securities to slices for matching engine FPGA 600. Table 3 depicts an example of such a table, where the security with the number 5565 is handled by slice 1, the security with the number 1704 is also handled by slice 1, and the security with the number 498 is handled by slice 2. Any other security is handled by slice 3, in accordance with the default rule. As described above, more than just one field in the incoming data packets may be used to identify the slice.
[144] Security lookup module 614 may receive the data packet from transaction validation module 606 and parse the fields of the application-layer data in order to identify the security in the proposed transaction. Security lookup module 614 may then look up the identified security in security / slice table 616. Any error experienced by security lookup module 614 that prevents the slice from being identified may be written to log memory 612 and may result in the proposed transaction being rejected.
[145] Once the slice handling the security is identified, receipt confirmation module 618 may transmit a confirmation message to the originating entity. This message may indicate that the proposed transaction in the data packet has been received but not yet fulfilled.
Such a message may use the source IP address of the data packet as its destination IP address, and may be transmitted by way of network interface 602 or another network interface (not shown).
[146] Pending transaction memory 620 stores representations of validated, unmatched proposed transactions. This unit of memory may also be DRAM, for example. These proposed transactions may be stored in various ways. In some embodiments, all proposed transactions may be stored in a single table, where entries of the table specify the proposed transaction (e.g., an identifier of the security, a side, a number of units of the security, a type of order, and a proposed price) as well as the slice assigned to handle the transaction. In other embodiments, there may be a separate table for each slice, each of these tables containing entries that specify proposed transactions.
[147] Matching module 622 may read the table or tables in pending transaction memory 620 and execute one or more matching algorithms on the information therein. Matching module 622 may be configured to execute separate matching algorithms per slice. These algorithms may be price/time based or pro-rata based, for example. When a pair of proposed transactions is matched, the result is a transaction being fulfilled in accordance with the matching algorithm. In some cases, the matching algorithm may be subject to a random delay of 50-300 nanoseconds, for example. This can be used to defeat certain trading strategies that rely on predictive assessment, inference, or reverse inference techniques, among others, to gain unfair insights as to when matches may occur.
[148] When such a transaction fulfillment occurs, information related to the transaction (e.g., an identifier of the security, a number of units of the security, a price, and/or the entities involved) may be written to fulfilled transaction log memory 624. This unit of memory may also be DRAM, for example. Periodically or from time to time, entries from fulfilled transaction log memory 624 may be transmitted to another component or device for long-term storage.
[149] Also, when a transaction is fulfilled, fulfillment confirmation module 626 may transmit fulfillment confirmation messages to the originating entities. These messages may indicate that the respective proposed transactions have been fulfilled. Such a message may be transmitted by way of network interface 602 or another network interface (not shown).
[150] In some embodiments, incoming proposed transactions may be routed directly to matching module 622. Receipt of such a proposed transaction may occur in parallel to processing by other modules of matching engine FPGA 600, and may trigger the execution of a matching algorithm for the proposed transaction and any other proposed transactions stored in pending transaction memory 620. In some cases, the fields of the application-layer data may be ordered so that those that are used to determine the validity of the proposed transaction are placed earlier in the ordering. These may be the side, symbol, quantity, and price. These fields may be followed by less important fields that are not needed for matching. Thus, as the application-layer data is received, the bits of these fields can be validated and/or provided to matching module 622 while the rest of the proposed transaction is being received or otherwise processed.
[151] The rear of matching engine FPGA 600 contains an interface to PCIe bus 422, which may connect to a host system. Matching engine FPGA 600 may obtain power from PCIe bus 422, and use signaling capabilities of PCIe bus 422 to manage redundancy and failure procedures.
Matching engine FPGA 600 may also receive firmware updates by way of PCIe bus 422. In some embodiments, copies of proposed transactions and/or fulfilled transactions may be provided to external components by way of PCIe bus 422. These external components may carry out any back-office processing and other operational aspects of exchange 106 on this data. For example, a copy of the received Ethernet frame may be transmitted from MAC processing module 604 to PCIe bus 422 (not shown).
V. Technical Advantages
[152] The embodiments herein exhibit numerous technical advantages over conventional exchange designs. Notably, these embodiments can be arranged to operate at line speed, up to 10 gigabits per second or more. Thus, they are high-capacity. They are also low-latency, as the path for incoming data packets is designed so that these data packets are processed as they arrive, with some of this processing perhaps occurring before the entire packet is received.
[153] Further, these embodiments are more secure than traditional techniques. Finding network security exploits in custom hardware is more difficult than doing so in widely-deployed software-based network protocol stacks. Further, exploits that rely on making changes to code on the attacked device (e.g., by way of buffer overflow or overwriting of a program segment or memory) are virtually impossible to execute on custom hardware.
[154] Moreover, the use of gateways at the perimeter of the exchange to filter out all data packets except those that are demonstrably from a known source and contain a valid transaction is beneficial. This architecture prevents many types of distributed denial of service (DDOS) attacks because the filtering can occur at line speed. This protects the interior of the exchange (e.g., the router(s) and the matching engine(s)) from inappropriate activity, whether or not that activity was intended. Further, in the unlikely situation that a gateway is overwhelmed by incoming data packets, the gateway will fail, causing the path to the interior of the exchange to also fail. Thus, the interior of the exchange remains protected. Nonetheless, the gateway, router, and matching engine components described herein could be implemented together on the same card, or distributed across two cards rather than three.
[155] One way in which line speed performance can be maintained is by operating the network interface transceivers (i.e., the SFP and/or PHY modules) at double their specified rate. Thus, a 10 gigabit per second transceiver is overclocked to 20 gigabits per second. This can serve to accommodate bursts of incoming data packets.
[156] Further, the PCS module (or MAC processing module) clock counter can be used to generate timestamps when an incoming packet arrives at any of the exchange components (i.e., gateway, router, and/or matching engine), and then tag these incoming packets with their timestamps. The timestamps may follow the data packets through the path of each component as metadata, and/or be logged to memory.
[157] Generating timestamps in this fashion allows for very accurate timestamping of proposed transactions. For instance, if a 10 gigabit network interface is operating at 322.26 MHz, each clock cycle is 3.1 nanoseconds. This is much more accurate than using the Precision Time Protocol (PTP), which can be subject to significant jitter. Such timestamps can then be used to provide highly-accurate timing measurements for each step of exchange processing, for example, as well as to be used in price/time matching algorithms.
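The arithmetic behind the clock-counter timestamps of paragraphs [156] and [157] can be summarized with a small Python sketch: at a 322.26 MHz PCS clock, each counter tick corresponds to roughly 3.1 ns, so a free-running cycle counter can be converted into a nanosecond timestamp and attached to an arriving packet as metadata. The function names and metadata layout are assumptions for illustration.

```python
CLOCK_HZ = 322.26e6                  # PCS clock rate from paragraph [157]
NS_PER_CYCLE = 1e9 / CLOCK_HZ        # ~3.103 ns per clock cycle

def cycles_to_ns(cycle_count: int) -> float:
    """Convert a free-running clock-counter value to a nanosecond timestamp."""
    return cycle_count * NS_PER_CYCLE

def tag_packet(packet: bytes, cycle_count: int) -> dict:
    """Model of attaching an arrival timestamp to a packet as metadata."""
    return {"arrival_ns": cycles_to_ns(cycle_count), "frame": packet}

print(round(NS_PER_CYCLE, 3))                          # 3.103
print(tag_packet(b"\x00" * 64, 1_000)["arrival_ns"])   # ~3103.1 ns after counter reset
```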
[158] A further set of advantages has to do with capacity management. When multiple matching engine cards are used, there may be a number of securities assigned to each card. If any particular security or set of securities is undergoing an unusually large volume of trading, this could result in the load across matching engine cards becoming unbalanced. When this happens, a security from a more heavily-loaded card can be moved to a less heavily-loaded card. For example, transactions involving the security could be paused on its current card. Then, the order queue and/or set of proposed transactions involving the security may be transmitted from the current card to a newly-assigned card that is less populated. One or more routers may be updated to reflect this change (i.e., their forwarding tables are amended to direct transactions involving the security to the newly-assigned card). Then, trading of the security resumes on the newly-assigned card.
[159] Additionally, the hardware components defined herein can be used to facilitate the matching of any type of transactions, not just the security trading examples provided herein. For example, this architecture could facilitate machine-to-machine arbitrage of computing resources, where proposed transactions indicate an extent of processing, memory, and/or network resources available or offered. Other examples exist.
VI. Example Operations
[160] Figure 7 is a flow chart illustrating an example embodiment. The process illustrated by Figure 7 may be carried out by the exchange components described herein. However, the process can be carried out by other types of devices or device subsystems. The embodiments of Figure 7 may be simplified by the removal of any one or more of the features shown therein. Further, these embodiments may be combined with features, aspects, and/or implementations of any of the previous figures or otherwise described herein.
[161] Block 700 may involve receiving, by way of a first network interface of an FPGA based gateway, data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects. The transaction subjects may be securities or other items of value and/or resources.
[162] Block 702 may involve validating, by way of a sequence of validation logic circuitry of the FPGA based gateway, one or more headers or application-layer of the data packets in accordance with filter rules, wherein at least one of the sequence of validation logic circuitry is configured to validate a header of a respective data packet while another portion of the respective data packet is being received by the first network interface, and wherein the validation logic circuitry is further configured to discard any data packets that do not conform to the filter rules.
[163] Block 704 may involve receiving, by way of a second network interface of an FPGA based router, the data packets that were validated from the FPGA based gateway.
[164] Block 706 may involve comparing, by way of parsing and lookup circuitry of the FPGA based router, header field or application-layer field values in the data packets to those in a forwarding table with entries mapping the header field or application-layer field values to destination addresses, wherein the comparing determines one of the destination addresses for each of the data packets.
[165] Block 708 may involve receiving, by way of a third network interface of an FPGA based matching engine, the data packets from the FPGA based router.

[166] Block 710 may involve validating, by way of transaction validation circuitry of the FPGA based matching engine, the proposed transactions based on their related information from state memory and policies to be applied to the proposed transactions, wherein the transaction validation circuitry is further configured to discard any data packets containing proposed transactions that do not conform to the policies.

[167] Block 712 may involve matching, by way of matching algorithm circuitry of the FPGA based matching engine, pairs of pending transactions according to pre-determined criteria, wherein the pending transactions are proposed transactions that were validated and stored in pending transaction memory.

[168] In some embodiments, the sequence of validation logic circuitry is configured to validate Ethernet headers, IP headers, TCP or UDP headers, and the application-layer of the data packets as bits from these headers arrive at the FPGA based gateway.

[169] In some embodiments, the filter rules identify field values of the one or more headers or application-layer of the data packets that are blacklisted, wherein the sequence of validation logic circuitry is configured to discard the data packets that match the field values that are blacklisted.

[170] In some embodiments, the filter rules identify field values of the one or more headers or application-layer of the data packets that are whitelisted, wherein the sequence of validation logic circuitry is configured to discard the data packets that do not match the field values that are whitelisted.

[171] In some embodiments, the FPGA based gateway also includes log memory, wherein the sequence of validation logic circuitry is configured to write information from the data packets that fail validation into the log memory.

[172] In some embodiments, the FPGA based gateway is further configured to replace Ethernet headers of the data packets that were validated with new Ethernet headers containing destination Ethernet addresses specifying the FPGA based router.

[173] In some embodiments, the header field or application-layer field values in the forwarding table indicate the transaction subjects of the data packets, wherein the parsing and lookup circuitry is also configured to compare the transaction subjects to those in the forwarding table.

[174] In some embodiments, the header field or application-layer field values in the forwarding table indicate a combination of a source identifier and the transaction subjects of the data packets, wherein the parsing and lookup circuitry is also configured to compare the combination of the source identifier and transaction subjects to those in the forwarding table.

[175] In some embodiments, the FPGA based router includes a bus connection to a host system, wherein the FPGA based router is also configured to receive updates to the forwarding table by way of the bus connection.

[176] In some embodiments, the FPGA based router is further configured to replace the destination addresses of the data packets that were determined by way of the forwarding table with new destination Ethernet addresses specifying the FPGA based matching engine.
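The blacklist, whitelist, and logging behavior described in paragraphs [169] through [171] above can be pictured with the following sketch. The field names, rule values, and software form are assumptions introduced only for illustration and are not the claimed validation logic circuitry.

```python
# Minimal sketch (illustrative only) of filter-rule handling per [169]-[171]:
# blacklisted field values cause a packet to be discarded, whitelist rules
# discard anything that does not match, and packets failing validation are
# written to a log. The field names below are hypothetical placeholders for
# header or application-layer values.

def validate_packet(fields: dict,
                    blacklist: dict,
                    whitelist: dict,
                    log: list) -> bool:
    """Return True if the packet passes the filter rules; otherwise log it."""
    for name, bad_values in blacklist.items():
        if fields.get(name) in bad_values:
            log.append(("blacklisted", name, fields))
            return False
    for name, good_values in whitelist.items():
        if fields.get(name) not in good_values:
            log.append(("not-whitelisted", name, fields))
            return False
    return True

# Example usage with hypothetical rules:
log = []
blacklist = {"ip_src": {"203.0.113.9"}}
whitelist = {"tcp_dst_port": {9100}}
ok = validate_packet({"ip_src": "198.51.100.7", "tcp_dst_port": 9100},
                     blacklist, whitelist, log)
print(ok, log)        # True, []
```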
[177] In some embodiments, the state memory contains information related to previously fulfilled transactions involving transaction subjects or source identifiers in common with those of at least some of the data packets, wherein the source identifiers specify entities originating the proposed transactions.

[178] In some embodiments, the policies to be applied to the proposed transactions are formulated as rules, wherein the rules specify limits to total amounts of certain transaction subjects that can be fulfilled within a predetermined period of time.

[179] In some embodiments, the policies to be applied to the proposed transactions are formulated as rules, wherein the rules specify limits to total amounts of certain transaction subjects per particular instances of the source identifiers that can be fulfilled within a predetermined period of time.

[180] In some embodiments, the FPGA based matching engine also includes log memory, wherein the transaction validation circuitry is configured to write information from the data packets that fail validation into the log memory.

[181] In some embodiments, parts of the FPGA based matching engine support virtualization into slices, wherein the pending transaction memory stores proposed transactions that were validated with their associated slices, and wherein the matching algorithm circuitry is further configured to only match pairs of pending transactions with a common associated slice.

[182] In some embodiments, the FPGA based matching engine also includes lookup table memory configured with mappings of the transaction subjects to the slices or with combinations of transaction subjects and source identifiers to the slices, wherein the source identifiers specify entities originating the proposed transactions.

[183] In some embodiments, the FPGA based matching engine also includes lookup circuitry configured to retrieve the slices associated with data packets from the lookup table memory and provide the slices retrieved to the pending transaction memory or the matching algorithm circuitry.

[184] In some embodiments, the FPGA based matching engine is also configured to transmit receipt confirmations for the proposed transactions that were validated but have not yet been matched.

[185] In some embodiments, the FPGA based matching engine is also configured to transmit fulfilment confirmations for the proposed transactions that were matched.

[186] Figure 8 is a flow chart illustrating an example embodiment. The process illustrated by Figure 8 may be carried out by the exchange components described herein. However, the process can be carried out by other types of devices or device subsystems. The embodiments of Figure 8 may be simplified by the removal of any one or more of the features shown therein. Further, these embodiments may be combined with features, aspects, and/or implementations of any of the previous figures or otherwise described herein.

[187] Block 800 may involve receiving, by way of a network interface of a field programmable gate array (FPGA) based gateway, data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects.
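One way to picture the rules of paragraphs [178] and [179] above is as a rolling-window limit on the total amount of a transaction subject (optionally keyed per source identifier) fulfilled within a predetermined period. The sketch below is a software approximation with assumed limits and data structures, not the claimed policy memory or transaction validation circuitry.

```python
# Minimal sketch (illustrative only) of the policy rules in [178]-[179]:
# discard proposed transactions that would exceed a maximum amount of a
# given subject fulfilled within a predetermined time window. The keys,
# limits, and window are hypothetical.

from collections import deque

class RollingLimit:
    """Allow at most `max_amount` of a key (subject, or (source, subject))
    to be fulfilled within the trailing `window_s` seconds."""

    def __init__(self, max_amount: float, window_s: float):
        self.max_amount = max_amount
        self.window_s = window_s
        self.history = {}   # key -> deque of (time, amount)

    def allows(self, key, amount: float, now: float) -> bool:
        q = self.history.setdefault(key, deque())
        while q and now - q[0][0] > self.window_s:      # drop expired entries
            q.popleft()
        total = sum(a for _, a in q)
        if total + amount > self.max_amount:
            return False                                 # violates the policy
        q.append((now, amount))
        return True

# Example: at most 10,000 units of subject "XYZ" per source within 60 s.
limit = RollingLimit(max_amount=10_000, window_s=60.0)
print(limit.allows(("FIRM-A", "XYZ"), 6_000, now=0.0))   # True
print(limit.allows(("FIRM-A", "XYZ"), 6_000, now=1.0))   # False, over limit
```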
[188] Block 802 may involve validating, by way of a sequence of validation logic circuitry of the FPGA based gateway, one or more headers or application-layer of the data packets in accordance with filter rules, wherein at least one of the sequence of validation logic circuitry is configured to validate a header of a respective data packet while another portion of the respective data packet is being received by the network interface, and wherein the validation logic circuitry is further configured to discard any data packets that do not conform to the filter rules.

[189] In some embodiments, the sequence of validation logic circuitry is configured to validate Ethernet headers, IP headers, TCP or UDP headers, and the application-layer of the data packets as bits from these headers arrive at the FPGA based gateway.

[190] In some embodiments, the filter rules identify field values of the one or more headers or application-layer of the data packets that are blacklisted, wherein the sequence of validation logic circuitry is configured to discard the data packets that match the field values that are blacklisted.

[191] In some embodiments, the filter rules identify field values of the one or more headers or application-layer of the data packets that are whitelisted, wherein the sequence of validation logic circuitry is configured to discard the data packets that do not match the field values that are whitelisted.

[192] In some embodiments, the FPGA based gateway also includes log memory, wherein the sequence of validation logic circuitry is configured to write information from the data packets that fail validation into the log memory.

[193] In some embodiments, the FPGA based gateway is further configured to replace Ethernet headers of the data packets that were validated with new Ethernet headers containing destination Ethernet addresses specifying an FPGA based router and to transmit these data packets to the FPGA based router.

[194] Figure 9 is a flow chart illustrating an example embodiment. The process illustrated by Figure 9 may be carried out by the exchange components described herein. However, the process can be carried out by other types of devices or device subsystems. The embodiments of Figure 9 may be simplified by the removal of any one or more of the features shown therein. Further, these embodiments may be combined with features, aspects, and/or implementations of any of the previous figures or otherwise described herein.

[195] Block 900 may involve receiving, by way of a network interface of a field programmable gate array (FPGA) based router, data packets that were validated by an FPGA based gateway.

[196] Block 902 may involve comparing, by way of parsing and lookup circuitry of the FPGA based router, header field or application-layer field values in the data packets to those in a forwarding table with entries mapping the header field or application-layer field values to destination addresses, wherein the comparing determines one of the destination addresses for each of the data packets.

[197] In some embodiments, the header field or application-layer field values in the forwarding table indicate the transaction subjects of the data packets, wherein the parsing and lookup circuitry is also configured to compare the transaction subjects to those in the forwarding table.
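The header replacement of paragraph [193] amounts to stripping the 14-byte Ethernet header of a validated frame and prepending a new one whose destination address specifies the router. The following sketch uses hypothetical MAC addresses and a software representation purely for illustration; it is not the claimed gateway implementation.

```python
# Minimal sketch (illustrative only) of the Ethernet header replacement in
# [193]: the original EtherType and payload are preserved, while a new
# destination address (the FPGA based router) and source address (the
# gateway) are written. All addresses below are hypothetical.

import struct

def replace_ethernet_header(frame: bytes, new_dst: bytes, new_src: bytes) -> bytes:
    """Replace the 14-byte Ethernet header of `frame`, preserving the
    original EtherType and payload."""
    if len(frame) < 14:
        raise ValueError("frame too short to contain an Ethernet header")
    ethertype = frame[12:14]                  # keep the original EtherType
    payload = frame[14:]
    return new_dst + new_src + ethertype + payload

# Example usage with hypothetical addresses:
gateway_mac = bytes.fromhex("020000000001")
router_mac = bytes.fromhex("020000000002")
original = bytes.fromhex("ffffffffffff") + bytes.fromhex("02000000aaaa") \
           + struct.pack("!H", 0x0800) + b"payload"
rewritten = replace_ethernet_header(original, router_mac, gateway_mac)
assert rewritten[0:6] == router_mac
```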
[198] In some embodiments, the header field or application-layer field values in the forwarding table indicate a combination of a source identifier and the transaction subjects of the data packets, wherein the parsing and lookup circuitry is also configured to compare the combination of the source identifier and transaction subjects to those in the forwarding table.

[199] In some embodiments, the FPGA based router includes a bus connection to a host system, wherein the FPGA based router is also configured to receive updates to the forwarding table by way of the bus connection.

[200] In some embodiments, the FPGA based router is further configured to replace the destination addresses of the data packets that were determined by way of the forwarding table with new destination Ethernet addresses specifying an FPGA based matching engine and to transmit these data packets to the FPGA based matching engine.

[201] Figure 10 is a flow chart illustrating an example embodiment. The process illustrated by Figure 10 may be carried out by the exchange components described herein. However, the process can be carried out by other types of devices or device subsystems. The embodiments of Figure 10 may be simplified by the removal of any one or more of the features shown therein. Further, these embodiments may be combined with features, aspects, and/or implementations of any of the previous figures or otherwise described herein.

[202] Block 1000 may involve receiving, by way of a network interface of a field programmable gate array (FPGA) based matching engine, data packets containing proposed transactions, wherein the proposed transactions are respectively associated with transaction subjects.

[203] Block 1002 may involve validating, by way of transaction validation circuitry of the FPGA based matching engine, the proposed transactions based on their related information from state memory and policies to be applied to the proposed transactions, wherein the transaction validation circuitry is further configured to discard any data packets containing proposed transactions that do not conform to the policies.

[204] Block 1004 may involve matching, by way of matching algorithm circuitry of the FPGA based matching engine, pairs of pending transactions according to pre-determined criteria, wherein the pending transactions are proposed transactions that were validated and stored in pending transaction memory.

[205] In some embodiments, the state memory contains information related to previously fulfilled transactions involving transaction subjects or source identifiers in common with those of at least some of the data packets, wherein the source identifiers specify entities originating the proposed transactions.

[206] In some embodiments, the policies to be applied to the proposed transactions are formulated as rules, wherein the rules specify limits to total amounts of certain transaction subjects that can be fulfilled within a predetermined period of time.

[207] In some embodiments, the policies to be applied to the proposed transactions are formulated as rules, wherein the rules specify limits to total amounts of certain transaction subjects per particular instances of the source identifiers that can be fulfilled within a predetermined period of time.

[208] In some embodiments, the FPGA based matching engine also contains log memory, wherein the transaction validation circuitry is configured to write information from the data packets that fail validation into the log memory.
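For block 1004, the pre-determined matching criteria are not limited to any particular algorithm, but the price/time matching mentioned earlier in this disclosure can serve as an example. The sketch below assumes simple one-unit orders and tie-breaking by timestamp, which are illustrative choices rather than the claimed matching algorithm circuitry.

```python
# Minimal sketch (illustrative only) of block 1004 using price/time priority:
# the best-priced order is considered first, with the earlier timestamp
# breaking ties. Order structures and quantities are simplified assumptions.

import heapq

def match_orders(buys, sells):
    """buys/sells are lists of (price, timestamp, order_id). Returns matched
    (buy_id, sell_id) pairs while the best buy price crosses the best sell."""
    buy_heap = [(-p, t, oid) for p, t, oid in buys]    # max price first
    sell_heap = [(p, t, oid) for p, t, oid in sells]   # min price first
    heapq.heapify(buy_heap)
    heapq.heapify(sell_heap)

    matches = []
    while buy_heap and sell_heap and -buy_heap[0][0] >= sell_heap[0][0]:
        _, _, buy_id = heapq.heappop(buy_heap)
        _, _, sell_id = heapq.heappop(sell_heap)
        matches.append((buy_id, sell_id))
    return matches

# Example: the buy at 101 crosses the sell at 100; the rest do not match.
print(match_orders([(101, 5, "B1"), (99, 6, "B2")],
                   [(100, 1, "S1"), (102, 2, "S2")]))   # [('B1', 'S1')]
```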
[209] In some embodiments, parts of the FPGA based matching engine support virtualization into slices, wherein the pending transaction memory stores proposed transactions that were validated with their associated slices, wherein the matching algorithm circuitry is further configured to only match pairs of pending transactions with a common associated slice.

[210] In some embodiments, the FPGA based matching engine also contains lookup table memory configured with mappings of the transaction subjects to the slices or with combinations of transaction subjects and source identifiers to the slices, wherein the source identifiers specify entities originating the proposed transactions.

[211] In some embodiments, the FPGA based matching engine also contains lookup circuitry configured to retrieve the slices associated with data packets from the lookup table memory and provide the slices retrieved to the pending transaction memory or the matching algorithm circuitry.

[212] In some embodiments, the FPGA based matching engine is also configured to transmit receipt confirmations for the proposed transactions that were validated but have not yet been matched.

[213] In some embodiments, the FPGA based matching engine is also configured to transmit fulfilment confirmations for the proposed transactions that were matched.

VII. Closing

[214] The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations can be made without departing from its scope, as will be apparent to those skilled in the art. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those described herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims.

[215] The above detailed description describes various features and operations of the disclosed systems, devices, and methods with reference to the accompanying figures. The example embodiments described herein and in the figures are not meant to be limiting. Other embodiments can be utilized, and other changes can be made, without departing from the scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations.

[216] With respect to any or all of the message flow diagrams, scenarios, and flow charts in the figures and as discussed herein, each step, block, and/or communication can represent a processing of information and/or a transmission of information in accordance with example embodiments. Alternative embodiments are included within the scope of these example embodiments. In these alternative embodiments, for example, operations described as steps, blocks, transmissions, communications, requests, responses, and/or messages can be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved. Further, more or fewer blocks and/or operations can be used with any of the message flow diagrams, scenarios, and flow charts discussed herein, and these message flow diagrams, scenarios, and flow charts can be combined with one another, in part or in whole.
[217] A step or block that represents a processing of information can correspond to circuitry that can be configured to perform the specific logical functions of a herein-described method or technique. Alternatively or additionally, a step or block that represents a processing of information can correspond to a module, a segment, or a portion of program code (including related data). The program code can include one or more instructions executable by a processor for implementing specific logical operations or actions in the method or technique. The program code and/or related data can be stored on any type of computer readable medium such as a storage device including RAM, a disk drive, a solid-state drive, or another storage medium.

[218] The computer readable medium can also include non-transitory computer readable media such as non-transitory computer readable media that store data for short periods of time like register memory and processor cache. The non-transitory computer readable media can further include non-transitory computer readable media that store program code and/or data for longer periods of time. Thus, the non-transitory computer readable media may include secondary or persistent long-term storage, like ROM, optical or magnetic disks, solid-state drives, or compact disc read only memory (CD-ROM), for example. The non-transitory computer readable media can also be any other volatile or non-volatile storage systems. A non-transitory computer readable medium can be considered a computer readable storage medium, for example, or a tangible storage device.

[219] Moreover, a step or block that represents one or more information transmissions can correspond to information transmissions between software and/or hardware modules in the same physical device. However, other information transmissions can be between software modules and/or hardware modules in different physical devices.

[220] The particular arrangements shown in the figures should not be viewed as limiting. It should be understood that other embodiments could include more or less of each element shown in a given figure. Further, some of the illustrated elements can be combined or omitted. Yet further, an example embodiment can include elements that are not illustrated in the figures.

[221] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting.