
Patent Searching and Data


Title:
BLOCKCHAIN EVENT PROCESSING DATABASE SYSTEM
Document Type and Number:
WIPO Patent Application WO/2021/046129
Kind Code:
A1
Abstract:
Provided herein is an event processing database system that stores the events in an easy-to-query database. The event processing systems herein are easily and securely scalable, employ the use of a flexible query language to filter a variety of event stream structures in various ways, and are capable of detecting combinations of multiple events.

Inventors:
KANG CHANGU (US)
Application Number:
PCT/US2020/049068
Publication Date:
March 11, 2021
Filing Date:
September 02, 2020
Assignee:
PLANARIA CORP (US)
International Classes:
A63B69/36; A61B5/11; H04L29/06
Foreign References:
US20170109735A12017-04-20
US20170287090A12017-10-05
US20160292672A12016-10-06
US20150082392A12015-03-19
US20170232300A12017-08-17
Attorney, Agent or Firm:
ASHUR, Dor (US)
Claims:
CLAIMS

WHAT IS CLAIMED IS:

1. A method for processing a blockchain, comprising:

(a) storing a plurality of blockchain transactions from said blockchain in an indexed database, wherein each of said plurality of blockchain transactions is associated with a transaction timestamp;

(b) generating a query comprising a filter condition and a corresponding filter timestamp, wherein said filter timestamp indicates a time when said filter condition was last executed;

(c) using said query to determine if one or more blockchain transactions in said plurality of blockchain transactions in said indexed database (1) satisfies said filter condition and (2) is associated with a transaction timestamp that is greater than said filter timestamp; and

(d) executing a task associated with said filter condition if said one or more blockchain transactions (1) satisfies said filter condition and (2) is associated with said transaction timestamp that is greater than said filter timestamp.

2. The method of claim 1, further comprising updating said filter timestamp.

3. The method of claim 1, wherein (a) comprises periodically polling said blockchain.

4. The method of claim 1, wherein (a) comprises subscribing to push notifications from said blockchain.

5. The method of claim 1, wherein (a) comprises implementing said blockchain on a peer-to-peer network and directly monitoring said peer-to-peer network.

6. The method of claim 1, wherein said indexed database is a document database, a relational database, or a graph database.

7. The method of claim 1, wherein said filter condition comprises transaction data in said blockchain transaction corresponding to a defined data pattern.

8. The method of claim 1, wherein said filter condition comprises said one or more blockchain transactions corresponding to a defined transaction pattern.

9. The method of claim 1, wherein said filter condition is associated with a frequency at which said filter condition is to be executed.

10. The method of claim 1, wherein said task comprises generating or broadcasting a new blockchain transaction.

11. The method of claim 1, wherein (d) comprises executing a plurality of tasks associated with said filter condition.

12. The method of claim 1, wherein the query comprises a streaming query.

13. The method of claim 1, wherein executing the task is parallelized to multiple machines.

14. The method of claim 1, wherein executing the task comprises queueing a function execution.

15. The method of claim 1, wherein the task is attached to a plurality of filters.

16. The method of claim 1, wherein the indexed database comprises a document database, a relational database, a graph database, a stream processing system, a key-value database, or any combination thereof.

17. A method for processing a transaction, comprising:

(a) storing a plurality of transactions in an indexed database, wherein each of said plurality of transactions is associated with a transaction timestamp;

(b) generating a query comprising a filter condition and a corresponding filter timestamp, wherein said filter timestamp indicates a time when said filter condition was last executed;

(c) using said query to determine if one or more transactions in said plurality of transactions in said indexed database (1) satisfies said filter condition and (2) is associated with a transaction timestamp that is greater than said filter timestamp; and

(d) executing a task associated with said filter condition if said one or more transactions (1) satisfies said filter condition and (2) is associated with said transaction timestamp that is greater than said filter timestamp.

18. The method of claim 17, further comprising updating said filter timestamp.

19. The method of claim 17, wherein (a) comprises periodically polling said blockchain.

20. The method of claim 17, wherein (a) comprises subscribing to push notifications from said blockchain.

21. The method of claim 17, wherein (a) comprises implementing said transactions on a peer-to-peer network and directly monitoring said peer-to-peer network.

22. The method of claim 17, wherein said indexed database is a document database, a relational database, or a graph database.

23. The method of claim 17, wherein said filter condition comprises transaction data in said transaction corresponding to a defined data pattern.

24. The method of claim 17, wherein said filter condition comprises said one or more transactions corresponding to a defined transaction pattern.

25. The method of claim 17, wherein said filter condition is associated with a frequency at which said filter condition is to be executed.

26. The method of claim 17, wherein said task comprises generating or broadcasting a new transaction.

27. The method of claim 17, wherein (d) comprises executing a plurality of tasks associated with said filter condition.

28. One or more non-transitory computer storage media storing instructions that are operable, when executed by one or more computers, to cause said one or more computers to perform operations comprising:

(a) storing a plurality of blockchain transactions from said blockchain in an indexed database, wherein each of said plurality of blockchain transactions is associated with a transaction timestamp;

(b) generating a query comprising a filter condition and a corresponding filter timestamp, wherein said filter timestamp indicates a time when said filter condition was last executed;

(c) using said query to determine if one or more blockchain transactions in said plurality of blockchain transactions in said indexed database (1) satisfies said filter condition and (2) is associated with a transaction timestamp that is greater than said filter timestamp; and

(d) executing a task associated with said filter condition if said one or more blockchain transactions (1) satisfies said filter condition and (2) is associated with said transaction timestamp that is greater than said filter timestamp.

29. The media of claim 28, wherein the operations further comprise updating said filter timestamp.

30. The media of claim 28, wherein (a) comprises periodically polling said blockchain.

31. The media of claim 28, wherein (a) comprises subscribing to push notifications from said blockchain.

32. The media of claim 28, wherein (a) comprises implementing said blockchain on a peer-to-peer network and directly monitoring said peer-to-peer network.

33. The media of claim 28, wherein said indexed database is a document database, a relational database, or a graph database.

34. The media of claim 28, wherein said filter condition comprises transaction data in said blockchain transaction corresponding to a defined data pattern.

35. The media of claim 28, wherein said filter condition comprises said one or more blockchain transactions corresponding to a defined transaction pattern.

36. The media of claim 28, wherein said filter condition is associated with a frequency at which said filter condition is to be executed.

37. The media of claim 28, wherein said task comprises generating or broadcasting a new blockchain transaction.

38. The media of claim 28, wherein (d) comprises executing a plurality of tasks associated with said filter condition.

39. The media of claim 28, wherein the query comprises a streaming query.

40. The media of claim 28, wherein executing the task is parallelized to multiple machines.

41. The media of claim 28, wherein executing the task comprises queueing a function execution.

42. The media of claim 28, wherein the task is attached to a plurality of filters.

43. The media of claim 28, wherein the indexed database comprises a document database, a relational database, a graph database, a stream processing system, a key-value database, or any combination thereof.

Description:
BLOCKCHAIN EVENT PROCESSING DATABASE SYSTEM

CROSS-REFERENCE

[0001] This application claims the benefit of U.S. Provisional Application No. 62/895,024, filed September 3, 2019, which is hereby incorporated herein by reference in its entirety.

BACKGROUND

[0002] A blockchain is a distributed system made up of a growing list of records, called blocks, that are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data. In addition to blocks, the transactions are also linked to one another using cryptography, ensuring a transparent, immutable audit trail of all types of transactions. In essence, a blockchain can be seen as transparent public data storage. Just like any system that deals with data, it may be useful to monitor a blockchain for new events and programmatically run tasks when certain events happen.

SUMMARY

[0003] Provided herein is an event processing database system that stores the events in an easy-to-query database. The event processing systems herein are easily and securely scalable, employ the use of a flexible query language to filter a variety of event stream structures in various ways, and are capable of detecting combinations of multiple events.

[0004] In some embodiments, unlike traditional blockchains that employ distributed and linked blocks, the blockchain systems described herein are centralized and indexed, enabling faster and easier access through searching and queries. Further, in some embodiments, contrary to classical blockchains, the queried blocks preserve immutability while providing improved and efficient means of search.

[0005] One aspect provided herein is a method for processing a blockchain, comprising: (a) storing a plurality of blockchain transactions from said blockchain in an indexed database, wherein each of said plurality of blockchain transactions is associated with a transaction timestamp; (b) generating a query comprising a filter condition and a corresponding filter timestamp, wherein said filter timestamp indicates a time when said filter condition was last executed; (c) using said query to determine if one or more blockchain transactions in said plurality of blockchain transactions in said indexed database (1) satisfies said filter condition and (2) is associated with a transaction timestamp that is greater than said filter timestamp; and (d) executing a task associated with said filter condition if said one or more blockchain transactions (1) satisfies said filter condition and (2) is associated with said transaction timestamp that is greater than said filter timestamp.

[0006] In some embodiments, the method further comprises updating said filter timestamp.

In some embodiments, (a) comprises periodically polling said blockchain. In some embodiments, (a) comprises subscribing to push notifications from said blockchain. In some embodiments, (a) comprises implementing said blockchain on a peer-to-peer network and directly monitoring said peer-to-peer network. In some embodiments, said indexed database is a document database, a relational database, or a graph database. In some embodiments, said filter condition comprises transaction data in said blockchain transaction corresponding to a defined data pattern. In some embodiments, said filter condition comprises said one or more blockchain transactions corresponding to a defined transaction pattern. In some embodiments, said filter condition is associated with a frequency at which said filter condition is to be executed. In some embodiments, said task comprises generating or broadcasting a new blockchain transaction. In some embodiments, (d) comprises executing a plurality of tasks associated with said filter condition. In some embodiments, the query comprises a streaming query. In some embodiments, executing the task is parallelized to multiple machines. In some embodiments, executing the task comprises queueing a function execution. In some embodiments, the task is attached to a plurality of filters. In some embodiments, the indexed database comprises a document database, a relational database, a graph database, a stream processing system, a key-value database, or any combination thereof.

[0007] Another aspect provided herein is a method for processing a transaction, comprising: (a) storing a plurality of transactions in an indexed database, wherein each of said plurality of transactions is associated with a transaction timestamp; (b) generating a query comprising a filter condition and a corresponding filter timestamp, wherein said filter timestamp indicates a time when said filter condition was last executed; (c) using said query to determine if one or more transactions in said plurality of transactions in said indexed database (1) satisfies said filter condition and (2) is associated with a transaction timestamp that is greater than said filter timestamp; and (d) executing a task associated with said filter condition if said one or more transactions (1) satisfies said filter condition and (2) is associated with said transaction timestamp that is greater than said filter timestamp.

[0008] In some embodiments, the method further comprises updating said filter timestamp.

In some embodiments, (a) comprises periodically polling said blockchain. In some embodiments, (a) comprises subscribing to push notifications from said blockchain. In some embodiments, (a) comprises implementing said transactions on a peer-to-peer network and directly monitoring said peer-to-peer network. In some embodiments, said indexed database is a document database, a relational database, or a graph database. In some embodiments, said filter condition comprises transaction data in said transaction corresponding to a defined data pattern. In some embodiments, said filter condition comprises said one or more transactions corresponding to a defined transaction pattern. In some embodiments, said filter condition is associated with a frequency at which said filter condition is to be executed. In some embodiments, said task comprises generating or broadcasting a new transaction. In some embodiments, (d) comprises executing a plurality of tasks associated with said filter condition.

[0009] Another aspect provided herein is one or more non-transitory computer storage media storing instructions that are operable, when executed by one or more computers, to cause said one or more computers to perform operations comprising: (a) storing a plurality of blockchain transactions from said blockchain in an indexed database, wherein each of said plurality of blockchain transactions is associated with a transaction timestamp; (b) generating a query comprising a filter condition and a corresponding filter timestamp, wherein said filter timestamp indicates a time when said filter condition was last executed; (c) using said query to determine if one or more blockchain transactions in said plurality of blockchain transactions in said indexed database (1) satisfies said filter condition and (2) is associated with a transaction timestamp that is greater than said filter timestamp; and (d) executing a task associated with said filter condition if said one or more blockchain transactions (1) satisfies said filter condition and (2) is associated with said transaction timestamp that is greater than said filter timestamp.

[0010] In some embodiments, the operations further comprise updating said filter timestamp. In some embodiments, (a) comprises periodically polling said blockchain. In some embodiments, (a) comprises subscribing to push notifications from said blockchain. In some embodiments, (a) comprises implementing said blockchain on a peer-to-peer network and directly monitoring said peer-to-peer network. In some embodiments, said indexed database is a document database, a relational database, or a graph database. In some embodiments, said filter condition comprises transaction data in said blockchain transaction corresponding to a defined data pattern. In some embodiments, said filter condition comprises said one or more blockchain transactions corresponding to a defined transaction pattern. In some embodiments, said filter condition is associated with a frequency at which said filter condition is to be executed. In some embodiments, said task comprises generating or broadcasting a new blockchain transaction. In some embodiments, (d) comprises executing a plurality of tasks associated with said filter condition. In some embodiments, the query comprises a streaming query. In some embodiments, executing the task is parallelized to multiple machines. In some embodiments, executing the task comprises queueing a function execution. In some embodiments, the task is attached to a plurality of filters. In some embodiments, the indexed database comprises a document database, a relational database, a graph database, a stream processing system, a key-value database, or any combination thereof.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] The novel features of the disclosure are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present disclosure will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the disclosure are utilized, and the accompanying drawings of which:

[0012] Figure 1 schematically illustrates a blockchain event processing system that processes every event one by one;

[0013] Figure 2 provides a flowchart of an example process performed by the blockchain event processing system of Figure 1;

[0014] Figure 3 provides a flowchart for a “process event” subroutine of the example process of Figure 2;

[0015] Figure 4 schematically illustrates a database-based blockchain event processing system, in accordance with an embodiment herein;

[0016] Figure 5 provides a flowchart of an example process performed by the database-based blockchain event processing system of Figure 4, in accordance with an embodiment herein;

[0017] Figure 6 provides a flowchart for a “process filter” subroutine of the example process of Figure 5, in accordance with an embodiment herein;

[0018] Figure 7 provides a flowchart of how the database-based blockchain event processing system constructs a timestamped filter in order to retrieve only the new events since its last processing run, in accordance with an embodiment herein;

[0019] Figure 8 provides exemplary code for providing a database structure for each database component in the blockchain event processing database system, in accordance with an embodiment herein;

[0020] Figure 9 provides exemplary code for providing a database structure for the naive blockchain event processing system;

[0021] Figure 10 provides exemplary code for an alternative embodiment of the blockchain event processing database system, in accordance with an embodiment herein;

[0022] Figure 11 is an example list of high-level abstraction query languages used by the blockchain event processing database system, in accordance with an embodiment herein;

[0023] Figure 12 is an example of an event serialization format, in accordance with an embodiment herein;

[0024] Figure 13 is an example of a combination filter, in accordance with an embodiment herein;

[0025] Figure 14 shows a non-limiting example of a computing device;

[0026] Figure 15 shows a non-limiting example of a web/mobile application provision system; and

[0027] Figure 16 shows a non-limiting example of a cloud-based web/mobile application provision system.

DETAILED DESCRIPTION

[0028] While various embodiments of the invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions may occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed.

[0029] Whenever the term “at least,” “greater than,” or “greater than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “at least,” “greater than” or “greater than or equal to” applies to each of the numerical values in that series of numerical values. For example, greater than or equal to 1, 2, or 3 is equivalent to greater than or equal to 1, greater than or equal to 2, or greater than or equal to 3.

[0030] Whenever the term “no more than,” “less than,” or “less than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “no more than,” “less than,” or “less than or equal to” applies to each of the numerical values in that series of numerical values. For example, less than or equal to 3, 2, or 1 is equivalent to less than or equal to 3, less than or equal to 2, or less than or equal to 1.

[0031] Implementing a blockchain event processing system is relatively trivial when there are few transactions to monitor and process. For example, as of 2019 the Bitcoin (BTC) blockchain has a block size limit of roughly 1 to 4 megabytes per block, with each block occurring every 10 minutes or so on average. However, as blockchains scale in the future to facilitate gigabytes and terabytes of transactions per block, it will be very inefficient to listen to and process every single incoming event in real time as it happens.

[0032] A blockchain event processing system may have a filter and a task. A filter may be a programmatic function used to test whether an event passes a certain condition (900). After inspecting an event, the filter returns either true or false. A task is a program that runs certain actions when its associated FILTER is triggered by an event. For example, a system may want to monitor all transactions sent to a certain address (FILTER), and send an email notification (TASK) when such a transaction happens (EVENT) (the “task” column of the first row of 902). But a task may be any action that can be implemented as a program; for example, a “create transaction” task could create another blockchain transaction when an incoming event passes a filter (the “task” column of the second row of 902).
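
The filter/task pairing above can be sketched in JavaScript (the language of example 900). The address, event shape, and function names here are illustrative assumptions for this sketch, not code from the patent:

```javascript
// Hypothetical filter/task pair; all names and data shapes are illustrative.
const WATCH_ADDRESS = "1ExampleAddress"; // assumed address to monitor

// FILTER: returns true if the transaction event pays the watched address
function addressFilter(event) {
  return event.outputs.some((o) => o.address === WATCH_ADDRESS);
}

// TASK: stand-in for "send an email notification"
function notifyTask(event) {
  return `notify: payment seen in tx ${event.txid}`;
}

const event = { txid: "abc123", outputs: [{ address: "1ExampleAddress", value: 5000 }] };
const result = addressFilter(event) ? notifyTask(event) : null;
console.log(result); // notify: payment seen in tx abc123
```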

[0033] Figure 1 schematically illustrates a blockchain event processing system that processes every event one by one. The tasks and filters are submitted to event processor 102 by Client 100. The event processor 102 then stores the submitted filters into a filter database FILTER DB (106) and the submitted tasks into a task database TASK DB (104).

[0034] Figure 9 provides exemplary code for providing a database structure for the naive blockchain event processing system. Although there are many ways to store the filters in the FILTER DB 106, Figure 9 shows one simple example where the FILTER DB 106 simply stores raw FILTER code which programmatically checks an incoming event for a certain condition and returns either true or false (900). The FILTER DB 106 may also store other metadata about each filter, such as its name, description, and other useful information. The example 900 uses JavaScript, but the filter can be written in any programming language, such as Python, C, or Java, depending on the implementation. Next, the event processor 102 may store all the registered tasks along with their related filters into the TASK DB 104 (902). By indexing tasks against associated filters, the event processor 102 may later easily retrieve the relevant tasks to run when an incoming event matches a filter. There may be one task per filter, but there may also be multiple tasks per filter if the client has registered multiple tasks to be triggered for the same filter. The event processor 102 also keeps listening to the event listener 108. The event listener 108 monitors the blockchain 110 either by polling constantly, by subscribing to the blockchain software's built-in push notification features (such as ZeroMQ, supported by Bitcoin), or by listening to peer-to-peer network events by subscribing to the peer-to-peer protocol directly. There may also be various other ways to monitor the blockchain, such as monitoring the raw blockchain files, but the precise method of monitoring is out of scope of the current embodiments. When the event listener 108 discovers a new event, it forwards the event to the event processor 102, which then runs the processing by utilizing the TASK DB 104 and the FILTER DB 106. Figure 2 illustrates the high-level flowchart for this naive event processing system.
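
The indexing of tasks against filters described above might be sketched with in-memory maps standing in for the FILTER DB (106) and TASK DB (104). This is not the code of Figure 9, which is not reproduced here; all identifiers are illustrative:

```javascript
// In-memory stand-ins for FILTER DB (106) and TASK DB (104).
const filterDB = new Map(); // filter id -> filter record (metadata + test code)
const taskDB = new Map();   // filter id -> array of tasks

function registerFilter(id, name, test) {
  filterDB.set(id, { name, test });
}

function registerTask(filterId, task) {
  if (!taskDB.has(filterId)) taskDB.set(filterId, []);
  taskDB.get(filterId).push(task); // multiple tasks per filter are allowed
}

registerFilter("f1", "watch-address", (e) => e.to === "addrA");
registerTask("f1", (e) => `email about ${e.txid}`);
registerTask("f1", (e) => `webhook for ${e.txid}`);

// Indexing by filter id makes "fetch all tasks for this filter" a direct lookup.
console.log(taskDB.get("f1").length); // 2
```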
Whenever the event listener 108 encounters a new event, it calls the event processor 102 with the EVENT data as argument (200).

[0035] Figure 12 is an example of an event serialization format. As shown, the events themselves are not constrained to any particular format, but one approach may be to use a serialization format that is easy to process, such as JSON. The EVENT data may be a structured interpretation of the actual raw data. The event processor 102 then fetches all the filters it has from the FILTER DB 106 (202), sets the iteration INDEX to 0 (204), and then checks whether the current index (0 in the beginning) is greater than or equal to the total number of FILTERS it is tracking (206). If this condition is true from the beginning, the length of the FILTERS array is 0, which means the FILTER DB 106 does not contain any filters to track. In this case the event processor 102 simply ends the process since there is no work to do. However, if the INDEX is less than the FILTERS array length, it runs a subroutine called ProcessEvent which processes the EVENT with the filter at the current INDEX (208). After it has finished processing the filter at the current INDEX, it increments the INDEX (210) and starts another round of the loop, and the process repeats until the INDEX is greater than or equal to FILTERS.length (206), which means the event processor 102 has completely iterated through all the filters, therefore jumping to the end state.
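
The outer loop just described might look like the following sketch, where `processEventFn` stands in for the ProcessEvent subroutine; all names are illustrative:

```javascript
// Sketch of the Figure 2 loop: INDEX walks from 0 to FILTERS.length
// (boxes 204-210), handing each filter and the EVENT to a subroutine.
function onNewEvent(event, filters, processEventFn) {
  for (let index = 0; index < filters.length; index++) {
    processEventFn(filters[index], event);
  }
  // an empty FILTERS array falls straight through to the end state
}

let calls = 0;
onNewEvent({ txid: "t1" }, [{ id: "f1" }, { id: "f2" }], () => calls++);
console.log(calls); // 2
```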

[0036] Figure 3 is a flowchart for the ProcessEvent subroutine. As seen in Figure 2, the ProcessEvent step (208) takes two arguments: FILTERS [INDEX] and EVENT, which correspond to the inputs FILTER and EVENT in Figure 3 (300). The goal of this subroutine may be to process a single EVENT with a single FILTER. The event processor 102 first checks whether the EVENT passes the FILTER test (302). The FILTER function may be written in any programming language that takes an event as input and returns true or false depending on whether the event matches the programmed logic or not (900). If the EVENT does not pass the FILTER test, the ProcessEvent subroutine ends and the flowchart jumps right to the end phase. However if the EVENT passes the FILTER test, it goes on to the next step (304) to first fetch all the TASKS related to the FILTER from the TASK DB 104. There may be one TASK per FILTER but there may also be multiple different TASKS associated with each FILTER if the client 100 submitted multiple different processing logic (TASK) to trigger for a single FILTER match.

[0037] Once the event processor has access to all the TASKS (304), it needs to iterate through the TASKS array and run each task. To do this, it first sets TASK INDEX to 0 (306), then checks whether the TASK INDEX is greater than or equal to the length of the TASKS array (308). If this condition is true from the beginning, the length of the TASKS array is 0, which means there is no task to run, so it jumps to the end state. However, if the TASK INDEX is smaller than the length of the TASKS array, it goes ahead and processes the task at TASK INDEX of the TASKS array, passing the EVENT as an argument (310). After the task is run, the event processor 102 increments the TASK INDEX (312) and starts the next round of the loop (308). The loop continues until the TASK INDEX reaches the length of the TASKS array, at which point the process ends.
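
The ProcessEvent subroutine of Figure 3 might be sketched as follows; the filter, task, and event shapes are illustrative assumptions:

```javascript
// Sketch of ProcessEvent: test the EVENT against one FILTER (302); if it
// passes, fetch the related TASKS (304) and run each one in turn (306-312).
function processEventSub(filter, event, taskDB) {
  if (!filter.test(event)) return [];        // no match: jump to the end state
  const tasks = taskDB.get(filter.id) || []; // zero, one, or many tasks per filter
  const outputs = [];
  for (let taskIndex = 0; taskIndex < tasks.length; taskIndex++) {
    outputs.push(tasks[taskIndex](event));   // run task, passing the EVENT
  }
  return outputs;
}

const taskDB = new Map([["f1", [(e) => `email:${e.txid}`, (e) => `tx:${e.txid}`]]]);
const filter = { id: "f1", test: (e) => e.value > 100 };
const outputs = processEventSub(filter, { txid: "t9", value: 500 }, taskDB);
console.log(outputs); // [ 'email:t9', 'tx:t9' ]
```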

[0038] It is important to note that the entire procedure described in Figure 2 and its subroutine in Figure 3 needs to be repeated for every EVENT. This means that if there are 10,000 incoming transaction events per second, the entire flow illustrated in Figure 2 and Figure 3 needs to be repeated 10,000 times per second. This is neither efficient nor scalable.

[0039] Another problem with this system may be with the filter. There are two different approaches to implementing the filter function, each of which has its own unique set of problems. First, the filter function may take the form of a full-fledged programming language such as JavaScript, C, Python, Java, or others, as seen in box 900 of Figure 9. In this case, there may be security issues because the filter exists as a full-fledged program which can execute arbitrary code. This may work in a restricted environment where all clients which submit tasks and filters (100) can be trusted, but for a system that provides a public API (application programming interface) to the general public, it does not work because any hacker may submit a malicious piece of filter code that harms the system, making the approach insecure. Second, instead of a full-fledged programming language, the filter function may take the form of a more restricted DSL (domain-specific language) which ensures that arbitrary code cannot be run and only safe filtering operations are possible. One such example may be JQ, a JSON processing language. Another familiar example may be a regular expression engine, which is built into many programming languages. It may be possible to use this DSL-based approach to check for certain patterns in any piece of data. The problem with the DSL-based approach may be that most of these domain-specific data processing languages are intentionally limited in their expressive power and therefore not flexible enough to express the various types of complex patterns needed for filtering.
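
The restricted-DSL approach can be illustrated with a regular expression applied to a JSON-serialized event; the pattern and event shape are assumed for this sketch:

```javascript
// A DSL-style filter: a data pattern (here a regular expression) rather
// than arbitrary code, so untrusted clients cannot execute anything harmful.
const filterPattern = /"to":"addrA"/; // assumed pattern, for illustration

function dslFilter(event) {
  return filterPattern.test(JSON.stringify(event));
}

console.log(dslFilter({ txid: "x", to: "addrA" })); // true
console.log(dslFilter({ txid: "y", to: "addrB" })); // false
```

As the text notes, such patterns are safe but limited: a regular expression over one serialized event cannot express relationships spanning multiple events.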

[0040] Another problem with the naive event processing system may be that a system which can only process events one by one supports only “stateless event processing”: it can process a single event at a time and cannot process more complex events which are combinations of multiple events. For example, it may be impossible to create a filter which tests multiple different types of events simultaneously. As another example, consider an event defined as “when a sequence of N chained transactions happens from an address”; it is impossible to monitor this pattern when the system can process only one event at a time and forgets all past events. It would be useful, therefore, to have an event processing system which is scalable, allows a flexible query language to filter various event stream structures in various ways, and provides the ability to detect sophisticated events which are combinations of multiple events, while not sacrificing security.
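
Once past events are retained in a store, the “sequence of N chained transactions” pattern becomes expressible. The sketch below assumes an illustrative event shape in which a `spends` field links each transaction to the one it spends from:

```javascript
// Stateful "combination" filter: with past events kept in a store, a query
// can test whether an address has produced a chain of N linked transactions.
const eventStore = [
  { txid: "t1", from: "addrA", spends: null },
  { txid: "t2", from: "addrA", spends: "t1" },
  { txid: "t3", from: "addrA", spends: "t2" },
];

function hasChain(store, address, n) {
  const byTxid = new Map(store.map((e) => [e.txid, e]));
  for (const start of store) {
    if (start.from !== address) continue;
    let len = 1;
    let cur = start;
    // walk backwards along the spends links while the chain stays on-address
    while (cur.spends && byTxid.has(cur.spends)) {
      cur = byTxid.get(cur.spends);
      if (cur.from !== address) break;
      len++;
    }
    if (len >= n) return true;
  }
  return false;
}

console.log(hasChain(eventStore, "addrA", 3)); // true
```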

[0041] In some embodiments, unlike traditional blockchains that employ distributed and linked blocks, the blockchain systems described herein are centralized and indexed, enabling faster and easier access through searches and queries.

Architecture

[0042] Figure 4 illustrates one embodiment of the event processing database system. The main architectural difference from the naive event processing system is that it utilizes a database approach. Instead of processing every incoming event one by one in real-time, it stores the events in an easy-to-query storage (database) and runs the filtering from the storage. After all, the difficulty of querying a structured dataset may be why database systems and database query languages exist. Database systems provide the ability to store as much data as desired, index it in whichever structure is needed, and discover patterns in the persisted, structured dataset from a holistic view. Also, database query languages make it easy to filter complex data patterns with a simple query. For example, filtering a whole chain of events can be represented as a connected graph format (“monitor when a new transaction event completes a certain chain of transaction sequence pattern”). In this case the events are stored in a graph database, and the filters are implemented using a graph database query language such as the Gremlin graph traversal language. In other cases, an event pattern may be filtered that involves relationships between multiple events. In this case a relational database may be used to store the events, and a relational database query language (SQL) may be used to filter the compound event. None of this is possible if the events are immediately processed as they come and immediately forgotten. Basically, the database approach to event processing may be superior to the naive approach of filtering events one by one in real-time.
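To make the contrast concrete, the following is a minimal sketch of the database approach: events are stored first and filtered later in bulk with a declarative query, rather than being processed one by one as they arrive. All field names and the in-memory array standing in for the EVENT DB are illustrative assumptions, not the format the system prescribes.

```javascript
// Store-then-query sketch: EVENT_DB is a stand-in for a real database.
const EVENT_DB = [];

// Ingest: just store the event; no per-event filter work happens here.
function storeEvent(event) {
  EVENT_DB.push(event);
}

// Query: run a declarative filter over the whole stored set at once.
function queryEvents(predicate) {
  return EVENT_DB.filter(predicate);
}

// Example: three events arrive, and a single later query finds the
// ones that send coins to address "B" (a hypothetical pattern).
storeEvent({ from: 'A', to: 'B', amount: 5, timestamp: 1 });
storeEvent({ from: 'B', to: 'C', amount: 3, timestamp: 2 });
storeEvent({ from: 'A', to: 'B', amount: 7, timestamp: 3 });

const toB = queryEvents(e => e.to === 'B');
```

Because the events persist, the same stored set can later serve entirely different filters, which is not possible when events are processed and immediately forgotten.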

[0043] Compared to the system illustrated in Figure 1, the event processing database system in Figure 4 has two additional modules: EVENT DB (410) and CHECKPOINT DB (414). The EVENT DB 410 may be used for storing events before processing, instead of trying to process all events as they arrive in real-time. The CHECKPOINT DB 414 may be used to keep track of when certain filters were last processed, or when certain tasks were last run, so that the event processor can keep filtering only the new events without redundancy. No assumptions are made about how the events themselves will be structured, but one example approach may be to use a data serialization format such as JSON to represent the events in a structured way so they may be processed more easily (Figure 12).
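As a sketch of the JSON approach mentioned above, an event might be represented as a plain object and round-tripped through JSON for storage and indexing. The field names below are assumptions for illustration only.

```javascript
// Illustrative JSON-structured transaction event (hypothetical fields).
const event = {
  txid: 'abc123',                     // transaction id (made up for the example)
  in: [{ address: 'A' }],             // input side
  out: [{ address: 'B', value: 5 }],  // output side
  timestamp: 1566616699610,           // attached by the EVENT DB for checkpointing
};

// Serializing to JSON makes the event easy to store, index, and query.
const serialized = JSON.stringify(event);
const restored = JSON.parse(serialized);
```

The same serialized form can be sent over the network, written to a document database, or kept in memory, which is why a serialization format is a natural fit here.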

[0044] First, there is a client 400 whose role may be to submit filters and tasks to the event processing database system. The task may be the actual function that will be executed when a relevant event happens, and the filter may be used to filter out only the relevant events for processing. Events are stored in EVENT DB 410 instead of immediately being processed in an ephemeral manner. Because the events are stored and indexed in the database, it is much more efficient to filter the events at large scale, and even to filter events in bulk. Also, since the events are stored in a database, database query languages are used as filters. When the client 400 submits a filter and a task to the event processor 402, the event processor 402 stores the task into TASK DB 404 and the filter into FILTER DB 406. The TASK DB 404 may contain all the registered tasks as well as their associated filters (802). The FILTER DB 406 may contain all the registered filters (804). The FILTER DB 406 may also contain other filter-related metadata for each entry. The event processor 402 is also connected to the event listener 408. The event listener 408 monitors EVENT DB 410 instead of directly monitoring the blockchain 412. The EVENT DB 410 is a database constructed by constantly listening to the blockchain 412's events and indexing them into various structured formats which can be easily queried and filtered by the event listener 408. The EVENT DB may monitor the blockchain through various methods, such as subscribing to the blockchain's built-in push notification methods such as ZeroMQ, or constantly polling the blockchain with built-in blockchain fetch APIs such as JSON-RPC. It may also listen directly to the peer-to-peer network by implementing the blockchain peer-to-peer protocol itself, or monitor by directly watching the stored blockchain files or memory. These are just some examples; the monitoring method is not limited to what is mentioned here, and there may be other ways of monitoring the blockchain state in real-time.

[0045] When storing the events, the EVENT DB 410 attaches some form of timestamp data to all of its EVENT entries, which allows the Event Processor 402 to query for filters based on timestamps (806). This may be crucial for retrieving and processing only the new events which haven't been processed yet. While prior art systems directly process every single blockchain event one by one in real-time, this embodiment of the blockchain event processing database system can utilize the EVENT DB 410 as an intermediate buffer which allows for more flexible and efficient filtering. The most obvious benefit may be that the events are stored and indexed, so they can be referenced back later, which gives the event processor 402 a long-term memory. This not only makes simple event processing more efficient, but also enables the processing of composite events which are made up of multiple events, or even an event which may be triggered when a certain sequence of individual events happens in a certain order. For example, one may want to monitor an exact transaction sequence pattern of an address A sending coins to address B, followed by B sending coins to address C. This would be impossible to monitor if all event filtering were done on a single event.
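The A→B→C sequence example above can be sketched as a query over the stored, timestamped events. The event shape and the helper function are illustrative assumptions; a real system would express this as a database query rather than array operations.

```javascript
// Detecting a compound pattern ("A pays B, later B pays C") is only
// possible because past events remain stored and queryable.
const EVENT_DB = [
  { from: 'A', to: 'B', timestamp: 100 },
  { from: 'X', to: 'Y', timestamp: 150 },
  { from: 'B', to: 'C', timestamp: 200 },
];

function sequenceMatched(db) {
  // Find the earliest A→B transfer, then look for a later B→C transfer.
  const first = db.find(e => e.from === 'A' && e.to === 'B');
  if (!first) return false;
  return db.some(
    e => e.from === 'B' && e.to === 'C' && e.timestamp > first.timestamp
  );
}

const matched = sequenceMatched(EVENT_DB);
```

A one-event-at-a-time processor could never answer this, because by the time the B→C transaction arrives, the A→B transaction would already have been forgotten.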

[0046] Also, the events can be batch processed since they don't have to be filtered one by one. For example, for tracking 1000 incoming transactions per second, instead of running a filter for each of the 1000 events per second individually, the event listener 408 may query the EVENT DB 410 only once per second with the filter, and process the entire batched events array as a single unit. This may be possible because the filters are written in database query languages (804) and a single query filter returns all the required events in a single go. The power of this architecture may be that it enables the event processor 402 to filter and process events on its own terms, instead of reactively processing each and every event. For example, the event processor may be configured to query against EVENT DB 410 every 10 seconds, or every 1 minute, or every 1 hour. Some use cases may not require hyper-frequent event processing. In some cases the flexibility may be available to slow down the event processing interval, whereas in other cases a high processing speed may be maintained. The important part may be that this customization can be made on a per-filter basis, where certain filters are processed more frequently than other filters. The customization can also be made on a per-task basis, where multiple tasks which share the same filter may have different priorities and therefore different processing priorities. This type of differentiation allows for custom prioritization of the event processing API. This prioritization information may be stored in TASK DB 404 (in case the task processing needs to have various priority levels), or in FILTER DB 406 (in case the filter processing needs to have various priority levels). The last component of the system is CHECKPOINT DB 414. The CHECKPOINT DB is a database which keeps track of all the filter checkpoints, which indicate the last time each filter was processed (800). The timestamps stored in the CHECKPOINT DB, along with the timestamps stored with every EVENT in the EVENT DB 410, are crucial for filtering only the new events since the last processing, in order to make sure that the events are processed exactly once. But the system is not limited to executing tasks exactly once. Because the timestamp data may be stored instead of being forgotten, it may be possible to utilize the timestamp in various creative ways to run tasks in more sophisticated ways.

Operation

[0047] Figure 5 is a high level flowchart of the primary embodiment of the event processing database system as described in Figure 4. In this embodiment, each event processing run may be automatically triggered on a fixed interval, for example every 1 second, but it is not limited to this option. The whole point may be that the event processing is done proactively by the event processor 402 instead of being reactively triggered by incoming events. The process may run every 1 second, but also every 1 minute, and so on. Because the event queries always include a “greater than last checkpoint” condition, every query has high performance as long as the timestamps are properly indexed in the EVENT DB 410.

[0048] Because the main innovation of this system is that event processing can be done on the system's own terms instead of reactively processing every single incoming event as it comes in in real-time, this flowchart does not start with an EVENT as input. Instead, the process iterates through all filters stored in FILTER DB 406 and processes each. When each process starts, the Event Processor 402 fetches all FILTERS stored in FILTER DB (500). Then the Event Processor 402 sets the FILTER INDEX to 0 and starts the iteration loop (502). First it checks if FILTER INDEX is greater than or equal to the total length of the FILTERS array (504). If this is true from the beginning, it means the FILTERS array is empty, there are no FILTERS to process, and the process ends. Otherwise, the event processor 402 goes on to the next step and processes the FILTER at the current FILTER INDEX (FILTERS[FILTER INDEX]) by calling the ProcessFilter subroutine (506), passing FILTERS[FILTER INDEX] as the argument.

[0049] After the ProcessFilter call returns, it increments the FILTER INDEX and goes back to step 504 to continue the loop until it has iterated through the entire FILTERS array, at which point the FILTER INDEX >= FILTERS.length condition will be true, and the process jumps to the end state.
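The Figure 5 loop can be sketched directly in code. The FILTER_DB object and the processFilter stand-in are assumptions for illustration; the step numbers in comments refer to the flowchart boxes described above.

```javascript
// Sketch of the Figure 5 iteration loop over registered filters.
const FILTER_DB = {
  all() { return [{ id: 'f1' }, { id: 'f2' }]; }  // illustrative filters
};

const processed = [];
function processFilter(filter) {      // stand-in for subroutine 506
  processed.push(filter.id);
}

const FILTERS = FILTER_DB.all();          // step 500: fetch all filters
let FILTER_INDEX = 0;                     // step 502: start iteration
while (FILTER_INDEX < FILTERS.length) {   // step 504: loop ends at index >= length
  processFilter(FILTERS[FILTER_INDEX]);   // step 506: process current filter
  FILTER_INDEX += 1;                      // increment and continue
}
```

If FILTER_DB.all() returns an empty array, the while condition fails immediately and the process ends without doing any work, matching the empty-FILTERS case in the flowchart.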

[0050] Figure 6 is a flowchart of the ProcessFilter subroutine (506), which is the actual processing logic and the core part of this implementation. The ProcessFilter subroutine call from Figure 5 (506) passes FILTERS[FILTER INDEX] as an argument, which is the current filter to process. Inside the ProcessFilter subroutine in Figure 6, this becomes the FILTER input (600), and the subroutine looks for the last CHECKPOINT associated with the FILTER (602). A CHECKPOINT is the timestamp at which the ProcessFilter call was last executed for this specific FILTER. If this is the first time the FILTER is being processed, the CHECKPOINT DB query (602) will return an empty result because the checkpoint does not exist. Once the CHECKPOINT is acquired, the Event Processor 402 goes on to the next step to construct a one-time filter called TIMESTAMPED FILTER (604). The TIMESTAMPED FILTER is constructed by taking the original FILTER and creating a new conjunction (“AND”) filter which adds an additional condition ensuring that “the timestamp of an EVENT must be greater than the CHECKPOINT for the current filter”.

[0051] Figure 7 demonstrates the use of the MongoDB query language to represent a FILTER (700). It may be assumed that the events are stored in a certain format (in this case, the events are stored as documents with an “out” attribute which has an “s1” child attribute, but the details are not important, as events can be stored in any format; for example, a blockchain indexing system called BitDB stores all transaction objects as MongoDB documents, with various attributes representing input script push data, output script push data, transaction id, and so forth).

[0052] Here, in order to query only the events since the last ProcessFilter call, an “AND” query may be formed from the original query and a timestamp-related query which says, “items whose timestamps are greater than 1566616699610” (702). In this example, it may be assumed that the last time this ProcessFilter was executed was at timestamp 1566616699610. Also, MongoDB is used as an example, but any programmer with database knowledge will understand that the same principle may be used to implement the same logic in any other database system. The conjunction with the checkpoint timestamp ensures that the TIMESTAMPED FILTER returns only the subset of events from the EVENT DB 410 which match the original FILTER AND have happened since the last ProcessFilter call for this particular filter. In case the ProcessFilter is being run for the first time, the CHECKPOINT result will be empty when it queries for the FILTER (602). In this case the TIMESTAMPED FILTER will be equivalent to the original FILTER, since there is no timestamp-related filter to add (604). Once the TIMESTAMPED FILTER is prepared, the Event Processor 402 queries the EVENT DB 410 with the TIMESTAMPED FILTER to fetch only the new events related to the FILTER since the last ProcessFilter call (606). Then it checks if there are any NEW EVENTS returned from the EVENT DB query (608). If NEW_EVENTS is empty, then there has not been a new EVENT since the last ProcessFilter call which passes the FILTER, so the subroutine needs to end. Before ending, there may be one more step. This is where the CHECKPOINT is introduced. Before finishing the ProcessFilter call, the procedure updates the checkpoint for the filter on CHECKPOINT DB (620). In step 620, the Event Processor 402 looks up whether the CHECKPOINT DB already has an entry associated with the FILTER (CHECKPOINT_DB.where(filter: FILTER)). If so, it updates its timestamp to the current timestamp acquired by calling a GET_TIMESTAMP() function (620).
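The ProcessFilter logic described above can be sketched end to end with plain in-memory structures. Everything here is a simplified stand-in (a Map for the CHECKPOINT DB, a predicate for the filter, a fixed getTimestamp value); a real system would use MongoDB-style queries as in Figure 7.

```javascript
// Hedged sketch of the ProcessFilter subroutine (Figure 6).
const EVENT_DB = [
  { data: 'old',   timestamp: 100 },
  { data: 'match', timestamp: 200 },
];
const CHECKPOINT_DB = new Map(); // filter id -> last-processed timestamp
const ran = [];                  // records task invocations for the example

function getTimestamp() { return 300; } // stand-in for GET_TIMESTAMP()

function processFilter(filter, tasks) {
  const checkpoint = CHECKPOINT_DB.get(filter.id) || 0;   // step 602
  // Step 604: conjoin the original filter with "timestamp > checkpoint".
  const timestampedFilter =
    e => filter.test(e) && e.timestamp > checkpoint;
  const newEvents = EVENT_DB.filter(timestampedFilter);   // step 606
  if (newEvents.length > 0) {                             // step 608
    for (const task of tasks) task(newEvents);            // steps 610-618
  }
  CHECKPOINT_DB.set(filter.id, getTimestamp());           // step 620
}

const filter = { id: 'f1', test: e => e.data === 'match' };
processFilter(filter, [events => ran.push(events.length)]);
// A second run finds nothing new: the checkpoint now exceeds all
// stored event timestamps, so the task is not triggered again.
processFilter(filter, [events => ran.push(events.length)]);
```

The first call matches the single event with timestamp 200 and runs the task once; the second call is filtered out by the updated checkpoint, illustrating the exactly-once behavior described above.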

[0053] Note that there may be multiple ways this GET_TIMESTAMP() function can be implemented. The simplest approach may be to generate a UNIX timestamp on the fly. Another way may be to utilize unique ID fields supported by most database systems. For example, MongoDB automatically attaches an internal attribute named “_id” to all of its items when they are created, where the “_id” may be a combination of the creation timestamp and various other factors, and may be chronologically ordered. Since “_id” attributes are chronologically ordered, they fit the purpose of serving as timestamps and can be used directly for checkpointing. Also, various other database systems, database ORMs (object relational mapping), or database abstraction libraries automatically attach chronologically ordered unique identifiers to all their entries, so these attributes can be used without having to generate timestamps manually. If the CHECKPOINT DB 414 finds that there is no existing entry for the FILTER, it creates a new entry for the FILTER and sets its timestamp, so that the next time ProcessFilter is called for the same FILTER it will have a checkpoint timestamp attached, as seen in 800.
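Two of the GET_TIMESTAMP() strategies above can be sketched as follows. The ordered-id variant is a loose analogy to MongoDB's “_id” (timestamp plus disambiguating suffix), not its actual format.

```javascript
// Strategy 1: a plain UNIX timestamp generated on the fly.
function unixTimestamp() {
  return Date.now(); // milliseconds since the UNIX epoch
}

// Strategy 2: a chronologically ordered unique id. The zero-padded
// counter makes lexicographic order match creation order (illustrative
// only; MongoDB's real _id encodes more factors).
let counter = 0;
function orderedId() {
  counter += 1;
  return `${Date.now()}-${String(counter).padStart(6, '0')}`;
}

const a = orderedId();
const b = orderedId();
// b was created after a, so it compares greater as a string.
```

Either value can serve as a checkpoint, since all that is required is that later values compare greater than earlier ones.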

[0054] If at step 608 the Event Processor finds that the NEW EVENTS array length is greater than 0 (608), it needs to run the tasks associated with the FILTER, so it goes on to the next step, where it looks up the TASK DB 404 for all the TASKS associated with the FILTER (610). There may be one or more TASKS attached to a FILTER because many different clients may have submitted different tasks for the same FILTER. For example, one client may want the FILTER to trigger a Websocket push TASK, while another client may want the same FILTER to trigger an email or SMS send action, while yet another client may want the same FILTER to create another blockchain transaction and broadcast it. They all share the same FILTER but have different TASKS. Once all the TASKS related to the FILTER are found (610), the Event Processor 402 needs to iterate through them and execute each. This execution can be serial, but it may also be parallelized. To start the iteration process, the Event Processor 402 sets the TASK INDEX to 0 (612). Then it checks whether the TASK INDEX is greater than or equal to the TASKS array length (614). If the TASK INDEX is greater than or equal to the TASKS array length from the beginning, this means that the TASKS array is empty and there is no task to process, so it jumps to step 620, where it updates the CHECKPOINT DB 414 by setting the timestamp of the current FILTER on the CHECKPOINT DB 414 to the current timestamp via a GET TIMESTAMP function call (620). However, if TASK INDEX is less than the TASKS array length, then there is a non-empty TASKS array to process, so the task item at TASK INDEX of the TASKS array (TASKS[TASK_INDEX]) is selected and run by executing the task code, passing the entire NEW_EVENTS array as the argument (616). This may be a notable difference from prior art approaches, where event processing may be executed for each individual event. In this embodiment, the entire relevant NEW_EVENTS array is passed exactly once, which leaves room for much more flexible ways of processing events and makes the whole process much more efficient. Once the task at TASK INDEX has finished, the Event Processor 402 increments the TASK INDEX (618) and continues with the loop until the condition TASK INDEX >= TASKS.length becomes true (614), which means all items in the TASKS array have been iterated and executed. After all TASKS have been processed, the Event Processor updates the checkpoint for the FILTER by looking up the checkpoint entry associated with the FILTER in CHECKPOINT DB 414 and setting its timestamp (620). Then it ends the process.

[0055] In some embodiments, all the iteration and array queries may be implemented in a streaming fashion. Instead of fetching an entire array of database query results into memory and iterating over them one by one, the database queries may stream the data to the subsequent steps in real-time. For example, the flowchart in Figure 5 fetches all filters from the FILTER DB (500) and then iterates through them until the terminal condition is met (504). Instead, the FILTER_DB.all() query in step 500 may create a real-time stream which feeds the items of the array into the subsequent steps as they are fetched, and the rest of the process, including the ProcessFilter subroutine 506, may follow the same streaming paradigm, processing the filters one by one in a streaming fashion without waiting for the full FILTERS array to become fully available in memory. The same goes for step 602, where it fetches all CHECKPOINTS for the FILTER; step 606, where it queries all the NEW EVENTS since the last checkpoint; and step 610, where it queries TASK DB to return all TASKS related to the FILTER. Any iteration logic mentioned in the primary embodiment can be turned into a streaming approach where the items of any array can be streamed to the next step without having to wait for the full arrays to become available in memory.
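One way to sketch this streaming variant is with a generator, which yields each filter to the downstream step as it becomes available instead of materializing the whole FILTERS array first. The stored filters and the processing step here are illustrative stand-ins.

```javascript
// Streaming sketch: filters flow downstream one at a time.
function* streamFilters() {
  // In a real system each item would arrive from a database cursor or
  // stream; a fixed array stands in for the FILTER DB here.
  const stored = [{ id: 'f1' }, { id: 'f2' }, { id: 'f3' }];
  for (const f of stored) yield f; // each filter is handed off immediately
}

const seen = [];
for (const filter of streamFilters()) {
  // processFilter(filter) would run here, without waiting for the rest.
  seen.push(filter.id);
}
```

The same pattern generalizes to the checkpoint, event, and task queries: any step that consumes an array can instead consume an iterator or stream.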

[0056] In some embodiments, each component of the system may be distributed across multiple servers and locations. For example, the EVENT DB 410 may exist on server A, while TASK DB 404 may exist on server B, while FILTER DB 406 may exist on server C, while CHECKPOINT DB 414 may exist on server D, while Event Processor 402 may exist on server E, while event listener 408 may exist on server F, and so on. Each component in Figure 4 may exist as a self-contained microservice which communicates with one another through their own exposed API which can either be publicly accessible or privately accessible. Also, every task execution (616) may be parallelized to multiple machines.

[0057] In some embodiments each TASK execution (310) may involve queueing a function execution instead of immediately executing. For example, it may utilize job queue systems such as Apache Kafka, Redis, RabbitMQ, etc. to schedule jobs and immediately return, instead of waiting for each task to finish running.

[0058] In some embodiments, the TASK may have its own DSL (Domain Specific Language) to safely execute code. In the primary embodiment, the “TASK” column in the TASK DB 802 displays a JavaScript function (802). Often, allowing this type of arbitrary code execution can be dangerous, so the system may adopt a domain specific language or expose a limited API set to allow executing a certain subset of tasks instead of allowing full-fledged programming language code.

[0059] In some embodiments, the client 400 may exist as multiple modules of its own. Each module may have its own fine-grained access control scheme. For example, the client 400 may be broken down so that one module may only submit filters, and another module may only submit tasks. There may exist another module which creates links between the filters and the tasks. The point is that there may be various ways to populate the TASK DB 404 and FILTER DB 406, and various configurations for providing access control, so that certain modules may have write access to TASK DB 404 but not to FILTER DB 406, and vice versa. The tasks may also be hardcoded into TASK DB 404, and the filters may be hardcoded into FILTER DB 406 as well.

[0060] In some embodiments, the checkpoints can be more fine grained. For example, in the primary embodiment herein, timestamps are attached to filters in the CHECKPOINT DB (800). However, the checkpoints may be attached to tasks instead of filters. This may be important when there are multiple tasks attached to a single filter. For example, if filter X needs to trigger task A, task B, and task C, then instead of maintaining a CHECKPOINT DB which attaches timestamps to filters, the system may maintain a CHECKPOINT DB which attaches timestamps to the individual tasks A, B, and C. It may also be possible to attach checkpoints to an entry made up of a combination of a filter and a task (1000). This way a TIMESTAMPED FILTER may be constructed for every task stored in CHECKPOINT DB 414 and TASK DB 404 (604). For example, if checkpoints are attached to every unique combination of filters and their associated tasks, step 620 may be modified to “CHECKPOINT_DB.where(filter: FILTER && task: TASK).set(timestamp: GET_TIMESTAMP())”, which means “find the checkpoint entry on CHECKPOINT DB for the specific TASK which may be triggered by FILTER, and update its timestamp based on the GET_TIMESTAMP() function result”. The CHECKPOINT DB in this case may look like (1000) in Figure 10. However, this method is not limited to the approaches mentioned so far. Depending on the relationships between events, filters, and tasks, the CHECKPOINT DB may be structured differently.
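A fine-grained checkpoint keyed on a (filter, task) pair, as described above, can be sketched with a composite key. The `filterId::taskId` key convention and the Map-based store are assumptions for illustration.

```javascript
// Sketch: checkpoints keyed per (filter, task) combination instead of
// per filter, so tasks sharing a filter keep independent checkpoints.
const CHECKPOINT_DB = new Map();

function checkpointKey(filterId, taskId) {
  return `${filterId}::${taskId}`; // illustrative composite-key convention
}

function setCheckpoint(filterId, taskId, timestamp) {
  CHECKPOINT_DB.set(checkpointKey(filterId, taskId), timestamp);
}

function getCheckpoint(filterId, taskId) {
  // 0 means "never processed", so the first run sees all events.
  return CHECKPOINT_DB.get(checkpointKey(filterId, taskId)) || 0;
}

// Tasks A and B share filter X but advance independently.
setCheckpoint('X', 'A', 100);
setCheckpoint('X', 'B', 250);
```

With this layout, a slow task cannot hold back the checkpoint of a fast task attached to the same filter, which is exactly the situation the per-filter scheme handles poorly.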

[0061] In some embodiments, a task may be attached to multiple filters, which means one task may be listening to multiple different conditions (filters). For example, a task named A may be scheduled to be triggered for an event pattern that matches filter X, filter Y, or filter Z. In this case, the event processor 402 may be programmed in various ways. Depending on the use case, it may be programmed to redundantly trigger task A all 3 times, once for each filter match (X, Y, and Z), or it may be programmed to trigger it only once. It is important to note that these embodiments represent a generic approach which can handle various types of relationships between events, filters, and tasks. The details of how the database schema associations are created may vary.

[0062] In other embodiments, each database component in Figure 4 may be implemented using any technology that allows for storing and retrieving data efficiently. The term “database” is not strictly limited to systems called “databases”. It may include all types of persistent data storage and query systems, such as document databases (MongoDB), relational databases (pgsql, mysql, etc.), graph databases (neo4j, janusgraph, etc.), stream processing systems (Apache Kafka, Redis, RabbitMQ, etc.), key-value databases (LevelDB, RocksDB, etc.), and more. Even a file system may be considered a “database” in this context, as long as the files are structured in certain ways to make retrieval more efficient. Figure 10 provides exemplary code for an alternative embodiment of the blockchain event processing database system. For example, all filters may be stored in FILTER DB (1004), each with its own unique FILTER ID. This unique FILTER ID can then be referenced in other tables, such as TASK DB (1002) as a column, as well as CHECKPOINT DB as its own association column (1000).

[0063] The database may also exist as a completely in-memory data structure. Figure 8 provides exemplary code for a database structure for each database component in the blockchain event processing database system. For example, everything in Figure 8 can be stored completely in memory as a JavaScript object or array and accessed programmatically while the program is running. This applies to all database modules in Figure 4, including EVENT DB 410, CHECKPOINT DB 414, FILTER DB 406, and TASK DB 404. This ability to utilize various database systems may be especially important for EVENT DB 410, because it means the system may be able to store and filter very sophisticated patterns of events; for example, a graph database may be able to store events in a graph structure, which makes it efficient to filter a graph event pattern among multiple transactions. The same goes for a relational database for events that are combinations of multiple other relationally associated events, and so on. It is important to note that when the EVENT DB 410 stores transactions in a certain database system, the filters stored in the FILTER DB 406 will naturally take the form of the query language for the underlying database system used in EVENT DB 410.

[0064] In some embodiments, the timestamps stored for each filter or each task inside the CHECKPOINT DB 414 may be used along with the event itself in the task processing carried out by the event processor 402. For example, an SSE (Server-Sent Events) task may be triggered when a certain event happens. Server-Sent Events may be a server push technology enabling a browser to receive automatic updates from a server via an HTTP connection. The Server-Sent Events EventSource API is standardized as part of HTML5 by the W3C. One feature of Server-Sent Events is its ability to associate an event with a unique id. Setting an ID lets the browser keep track of the last event fired, so that if the connection to the server is dropped, a special HTTP header (Last-Event-ID) is set with the new request. This lets the browser determine which event is appropriate to fire. The message event contains an e.lastEventId property. The timestamp from the CHECKPOINT DB 414 may be utilized as the ID field, so that when a user's client briefly goes offline and comes back online later, it may request a synchronization using the Last-Event-ID header set to the last checkpoint timestamp, and the server may implement an additional module which sends all the events which have happened since the last checkpoint. This may work the same way as the approach explained in 702. The backend system may check the Last-Event-ID HTTP header, attach the “timestamp” query in conjunction with the original filter, and run it against the EVENT DB 410, which would return only the events which happened since the Last-Event-ID header value. This is just one example demonstrating how the timestamps from the CHECKPOINT DB 414 may be used in addition to the event itself when triggering tasks.

[0065] Figure 13 is an example of a combination filter. In some embodiments, the filters may be a combination of multiple other event filters. As shown, for example, 3 separate filters, FILTER1, FILTER2, and FILTER3, may be considered. Normally each filter works individually on its own, but a combination of these filters can be used to create a COMBINATION FILTER, which ensures that the three conditions FILTER1, FILTER2, and FILTER3 are satisfied simultaneously. This may be different from simply creating an AND conjunction query from the three filters, because the COMBINATION FILTER is not targeted at a single event. For example, transaction A may pass the FILTER1 condition, transaction B may pass the FILTER2 condition, and transaction C may pass the FILTER3 condition. In this case, if the EVENT DB 410 were simply queried with a conjunction query constructed from the three filters (FILTER1 && FILTER2 && FILTER3), there would be no result, because there is not a single transaction which matches all three filters simultaneously. However, this will pass the COMBINATION FILTER, because all it is looking for may be “some” transaction to match FILTER1, and “some” transaction to match FILTER2, and “some” transaction to match FILTER3, simultaneously. It does not require the matched transactions to be the same for each filter. This feature may be only possible because of the database driven approach the current embodiments employ, because the system remembers the events even after processing them. It may be impossible to detect a pattern which exists over multiple event types like this unless the events are stored in a database.

This feature may be used to detect various patterns in the network, such as detecting when a certain chain of transactions has completed over a long period of time. In order to implement this scheme, customization may be required in how checkpoints are handled. For example, in the primary embodiment, the checkpoints may be updated every time an event processing run executes (620), which ensures that the event processing only monitors the minimum amount of data and processes without redundancy. However, to monitor a combination of multiple events that may happen over a long time, the CHECKPOINT DB 414 should not update the checkpoint for the COMBINATION FILTER until a complete match is made. For example, if the COMBINATION FILTER is looking for FILTER1, FILTER2, and FILTER3 to pass simultaneously, it should not update the checkpoint when only FILTER1 passes, because that would make the TIMESTAMPED FILTER (702) only monitor events from the point at which FILTER1 passed. By not updating the CHECKPOINT DB 414 for any of the child filters, it may be ensured that the COMBINATION FILTER queries against all the events since the last complete match. This was one example, but the point is that, depending on circumstances, the checkpoint may be updated in a different manner and at a different time.
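The distinction between an AND conjunction and a COMBINATION FILTER can be sketched as follows: the conjunction requires one event to satisfy all child filters, while the combination filter only requires that each child filter be satisfied by *some* stored event. The event shapes and predicates are illustrative assumptions.

```javascript
// Sketch of a COMBINATION FILTER over a stored event set.
const EVENT_DB = [
  { tx: 'A', value: 1 },
  { tx: 'B', value: 2 },
  { tx: 'C', value: 3 },
];

const FILTER1 = e => e.tx === 'A';
const FILTER2 = e => e.tx === 'B';
const FILTER3 = e => e.tx === 'C';

// A plain AND conjunction on a single event never matches here,
// because no one transaction satisfies all three conditions...
const andMatch = EVENT_DB.some(e => FILTER1(e) && FILTER2(e) && FILTER3(e));

// ...but the combination filter does, because each child filter only
// needs *some* matching event somewhere in the stored set.
function combinationFilter(db, filters) {
  return filters.every(f => db.some(f));
}
const comboMatch = combinationFilter(EVENT_DB, [FILTER1, FILTER2, FILTER3]);
```

This check is only answerable because all three transactions remain in the stored set at the time of the query; a one-event-at-a-time processor would have forgotten the earlier matches.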

[0066] In some embodiments, the filters may be used for not just testing the conditions, but also be used to return processed data from the database. This may be possible because of the use of database systems for the event processing instead of programmatically processing each and every event manually. For example, a filter may be used which finds all the events from EVENT DB 410 which match a certain pattern. MongoDB may be used to store the events in EVENT DB 410, in which case a query may be made of “find” all events whose “out.sl” attributes match “19HxigV4QyBv3tHpQVclJEQyqlpzZVdoAut” (1100). This will return an array of full event objects. However an additional clause can be added so that the EVENT DB 410 only returns certain projected attributes from the selected events array. In this case the MongoDB's “project” command (1102) may be employed. [0067] On a related note, in some embodiments the filters may not be just raw database query languages, but a higher level abstraction language which may be either a superset or a subset of each database query language. For example, Figure 11 illustrates various ways of representing filters with a JSON object. Expressing the queries in JSON makes it easy to serialize and deserialize filters and even send them over the network in order to make remote queries. In example 1100 and 1102 a domain specific language may be used whose syntax convention may be to wrap the filters in a “q” attribute where the child attributes of “q” are MongoDB's commands such as “find” or “project” and the children of these command attributes are the actual MongoDB query language. The high level query language may also incorporate multiple query and filter languages into one. 
In example 1104 a MongoDB query may be combined with a JQ query, where the “q” attribute represents the MongoDB query and the “r.f” attribute represents a JQ expression which processes the results returned from the MongoDB query represented by “q” (JQ is a lightweight JSON processor engine). In this case the event listener 408 may implement a query engine which queries EVENT DB 410 with the contents of the “q” attribute and then runs an additional post-processing step by passing the result to a separate JQ process defined by the expression under the “r.f” attribute. The high-level query language may be designed in a way that supports any type of database system, including key/value databases such as LevelDB or RocksDB (1106). The query language may also encapsulate a relational database query in the “q” attribute in case the EVENT DB 410 is a relational database. As mentioned previously, the EVENT DB can be implemented with any database system, so this high-level abstraction query language may adopt a syntax which can encapsulate all database operations natively supported by any database system.
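The two-stage engine described above (“q” query, then “r.f” post-processing) can be sketched like this. In a real system the “r.f” expression would be handed to a separate JQ process; in this hypothetical sketch a plain Python callable stands in for the JQ expression, and the field names are invented for illustration:

```python
# Hypothetical sketch of the event listener's two-stage query engine:
# stage 1 runs the "q" query against the event store, stage 2 pipes the
# result through the post-processor named by "r.f" (a JQ stand-in).

def query_engine(filter_obj, events, postprocessors):
    # Stage 1: the "q" query (a simple equality match in this sketch).
    q = filter_obj["q"]
    results = [e for e in events
               if all(e.get(k) == v for k, v in q["find"].items())]
    # Stage 2: if an "r.f" expression is present, apply it to the result.
    rf = filter_obj.get("r", {}).get("f")
    if rf is not None:
        results = postprocessors[rf](results)
    return results
```

The design point is that stage 2 never touches the database: it only transforms the result set, so any post-processing language (JQ or otherwise) can be plugged in without the event store knowing about it.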

[0068] The primary embodiment described in this document assumes that the whole process is executed on a time interval. For example, the event processor 402 may run the processing routine of Figure 5 every 1 second. However, it is not limited to this option. In other embodiments, the event listener 408 may implement a sophisticated job scheduler which determines when to run event processing for certain combinations of filters and tasks. The schedule may be determined by previous event processing history, for example, or by manual or programmatic customization. For example, the filter process may be triggered more frequently for certain tasks or filters and less frequently for others. This way, higher-priority filters and tasks can process events more frequently, while lower-priority filters and tasks need not be processed as often.
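One minimal way to sketch such a per-filter policy (the filter ids and intervals here are hypothetical; the text deliberately leaves the policy open to history-based, manual, or programmatic customization) is to give each filter its own run interval, so priority is expressed as interval length:

```python
# Hypothetical sketch: per-filter scheduling in place of a single global
# 1-second interval. High-priority filters get short intervals, so they
# are due more often than low-priority ones.

class FilterScheduler:
    def __init__(self, intervals):
        # intervals: filter id -> seconds between runs.
        self.intervals = intervals
        self.last_run = {fid: float("-inf") for fid in intervals}

    def due(self, now):
        """Return the filters whose interval has elapsed at time `now`
        and mark them as having run."""
        ready = [fid for fid, interval in self.intervals.items()
                 if now - self.last_run[fid] >= interval]
        for fid in ready:
            self.last_run[fid] = now
        return ready
```

A history-based scheduler could then adjust the `intervals` map over time, e.g., shortening the interval of filters that match often.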

Terms and Definitions

[0069] Unless otherwise defined, all technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.

[0070] As used herein, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Any reference to “or” herein is intended to encompass “and/or” unless otherwise stated.

[0071] As used herein, the term “about” in some cases refers to an amount that is approximately the stated amount.

[0072] As used herein, the term “about” refers to an amount that is near the stated amount by 10%, 5%, or 1%, including increments therein.

[0073] As used herein, the term “about” in reference to a percentage refers to an amount that is greater or less than the stated percentage by 10%, 5%, or 1%, including increments therein.

[0074] As used herein, the phrases “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.

Computing system

[0075] Referring to Figure 14, a block diagram is shown depicting an exemplary machine that includes a computer system 1400 (e.g., a processing or computing system) within which a set of instructions can execute for causing a device to perform or execute any one or more of the aspects and/or methodologies for static code scheduling of the present disclosure. The components in Figure 14 are examples only and do not limit the scope of use or functionality of any hardware, software, embedded logic component, or a combination of two or more such components implementing particular embodiments.

[0076] Computer system 1400 may include one or more processors 1401, a memory 1403, and a storage 1408 that communicate with each other, and with other components, via a bus 1440. The bus 1440 may also link a display 1432, one or more input devices 1433 (which may, for example, include a keypad, a keyboard, a mouse, a stylus, etc.), one or more output devices 1434, one or more storage devices 1435, and various tangible storage media 1436. All of these elements may interface directly or via one or more interfaces or adaptors to the bus 1440. For instance, the various tangible storage media 1436 can interface with the bus 1440 via storage medium interface 1426. Computer system 1400 may have any suitable physical form, including but not limited to one or more integrated circuits (ICs), printed circuit boards (PCBs), mobile handheld devices (such as mobile telephones or PDAs), laptop or notebook computers, distributed computer systems, computing grids, or servers.

[0077] Computer system 1400 includes one or more processor(s) 1401 (e.g., central processing units (CPUs) or general purpose graphics processing units (GPGPUs)) that carry out functions. Processor(s) 1401 optionally contains a cache memory unit 1402 for temporary local storage of instructions, data, or computer addresses. Processor(s) 1401 are configured to assist in execution of computer readable instructions. Computer system 1400 may provide functionality for the components depicted in Figure 14 as a result of the processor(s) 1401 executing non-transitory, processor-executable instructions embodied in one or more tangible computer-readable storage media, such as memory 1403, storage 1408, storage devices 1435, and/or storage medium 1436. The computer-readable media may store software that implements particular embodiments, and processor(s) 1401 may execute the software. Memory 1403 may read the software from one or more other computer-readable media (such as mass storage device(s) 1435, 1436) or from one or more other sources through a suitable interface, such as network interface 1420. The software may cause processor(s) 1401 to carry out one or more processes or one or more steps of one or more processes described or illustrated herein. Carrying out such processes or steps may include defining data structures stored in memory 1403 and modifying the data structures as directed by the software.

[0078] The memory 1403 may include various components (e.g., machine readable media) including, but not limited to, a random access memory component (e.g., RAM 1404) (e.g., static RAM (SRAM), dynamic RAM (DRAM), ferroelectric random access memory (FRAM), phase-change random access memory (PRAM), etc.), a read-only memory component (e.g., ROM 1405), and any combinations thereof. ROM 1405 may act to communicate data and instructions unidirectionally to processor(s) 1401, and RAM 1404 may act to communicate data and instructions bidirectionally with processor(s) 1401. ROM 1405 and RAM 1404 may include any suitable tangible computer-readable media described below. In one example, a basic input/output system 1406 (BIOS), including basic routines that help to transfer information between elements within computer system 1400, such as during startup, may be stored in the memory 1403.

[0079] Fixed storage 1408 is connected bidirectionally to processor(s) 1401, optionally through storage control unit 1407. Fixed storage 1408 provides additional data storage capacity and may also include any suitable tangible computer-readable media described herein. Storage 1408 may be used to store operating system 1409, executable(s) 1410, data 1411, applications 1412 (application programs), and the like. Storage 1408 can also include an optical disk drive, a solid-state memory device (e.g., flash-based systems), or a combination of any of the above. Information in storage 1408 may, in appropriate cases, be incorporated as virtual memory in memory 1403.

[0080] In one example, storage device(s) 1435 may be removably interfaced with computer system 1400 (e.g., via an external port connector (not shown)) via a storage device interface 1425. Particularly, storage device(s) 1435 and an associated machine-readable medium may provide non-volatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for the computer system 1400. In one example, software may reside, completely or partially, within a machine-readable medium on storage device(s) 1435. In another example, software may reside, completely or partially, within processor(s) 1401.

[0081] Bus 1440 connects a wide variety of subsystems. Herein, reference to a bus may encompass one or more digital signal lines serving a common function, where appropriate. Bus 1440 may be any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures. As an example and not by way of limitation, such architectures include an Industry Standard Architecture (ISA) bus, an Enhanced ISA (EISA) bus, a Micro Channel Architecture (MCA) bus, a Video Electronics Standards Association local bus (VLB), a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, an Accelerated Graphics Port (AGP) bus, a HyperTransport (HTX) bus, a serial advanced technology attachment (SATA) bus, and any combinations thereof.

[0082] Computer system 1400 may also include an input device 1433. In one example, a user of computer system 1400 may enter commands and/or other information into computer system 1400 via input device(s) 1433. Examples of input device(s) 1433 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device (e.g., a mouse or touchpad), a touchpad, a touch screen, a multi-touch screen, a joystick, a stylus, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), an optical scanner, a video or still image capture device (e.g., a camera), and any combinations thereof. In some embodiments, the input device is a Kinect, Leap Motion, or the like. Input device(s) 1433 may be interfaced to bus 1440 via any of a variety of input interfaces 1423 (e.g., input interface 1423) including, but not limited to, serial, parallel, game port, USB, FIREWIRE, THUNDERBOLT, or any combination of the above.

[0083] In particular embodiments, when computer system 1400 is connected to network 1430, computer system 1400 may communicate with other devices, specifically mobile devices and enterprise systems, distributed computing systems, cloud storage systems, cloud computing systems, and the like, connected to network 1430. Communications to and from computer system 1400 may be sent through network interface 1420. For example, network interface 1420 may receive incoming communications (such as requests or responses from other devices) in the form of one or more packets (such as Internet Protocol (IP) packets) from network 1430, and computer system 1400 may store the incoming communications in memory 1403 for processing. Computer system 1400 may similarly store outgoing communications (such as requests or responses to other devices) in the form of one or more packets in memory 1403 and communicated to network 1430 from network interface 1420. Processor(s) 1401 may access these communication packets stored in memory 1403 for processing.

[0084] Examples of the network interface 1420 include, but are not limited to, a network interface card, a modem, and any combination thereof. Examples of a network 1430 or network segment 1430 include, but are not limited to, a distributed computing system, a cloud computing system, a wide area network (WAN) (e.g., the Internet, an enterprise network), a local area network (LAN) (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a direct connection between two computing devices, a peer-to-peer network, and any combinations thereof. A network, such as network 1430, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used.

[0085] Information and data can be displayed through a display 1432. Examples of a display 1432 include, but are not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT-LCD), an organic light-emitting diode (OLED) display such as a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display, a plasma display, and any combinations thereof. The display 1432 can interface to the processor(s) 1401, memory 1403, and fixed storage 1408, as well as other devices, such as input device(s) 1433, via the bus 1440. The display 1432 is linked to the bus 1440 via a video interface 1422, and transport of data between the display 1432 and the bus 1440 can be controlled via the graphics control 1421. In some embodiments, the display is a video projector. In some embodiments, the display is a head-mounted display (HMD) such as a VR headset. In further embodiments, suitable VR headsets include, by way of non-limiting examples, HTC Vive, Oculus Rift, Samsung Gear VR, Microsoft HoloLens, Razer OSVR, FOVE VR, Zeiss VR One, Avegant Glyph, Freefly VR headset, and the like. In still further embodiments, the display is a combination of devices such as those disclosed herein.

[0086] In addition to a display 1432, computer system 1400 may include one or more other peripheral output devices 1434 including, but not limited to, an audio speaker, a printer, a storage device, and any combinations thereof. Such peripheral output devices may be connected to the bus 1440 via an output interface 1424. Examples of an output interface 1424 include, but are not limited to, a serial port, a parallel connection, a USB port, a FIREWIRE port, a THUNDERBOLT port, and any combinations thereof.

[0087] In addition or as an alternative, computer system 1400 may provide functionality as a result of logic hardwired or otherwise embodied in a circuit, which may operate in place of or together with software to execute one or more processes or one or more steps of one or more processes described or illustrated herein. Reference to software in this disclosure may encompass logic, and reference to logic may encompass software. Moreover, reference to a computer-readable medium may encompass a circuit (such as an IC) storing software for execution, a circuit embodying logic for execution, or both, where appropriate. The present disclosure encompasses any suitable combination of hardware, software, or both.

[0088] Those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality.

[0089] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0090] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by one or more processor(s), or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

[0091] In accordance with the description herein, suitable computing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, netpad computers, set-top computers, media streaming devices, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles. Those of skill in the art will also recognize that select televisions, video players, and digital music players with optional computer network connectivity are suitable for use in the system described herein. Suitable tablet computers, in various embodiments, include those with booklet, slate, and convertible configurations, known to those of skill in the art.

[0092] In some embodiments, the computing device includes an operating system configured to perform executable instructions. The operating system is, for example, software, including programs and data, which manages the device’s hardware and provides services for execution of applications. Those of skill in the art will recognize that suitable server operating systems include, by way of non-limiting examples, FreeBSD, OpenBSD, NetBSD®, Linux, Apple® Mac OS X Server®, Oracle® Solaris®, Windows Server®, and Novell® NetWare®. Those of skill in the art will recognize that suitable personal computer operating systems include, by way of non-limiting examples, Microsoft® Windows®, Apple® Mac OS X®, UNIX®, and UNIX-like operating systems such as GNU/Linux®. In some embodiments, the operating system is provided by cloud computing. Those of skill in the art will also recognize that suitable mobile smartphone operating systems include, by way of non-limiting examples, Nokia® Symbian® OS, Apple® iOS®, Research In Motion® BlackBerry OS®, Google® Android®, Microsoft® Windows Phone® OS, Microsoft® Windows Mobile® OS, Linux®, and Palm® WebOS®. Those of skill in the art will also recognize that suitable media streaming device operating systems include, by way of non-limiting examples, Apple TV®, Roku®, Boxee®, Google TV®, Google Chromecast®, Amazon Fire®, and Samsung® HomeSync®. Those of skill in the art will also recognize that suitable video game console operating systems include, by way of non-limiting examples, Sony® PS3®, Sony® PS4®, Microsoft® Xbox 360®, Microsoft Xbox One, Nintendo® Wii®, Nintendo® Wii U®, and Ouya®.

Non-transitory computer readable storage medium

[0093] In some embodiments, the platforms, systems, media, and methods disclosed herein include one or more non-transitory computer readable storage media encoded with a program including instructions executable by the operating system of an optionally networked computing device. In further embodiments, a computer readable storage medium is a tangible component of a computing device. In still further embodiments, a computer readable storage medium is optionally removable from a computing device. In some embodiments, a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid state memory, magnetic disk drives, magnetic tape drives, optical disk drives, distributed computing systems including cloud computing systems and services, and the like. In some cases, the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media.

Computer program

[0094] In some embodiments, the platforms, systems, media, and methods disclosed herein include at least one computer program, or use of the same. A computer program includes a sequence of instructions, executable by one or more processor(s) of the computing device’s CPU, written to perform a specified task. Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), computing data structures, and the like, that perform particular tasks or implement particular abstract data types. In light of the disclosure provided herein, those of skill in the art will recognize that a computer program may be written in various versions of various languages.

[0095] The functionality of the computer readable instructions may be combined or distributed as desired in various environments. In some embodiments, a computer program comprises one sequence of instructions. In some embodiments, a computer program comprises a plurality of sequences of instructions. In some embodiments, a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations. In various embodiments, a computer program includes one or more software modules. In various embodiments, a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof.

Web application

[0096] In some embodiments, a computer program includes a web application. In light of the disclosure provided herein, those of skill in the art will recognize that a web application, in various embodiments, utilizes one or more software frameworks and one or more database systems. In some embodiments, a web application is created upon a software framework such as Microsoft® .NET or Ruby on Rails (RoR). In some embodiments, a web application utilizes one or more database systems including, by way of non-limiting examples, relational, non-relational, object oriented, associative, and XML database systems. In further embodiments, suitable relational database systems include, by way of non-limiting examples, Microsoft® SQL Server, mySQL™, and Oracle®. Those of skill in the art will also recognize that a web application, in various embodiments, is written in one or more versions of one or more languages. A web application may be written in one or more markup languages, presentation definition languages, client-side scripting languages, server-side coding languages, database query languages, or combinations thereof. In some embodiments, a web application is written to some extent in a markup language such as Hypertext Markup Language (HTML), Extensible Hypertext Markup Language (XHTML), or eXtensible Markup Language (XML). In some embodiments, a web application is written to some extent in a presentation definition language such as Cascading Style Sheets (CSS). In some embodiments, a web application is written to some extent in a client-side scripting language such as Asynchronous Javascript and XML (AJAX), Flash® Actionscript, Javascript, or Silverlight®. In some embodiments, a web application is written to some extent in a server-side coding language such as Active Server Pages (ASP), ColdFusion®, Perl, Java™, JavaServer Pages (JSP), Hypertext Preprocessor (PHP), Python™, Ruby, Tcl, Smalltalk, WebDNA®, or Groovy.
In some embodiments, a web application is written to some extent in a database query language such as Structured Query Language (SQL). In some embodiments, a web application integrates enterprise server products such as IBM® Lotus Domino®. In some embodiments, a web application includes a media player element. In various further embodiments, a media player element utilizes one or more of many suitable multimedia technologies including, by way of non-limiting examples, Adobe® Flash®, HTML 5, Apple® QuickTime®, Microsoft® Silverlight®, Java™, and Unity®.

[0097] Referring to Figure 15, in a particular embodiment, an application provision system comprises one or more databases 1500 accessed by a relational database management system (RDBMS) 1510. Suitable RDBMSs include Firebird, MySQL, PostgreSQL, SQLite, Oracle Database, Microsoft SQL Server, IBM DB2, IBM Informix, SAP Sybase, Teradata, and the like. In this embodiment, the application provision system further comprises one or more application servers 1520 (such as Java servers, .NET servers, PHP servers, and the like) and one or more web servers 1530 (such as Apache, IIS, GWS and the like). The web server(s) optionally expose one or more web services via application programming interfaces (APIs) 1540. Via a network, such as the Internet, the system provides browser-based and/or mobile native user interfaces.

[0098] Referring to Figure 16, in a particular embodiment, an application provision system alternatively has a distributed, cloud-based architecture 1600 and comprises elastically load balanced, auto-scaling web server resources 1610 and application server resources 1620, as well as synchronously replicated databases 1630.

Mobile Application

[0099] In some embodiments, a computer program includes a mobile application provided to a mobile computing device. In some embodiments, the mobile application is provided to a mobile computing device at the time it is manufactured. In other embodiments, the mobile application is provided to a mobile computing device via the computer network described herein.

[0100] In view of the disclosure provided herein, a mobile application is created by techniques known to those of skill in the art using hardware, languages, and development environments known to the art. Those of skill in the art will recognize that mobile applications are written in several languages. Suitable programming languages include, by way of non-limiting examples, C, C++, C#, Objective-C, Java™, Javascript, Pascal, Object Pascal, Python™, Ruby, VB.NET, WML, and XHTML/HTML with or without CSS, or combinations thereof.

[0101] Suitable mobile application development environments are available from several sources. Commercially available development environments include, by way of non-limiting examples, AirplaySDK, alcheMo, Appcelerator®, Celsius, Bedrock, Flash Lite, .NET Compact Framework, Rhomobile, and WorkLight Mobile Platform. Other development environments are available without cost including, by way of non-limiting examples,

Lazarus, MobiFlex, MoSync, and Phonegap. Also, mobile device manufacturers distribute software developer kits including, by way of non-limiting examples, iPhone and iPad (iOS) SDK, Android™ SDK, BlackBerry® SDK, BREW SDK, Palm® OS SDK, Symbian SDK, webOS SDK, and Windows® Mobile SDK.

[0102] Those of skill in the art will recognize that several commercial forums are available for distribution of mobile applications including, by way of non-limiting examples, Apple® App Store, Google® Play, Chrome WebStore, BlackBerry® App World, App Store for Palm devices, App Catalog for webOS, Windows® Marketplace for Mobile, Ovi Store for Nokia® devices, Samsung® Apps, and Nintendo® DSi Shop.

Standalone Application

[0103] In some embodiments, a computer program includes a standalone application, which is a program that is run as an independent computer process, not an add-on to an existing process, e.g., not a plug-in. Those of skill in the art will recognize that standalone applications are often compiled. A compiler is a computer program (or set of programs) that transforms source code written in a programming language into binary object code such as assembly language or machine code. Suitable compiled programming languages include, by way of non-limiting examples, C, C++, Objective-C, COBOL, Delphi, Eiffel, Java™, Lisp, Python™, Visual Basic, and VB.NET, or combinations thereof. Compilation is often performed, at least in part, to create an executable program. In some embodiments, a computer program includes one or more compiled executable applications.

Web Browser Plug-in

[0104] In some embodiments, the computer program includes a web browser plug-in (e.g., extension, etc.). In computing, a plug-in is one or more software components that add specific functionality to a larger software application. Makers of software applications support plug-ins to enable third-party developers to create abilities which extend an application, to support easily adding new features, and to reduce the size of an application. When supported, plug-ins enable customizing the functionality of a software application. For example, plug-ins are commonly used in web browsers to play video, generate interactivity, scan for viruses, and display particular file types. Those of skill in the art will be familiar with several web browser plug-ins including Adobe® Flash® Player, Microsoft® Silverlight®, and Apple® QuickTime®. In some embodiments, the toolbar comprises one or more web browser extensions, add-ins, or add-ons. In some embodiments, the toolbar comprises one or more explorer bars, tool bands, or desk bands.

[0105] In view of the disclosure provided herein, those of skill in the art will recognize that several plug-in frameworks are available that enable development of plug-ins in various programming languages, including, by way of non-limiting examples, C++, Delphi, Java™, PHP, Python™, and VB.NET, or combinations thereof.

[0106] Web browsers (also called Internet browsers) are software applications, designed for use with network-connected computing devices, for retrieving, presenting, and traversing information resources on the World Wide Web. Suitable web browsers include, by way of non-limiting examples, Microsoft® Internet Explorer®, Mozilla® Firefox®, Google® Chrome, Apple® Safari®, Opera Software® Opera®, and KDE Konqueror. In some embodiments, the web browser is a mobile web browser. Mobile web browsers (also called microbrowsers, mini-browsers, and wireless browsers) are designed for use on mobile computing devices including, by way of non-limiting examples, handheld computers, tablet computers, netbook computers, subnotebook computers, smartphones, music players, personal digital assistants (PDAs), and handheld video game systems. Suitable mobile web browsers include, by way of non-limiting examples, Google® Android® browser, RIM BlackBerry® Browser, Apple® Safari®, Palm® Blazer, Palm® WebOS® Browser, Mozilla® Firefox® for mobile, Microsoft® Internet Explorer® Mobile, Amazon® Kindle® Basic Web, Nokia® Browser, Opera Software® Opera® Mobile, and Sony® PSP™ browser.

Software Modules

[0107] In some embodiments, the platforms, systems, media, and methods disclosed herein include software, server, and/or database modules, or use of the same. In view of the disclosure provided herein, software modules are created by techniques known to those of skill in the art using machines, software, and languages known to the art. The software modules disclosed herein are implemented in a multitude of ways. In various embodiments, a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof. In further various embodiments, a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof. In various embodiments, the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application. In some embodiments, software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application. In some embodiments, software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on a distributed computing platform such as a cloud computing platform. In some embodiments, software modules are hosted on one or more machines in one location. In other embodiments, software modules are hosted on one or more machines in more than one location.

Databases

[0108] In some embodiments, the platforms, systems, media, and methods disclosed herein include one or more databases, or use of the same. In view of the disclosure provided herein, those of skill in the art will recognize that many databases are suitable for storage and retrieval of blockchain event information. In various embodiments, suitable databases include, by way of non-limiting examples, relational databases, non-relational databases, object oriented databases, object databases, entity-relationship model databases, associative databases, and XML databases. Further non-limiting examples include SQL, PostgreSQL, MySQL, Oracle, DB2, and Sybase. In some embodiments, a database is internet-based. In some embodiments, a database is web-based. In some embodiments, a database is cloud computing-based. In some embodiments, a database is a distributed database. In some embodiments, a database is implemented on one or more local computer storage devices.