Title:
UTILIZING WEB APPLICATION FIREWALL AND MACHINE LEARNING TO DETECT COMMAND AND CONTROL
Document Type and Number:
WIPO Patent Application WO/2021/242285
Kind Code:
A1
Abstract:
A method for detecting Command and Control (C&C) toward a web application in a network includes: obtaining, using a Web Application Firewall (WAF) of the network, network traffic between the web application and a server outside the network; transmitting the network traffic from the WAF to a machine learning model; determining, using the machine learning model, whether the network traffic includes a command signature; in response to determining that the network traffic includes a command signature, generating a notification; and determining, based on the notification, whether the server is a C&C.

Inventors:
ALFRAIH MOHAMMED (SA)
HAZMI KHALID (SA)
OMAIR ZIAD (SA)
ALSHARIF SULTAN (SA)
Application Number:
PCT/US2020/039482
Publication Date:
December 02, 2021
Filing Date:
June 25, 2020
Assignee:
SAUDI ARABIAN OIL CO (SA)
ARAMCO SERVICES CO (US)
International Classes:
H04L29/06; G06F21/52; G06N20/20; H04L29/12
Domestic Patent References:
WO2019063389A12019-04-04
Foreign References:
US8682812B12014-03-25
US20180063188A12018-03-01
US20190020670A12019-01-17
Attorney, Agent or Firm:
OSHA, Jonathan, P. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A method for detecting Command and Control (C&C) toward a web application in a network, comprising: obtaining, using a Web Application Firewall (WAF) of the network, network traffic between the web application and a server outside the network; transmitting the network traffic from the WAF to a machine learning model; determining, using the machine learning model, whether the network traffic comprises a command signature; in response to determining that the network traffic comprises a command signature, generating a notification; and determining, based on the notification, whether the server is a C&C.

2. The method according to claim 1, wherein the machine learning model comprises a Random Forest (RF) classifier, and wherein the RF classifier comprises a plurality of decision trees.

3. The method according to claim 2, further comprising: obtaining, using the WAF, an original dataset that comprises a plurality of network traffic samples; and generating, from the original dataset, a plurality of bootstrapped datasets, wherein each of the plurality of decision trees corresponds to one of the plurality of bootstrapped datasets, and wherein each of the plurality of decision trees outputs a vote based on the network traffic.

4. The method according to claim 3, wherein the plurality of network traffic samples comprises a normal network traffic sample that does not have any command signature.

5. The method according to claim 3 or 4, wherein whether the network traffic comprises a command signature is determined based on the votes of the plurality of decision trees.

6. The method according to any one of claims 1-5, further comprising: in response to determining that the network traffic comprises a command signature, assigning an identifier to the network traffic, wherein the notification comprises the identifier.

7. The method according to any one of claims 1-6, further comprising: decrypting and reformatting the network traffic by the WAF before transmitting the network traffic to the machine learning model.

8. A network apparatus for detecting Command and Control (C&C) toward a web application in a network, comprising: a Web Application Firewall (WAF) that obtains network traffic between the web application and a server outside the network; a machine learning model that receives the network traffic from the WAF and determines whether the network traffic comprises a command signature, wherein, in response to determining that the network traffic comprises a command signature, the machine learning model generates a notification.

9. The network apparatus according to claim 8, wherein the machine learning model comprises a Random Forest (RF) classifier, and wherein the RF classifier comprises a plurality of decision trees.

10. The network apparatus according to claim 9, wherein the WAF obtains an original dataset that comprises a plurality of network traffic samples, wherein the machine learning model generates, from the original dataset, a plurality of bootstrapped datasets, wherein each of the plurality of decision trees corresponds to one of the plurality of bootstrapped datasets, and wherein each of the plurality of decision trees outputs a vote based on the network traffic.

11. The network apparatus according to claim 10, wherein the plurality of network traffic samples comprises a normal network traffic sample that does not have any command signature.

12. The network apparatus according to claim 10 or 11, wherein whether the network traffic comprises a command signature is determined based on the votes of the plurality of decision trees.

13. The network apparatus according to any one of claims 8-12, wherein, in response to determining that the network traffic comprises a command signature, the machine learning model assigns an identifier to the network traffic, and wherein the notification comprises the identifier.

14. The network apparatus according to any one of claims 8-13, wherein the WAF decrypts and reformats the network traffic before transmitting the network traffic to the machine learning model.

15. A network system that operates in a network, comprising: a web application, a Web Application Firewall (WAF), and a machine learning model, wherein the WAF obtains network traffic between the web application and a server outside the network, wherein the machine learning model receives the network traffic from the WAF, wherein the machine learning model determines whether the network traffic comprises a command signature, and wherein, in response to determining that the network traffic comprises a command signature, the machine learning model generates a notification.

Description:
UTILIZING WEB APPLICATION FIREWALL AND MACHINE LEARNING TO DETECT COMMAND AND CONTROL

BACKGROUND

[0001] Over the past decades, there has been growing awareness of the importance of cybersecurity among organizations worldwide. To defend a network against cyberattacks, monitoring tools such as Web Application Firewalls (WAFs) are often implemented to detect malicious network behavior such as scans, exploits, or spam emails generated by potential malware.

[0002] Cyberattacks may be carried out in the form of command and control (C&C). Specifically, C&C may refer to an external server where an adversary (i.e., cyber attacker) has the power to control computing devices or applications within a victim network (i.e., the network being attacked) from one or more consoles using a domain name to exfiltrate data. For example, Domain Generation Algorithms (DGAs) may be used by an adversary to communicate with a C&C server so that the adversary can perform certain actions to gain access to and control the victim network from the C&C. These actions are generally referred to as command signatures. By using command signatures, an adversary may run commands or shell scripts to scan the network, discover ports, or exfiltrate data. Examples of command signatures include: reverse HTTP shell, command shell capture to detect questioned commands such as dir and nmap, session manipulation, crypto mining, scans, exploits, spam emails, and memory manipulation.

SUMMARY

[0003] In one aspect, the disclosure relates to a method for detecting C&C toward a web application in a network. The method includes: obtaining, using a WAF of the network, network traffic between the web application and a server outside the network; transmitting the network traffic from the WAF to a machine learning model; determining, using the machine learning model, whether the network traffic includes a command signature; in response to determining that the network traffic includes a command signature, generating a notification; and determining, based on the notification, whether the server is a C&C.

[0004] In another aspect, the disclosure relates to a network apparatus for detecting C&C toward a web application in a network. The network apparatus includes: a WAF that obtains network traffic between the web application and a server outside the network; a machine learning model that receives the network traffic from the WAF and determines whether the network traffic includes a command signature. In response to determining that the network traffic includes a command signature, the machine learning model generates a notification.

[0005] In another aspect, the disclosure relates to a network system that operates in a network. The network system includes a web application, a WAF, and a machine learning model. The WAF obtains network traffic between the web application and a server outside the network. The machine learning model receives the network traffic from the WAF. The machine learning model determines whether the network traffic includes a command signature. In response to determining that the network traffic includes a command signature, the machine learning model generates a notification.

[0006] Other aspects and advantages of the invention will be apparent from the following description and the appended claims.

BRIEF DESCRIPTION OF DRAWINGS

[0007] FIG. 1 is a block diagram of a network system according to one or more embodiments.

[0008] FIG. 2A illustrates a decision tree (DT) that may be used in a random forest (RF) according to one or more embodiments.

[0009] FIG. 2B illustrates a RF that may be used in a machine learning model according to one or more embodiments.

[0010] FIG. 3A provides an example of an original dataset that may be used to train the machine learning model according to one or more embodiments.

[0011] FIG. 3B provides an example of a bootstrapped dataset that is generated based on the original dataset of FIG. 3A and may be used to train the machine learning model according to one or more embodiments.

[0012] FIG. 4A shows another example of a bootstrapped dataset, and FIGs. 4B-4C each show a DT that corresponds to the bootstrapped dataset of FIG. 4A.

[0013] FIG. 5 is a flowchart that illustrates a method for detecting C&C according to one or more embodiments.

[0014] FIG. 6 is a flowchart that illustrates steps for determining whether network traffic includes a command signature according to one or more embodiments.

[0015] FIG. 7A shows a computing system on which components of one or more embodiments may be implemented.

[0016] FIG. 7B shows a network in which one or more embodiments may operate.

DETAILED DESCRIPTION

[0017] In light of the potential harm to a network caused by C&C, it is important to monitor and inspect the network traffic between the network and an external server, and take precautions whenever a suspicious command signature is detected in the network traffic. In one or more embodiments of the present disclosure, a WAF may be combined with a machine learning model to detect command signatures and raise an alert for any potential C&C threat. Embodiments of the present disclosure may thus increase the thoroughness and accuracy of network traffic inspection while requiring very limited human intervention. As such, network security may be improved.

[0018] Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency. Like elements may not be labeled in all figures for the sake of simplicity.

[0019] In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.

[0020] Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers does not imply or create a particular ordering of the elements or limit any element to being only a single element unless expressly disclosed, such as by the use of the terms “before,” “after,” “single,” and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.

[0021] In the following description of FIGs. 1-7B, any component described with regard to a figure, in various embodiments of the invention, may be equivalent to one or more like-named components described with regard to any other figure. For brevity, descriptions of these components will not be repeated with regard to each figure. Thus, each and every embodiment of the components of each figure is incorporated by reference and assumed to be optionally present within every other figure having one or more like-named components. Additionally, in accordance with various embodiments of the invention, any description of the components of a figure is to be interpreted as an optional embodiment which may be implemented in addition to, in conjunction with, or in place of the embodiments described with regard to a corresponding like-named component in any other figure.

[0022] It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a processor” includes reference to one or more of such processors.

[0023] Terms such as “approximately,” “substantially,” etc., mean that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.

[0024] It is to be understood that one or more of the steps shown in the flowcharts may be omitted, repeated, and/or performed in a different order than the order shown. Accordingly, the scope of the invention should not be considered limited to the specific arrangement of steps shown in the flowcharts.

[0025] Although multiple dependent claims are not introduced, it would be apparent to one of ordinary skill that the subject matter of the dependent claims of one or more embodiments may be combined with other dependent claims.

[0026] FIG. 1 is a block diagram of a network system according to one or more embodiments. In FIG. 1, the network system 100 includes a network apparatus 110 and a web application 140 that operate in a network. The network may be a public network such as the Internet, or may be a private network such as a local area network (LAN) built within an organization. Components in the network system 100 may communicate via cable or via wireless signals. In addition to the network apparatus 110 and the web application 140, the network system 100 may include components that are found in a general computer network. A computer network is later described with reference to FIG. 7B.

[0027] In FIG. 1, the web application 140 may include software code that controls the operation of other software or hardware. For example, the web application 140 may control a computer to access and process data, operate a machine to perform manufacturing, or forward network commands to another network system. The web application 140 may be accessible from inside or outside the network system 100. In practice, the web application 140 may be a target of cyberattacks.

[0028] The network apparatus 110 may be disposed on a communication path between an entry of the network system 100 and the web application 140. As such, network traffic 101 that enters the network system 100 from a server 199 outside the network may be intercepted and inspected by the network apparatus 110 before reaching the web application 140. Likewise, output of the web application 140 may be intercepted and inspected by the network apparatus 110 before being transmitted on the network traffic 101 toward the server 199.

[0029] The network apparatus 110 includes a WAF 120 and a machine learning model 130. The WAF 120 and the machine learning model 130 may be communicatively connected via a data bus. The WAF 120 and the machine learning model 130 may include software code and/or hardware components. The WAF 120 and the machine learning model 130 may be physically implemented on separate electronic devices, or alternatively on a single electronic device. Examples of the electronic device may include a computer, a mobile terminal, a portable plug-in device, and a cloud server. In addition to the WAF 120 and the machine learning model 130, the network apparatus 110 may include components that are found in a general computing system. Such a general computing system is later described with reference to FIG. 7A.

[0030] In FIG. 1, the WAF 120 obtains the network traffic 101 between the server 199 and the web application 140. The network traffic 101 may be obtained in the form of bit streams. Before transmitting the obtained network traffic 101 to the machine learning model 130, the WAF 120 may decrypt and/or reformat the network traffic 101. For example, the WAF 120 may parse the network traffic 101 to obtain information such as the Internet Protocol (IP) address of the server 199, the IP address of the destination in the network system 100, identification of the web application 140, and the specific operation to be performed with the web application 140. Based on the obtained information, the WAF 120 may reformat the network traffic 101 in the form of individual data items which may each correspond to a separate access request to the web application 140. In addition, the WAF 120 may have features that are commonly found in general network firewalls. These common features are omitted from this disclosure for simplicity.
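As an illustration of the reformatting step just described, the following Python sketch shows one way decrypted traffic could be parsed into individual data items. The field names and the raw-line format are assumptions made for this example only and are not taken from the patent.

from urllib.parse import urlparse, parse_qs

def reformat_request(raw_line):
    """Turn one decrypted request line into a single data item (hypothetical format)."""
    # Assumed illustrative layout: "<source_ip> <destination_ip> <method> <url>"
    src_ip, dst_ip, method, url = raw_line.split(maxsplit=3)
    parsed = urlparse(url)
    return {
        "source_ip": src_ip,                      # e.g., address of the external server 199
        "destination_ip": dst_ip,                 # address inside the network system 100
        "web_application": parsed.path.strip("/").split("/")[0],  # identification of the target application
        "operation": method.upper(),              # requested operation (GET, POST, ...)
        "query_params": parse_qs(parsed.query),   # remaining request parameters
    }

item = reformat_request("203.0.113.7 10.0.0.5 POST https://apps.example/inventory/export?fmt=csv")
# 'item' corresponds to one access request and can be forwarded to the machine learning model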

[0031] The machine learning model 130 may include software code that implements a supervised machine learning algorithm to determine whether the network traffic 101 includes a command signature. For example, the machine learning model 130 may include a RF classifier which may include a plurality of DTs. The DTs and the RF classifier are described in detail with reference to FIGs. 2A-2B.

[0032] Although only one server 199 and only one web application 140 are shown in FIG. 1, one of ordinary skill in the art would understand that the network apparatus 110 may simultaneously communicate with multiple servers 199 and multiple web applications 140. Similarly, although only one WAF 120 and one machine learning model 130 are shown in FIG. 1, one of ordinary skill in the art would understand that the WAF 120 and the machine learning model 130 may each be implemented as multiple instances or as a multi-stage structure.

[0033] FIG. 2A illustrates a DT that may be used in a RF according to one or more embodiments. In general, a DT may refer to a structure that receives an input, tests an attribute of the input at each node of the DT, branches from each node based on the test outcome, and makes a decision with regard to the classification of the input after testing for some or all of the attributes. As shown in FIG. 2A, the DT 233 receives network traffic 201 as an input from the WAF. As described above, the received network traffic 201 may be in the form of individual data items each corresponding to a separate request to access the web application.

[0034] At each node (represented as a box in FIG. 2A) of the DT 233, a test is made to determine whether the input network traffic 201 has an attribute. In particular, each attribute may correspond to a type of command signature which has been described earlier in this disclosure. Based on the test outcome (e.g., YES or NO), each node divides into two branches until the input network traffic 201 has been tested for all attributes on a decision path. After the input network traffic 201 has been tested for all attributes on the decision path, the DT 233 outputs a vote 235 as to which class the network traffic 201 falls in. This vote 235 may represent a decision made by the DT 233 as to whether the network traffic 201 includes a command signature, and if so, the type of the command signature.
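A minimal Python sketch of such a decision path is shown below; the attribute names are illustrative assumptions, not the actual attributes used in FIG. 2A.

def decision_tree_vote(item):
    """Cast one tree's vote for a single reformatted traffic item (illustrative attributes)."""
    if item.get("port_scan_suspicious"):           # root node test
        if item.get("has_command_shell"):
            return "command_signature:command_shell"
        return "command_signature:scan"
    if item.get("has_reverse_http_shell"):
        return "command_signature:reverse_http_shell"
    return "normal"                                # no command signature detected on this path

vote = decision_tree_vote({"port_scan_suspicious": True, "has_command_shell": True})
# vote == "command_signature:command_shell"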

[0035] FIG. 2B illustrates a RF that is used in a machine learning model according to one or more embodiments. In FIG. 2B, the machine learning model 230 includes a RF 231. The RF 231 includes a plurality of DTs 233 that may be the same as the DT 233 illustrated in FIG. 2A.

[0036] In FIG. 2B, the network traffic 201 is input to each of the DTs 233. Based on the votes of the DTs 233, the RF 231 outputs a decision 237 on whether the network traffic 201 includes a command signature, and if so, the type of the command signature. In response to determining that the network traffic 201 includes a command signature, the machine learning model 230 generates a notification 239 such as an alert, a flag, a report, an email, or a warning. The notification 239 may then be analyzed by security personnel or an automated mechanism such as artificial intelligence to determine whether the network traffic 201 poses any C&C threat.

[0037] In some embodiments, in response to determining that the network traffic 201 includes a command signature, the machine learning model 230 may assign an identifier to the notification. The identifier may identify one or more of the type of the command signature, the external server that sends or receives the network traffic 201, and the web application targeted by the network traffic 201.
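The following Python sketch illustrates, under the same assumptions as the earlier examples, how the votes of the DTs 233 might be tallied and how a notification 239 carrying such an identifier could be formed; the notification fields are hypothetical.

from collections import Counter

def classify_and_notify(item, trees):
    """Aggregate the trees' votes; return a notification only if a command signature wins."""
    votes = Counter(tree(item) for tree in trees)   # each DT casts one vote
    decision, _ = votes.most_common(1)[0]           # majority vote becomes the RF decision
    if decision == "normal":
        return None                                 # no command signature, no notification
    return {
        "identifier": {                             # illustrative identifier contents
            "signature_type": decision,
            "external_server": item.get("source_ip"),
            "web_application": item.get("web_application"),
        },
        "message": "possible C&C activity detected",
    }

Here, trees would be a list of voting functions such as decision_tree_vote above; security personnel or an automated mechanism would then act on the returned notification.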

[0038] The machine learning model may be built and trained based on sample inputs and outputs. This type of machine learning is often referred to as supervised machine learning. According to one or more embodiments of the present disclosure, to build and train the machine learning model, the WAF provides the machine learning model with an original dataset that has a plurality of network traffic samples, where the classification of each network traffic sample (i.e., whether the network traffic sample has a command signature) is predetermined. Based on the original dataset, the machine learning model creates one or more bootstrapped datasets.

[0039] To further explain, FIGs. 3A and 3B provide an example of an original dataset and a bootstrapped dataset, respectively. As can be seen in FIG. 3A, the original dataset has six network traffic samples, numbered 1 to 6. Each network traffic sample is characterized using a plurality of attributes and the values of these attributes are shown in a corresponding column. In the datasets shown in FIGs. 3A and 3B, these attributes are: whether the port scanning activity included in the network traffic sample is legitimate or suspicious, and whether the network traffic sample includes any of: data exfiltration commands; memory manipulation commands; crypto vulnerability exploit commands; mining commands; command shell commands; reverse HTTP shell; command shell capture; and session manipulation. For example, the row corresponding to network traffic sample #1 shows that the port scanning activity of the sample is legitimate, and that the sample has data exfiltration commands, memory manipulation commands, mining commands, command shell commands, reverse HTTP shell, command shell capture, and session manipulation, but does not have crypto vulnerability exploit commands. According to the same row, the classification of network traffic sample #1 is predetermined as containing a command signature.

[0040] Based on the original dataset in FIG. 3A, the bootstrapped dataset in FIG. 3B is created. Using a bootstrapping algorithm, the network traffic samples in the bootstrapped dataset may be randomly selected from the original dataset, and one or more selected network traffic samples may be repeated in the bootstrapped dataset. In the bootstrapped dataset in FIG. 3B, network traffic samples #1, #2, and #4 are selected once from the original dataset, and network traffic sample #5 is selected twice from the original dataset. Because each network traffic sample remains unchanged after being selected to form the bootstrapped dataset, the attribute values and the classification of each network traffic sample remain unchanged.
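A simple bootstrap sampler consistent with this description might look as follows in Python; the sample labels are placeholders, and the exact contents of any one bootstrapped dataset depend on the random draw.

import random

def bootstrap(original_dataset, size=None, seed=None):
    """Draw a bootstrapped dataset by sampling the original dataset with replacement."""
    rng = random.Random(seed)
    size = len(original_dataset) if size is None else size
    return [rng.choice(original_dataset) for _ in range(size)]

original = ["sample_1", "sample_2", "sample_3", "sample_4", "sample_5", "sample_6"]
bootstrapped = bootstrap(original, size=5, seed=42)
# a list of five samples drawn with replacement, so repeats such as the
# duplicated sample #5 in FIG. 3B are possible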

[0041] It should be noted that the original dataset may include both malicious network traffic samples, i.e., those containing command signatures, and normal network traffic samples, i.e., those containing no command signature. As such, the machine learning model is trained to distinguish malicious network traffic from normal network traffic samples. In the example given in FIG. 3A, network traffic sample #6 is a normal network traffic sample.

[0042] While the original dataset in FIG. 3A has only six network traffic samples, the size of the original dataset is not limited to six. Similarly, while the bootstrapped dataset in FIG. 3B has only five network traffic samples, the size of the bootstrapped dataset is not limited to five. Further, the size of the bootstrapped dataset may be greater than, equal to, or smaller than the size of the original dataset.

[0043] The machine learning model learns from the network traffic samples in the bootstrapped dataset based on their attributes and predetermined classifications, and generates a DT. This process is explained with reference to FIGs. 4A-4C.

[0044] FIG. 4A shows an example of a bootstrapped dataset with two network traffic samples, numbered 7 and 8. For simplicity, the corresponding original dataset is not shown. In the bootstrapped dataset of FIG. 4A, each network traffic sample has four attributes: whether the port scanning activity included in the network traffic sample is legitimate or suspicious; whether the IP address of the source of the network traffic sample is legitimate or suspicious; whether the network traffic sample intends to access a restricted sub-network; and whether the network traffic sample includes a command shell. The values of each attribute of network traffic samples 7 and 8 are shown in a corresponding column. As can be seen in FIG. 4A, network traffic sample 7 is a malicious network traffic sample while network traffic sample 8 is a normal network traffic sample.

[0045] A number of algorithms are known in the art to generate a DT based on a given bootstrapped dataset. FIGs. 4B and 4C show two different DTs that both correspond to the bootstrapped dataset in FIG. 4A. The two DTs both start from the same “root node” (the node on the very top) that determines whether the port scanning activity is legitimate or suspicious, but have different decision paths growing from their respective root nodes. As can be seen in the figures, three out of the four attributes are tested in the DT of FIG. 4B: legitimacy of port scanning activity; command shell; and legitimacy of source IP. Similarly, three out of the four attributes are tested in the DT of FIG. 4C: legitimacy of port scanning activity; command shell; and access to restricted sub-network.
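As a concrete, hedged illustration, the Python snippet below fits a single DT to two samples encoded with the four FIG. 4A attributes as binary values (1 = suspicious/present, 0 = legitimate/absent). It assumes the scikit-learn library, which the patent does not name, and the encoded attribute values are illustrative since FIG. 4A is not reproduced here; the learned splits need not match FIG. 4B or FIG. 4C.

from sklearn.tree import DecisionTreeClassifier

# Columns: [port scan suspicious, source IP suspicious,
#           accesses restricted sub-network, contains command shell]
X = [
    [1, 1, 1, 1],   # network traffic sample 7 (malicious) -- illustrative values
    [0, 0, 0, 0],   # network traffic sample 8 (normal) -- illustrative values
]
y = ["command_signature", "normal"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.predict([[1, 0, 0, 1]]))   # this tree's vote for an unseen traffic item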

[0046] When fed network traffic of unknown classification, these DTs are capable of casting votes based on the attributes of the input network traffic to determine whether the input network traffic has a command signature. In a real network environment, where the network traffic is much more complex, it may be desirable to generate more DTs to improve voting accuracy. Also, the algorithms may be configured to allow the DTs to grow deeper such that more attributes are tested. A deeper DT generally leads to improved voting accuracy but at the cost of increased consumption of computation resources.

[0047] For a given original dataset, the bootstrapping algorithm may be randomly performed multiple times to create a plurality of bootstrapped datasets, and in turn generate a plurality of DTs to form a RF. In general, increasing the total number of bootstrapped datasets may increase the accuracy of command signature detection.
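If a library such as scikit-learn were used (an assumption, not something the patent specifies), the repeated bootstrapping and tree generation could be expressed directly: RandomForestClassifier draws one bootstrapped dataset per tree when bootstrap=True, so increasing n_estimators corresponds to adding bootstrapped datasets and DTs as described above.

from sklearn.ensemble import RandomForestClassifier

# Tiny illustrative dataset in the same binary encoding as the previous sketch.
X = [[1, 1, 1, 1], [0, 0, 0, 0], [1, 0, 1, 1], [0, 1, 0, 0]]
y = ["command_signature", "normal", "command_signature", "normal"]

forest = RandomForestClassifier(
    n_estimators=100,   # number of bootstrapped datasets / decision trees
    bootstrap=True,     # sample each tree's training data with replacement
    random_state=0,
)
forest.fit(X, y)
decision = forest.predict([[1, 0, 0, 1]])   # aggregate (majority) vote of the trees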

[0048] FIG. 5 is a flowchart that illustrates a method for detecting C&C according to one or more embodiments.

[0049] At step 510, a WAF obtains network traffic between the web application and a server outside the network.

[0050] At step 520, the WAF decrypts and reformats the network traffic.

[0051] At step 530, the network traffic is transmitted from the WAF to a machine learning model.

[0052] At step 540, the machine learning model determines whether the network traffic includes a command signature.

[0053] At step 550, in response to determining that the network traffic includes a command signature, the machine learning model assigns an identifier to the network traffic.

[0054] At step 560, the machine learning model generates a notification.

[0055] At step 570, it is determined whether the server is a C&C based on the notification.

[0056] It should be noted that the above steps may or may not be executed in the same order as they are described. For example, it is possible in some embodiments that step 550 is executed after step 560. Further, it should be noted that not all of the above steps are required in all embodiments. For example, it is possible that some embodiments do not have step 520 or step 550.
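For orientation, the following Python sketch wires the FIG. 5 steps together. The three injected callables (reformat, classify, review) are hypothetical placeholders standing in for the WAF, the machine learning model, and the final C&C determination; none of them is an API defined by the patent.

def detect_c2(raw_traffic, reformat, classify, review):
    """Run the FIG. 5 flow once for a piece of network traffic."""
    item = reformat(raw_traffic)            # steps 510-530: WAF obtains, decrypts/reformats, forwards
    decision = classify(item)               # step 540: model checks for a command signature
    if decision == "normal":
        return False                        # no command signature, no notification
    notification = {                        # steps 550-560: assign identifier, generate notification
        "identifier": {"signature_type": decision,
                       "server": item.get("source_ip")},
    }
    return review(notification)             # step 570: decide whether the server is a C&C

# Example wiring with trivial stand-ins:
flagged = detect_c2(
    "203.0.113.7 10.0.0.5 GET https://apps.example/admin",
    reformat=lambda raw: {"source_ip": raw.split()[0]},
    classify=lambda item: "command_signature:reverse_http_shell",
    review=lambda note: True,
)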

[0057] FIG. 6 is a flowchart that illustrates steps for using a WAF and a machine learning model to determine whether network traffic includes a command signature according to one or more embodiments. The steps in FIG. 6 may be used to perform step 540 in FIG. 5. Further, the steps in FIG. 6 may correspond to the training and decision making operations of the machine learning model 230 described with reference to FIGs. 2A and 2B.

[0058] At step 610, the WAF obtains an original dataset that includes a plurality of network traffic samples.

[0059] At step 620, the machine learning model generates a plurality of bootstrapped datasets from the original dataset, each bootstrapped dataset corresponding to one DT of a RF of the machine learning model.

[0060] At step 630, each DT outputs a vote based on the network traffic.

[0061] At step 640, the machine learning model determines, based on the votes of the DTs, whether the network traffic includes a command signature.

[0062] Some components of one or more embodiments may be implemented on a computing system. Any combination of mobile, desktop, server, router, switch, embedded device, or other types of hardware may be used. For example, as shown in FIG. 7A, the computing system 700 may include one or more computer processors 702, non-persistent storage 704 (e.g., volatile memory, such as random access memory (RAM), cache memory), persistent storage 706 (e.g., a hard disk, an optical drive such as a compact disk (CD) drive or digital versatile disk (DVD) drive, a flash memory, etc.), a communication interface 712 (e.g., Bluetooth interface, infrared interface, network interface, optical interface, etc.), and numerous other elements and functionalities.

[0063] The computer processor(s) 702 may be an integrated circuit for processing instructions. For example, the computer processor(s) may be one or more cores or micro-cores of a processor. The computing system 700 may also include one or more input devices 710, such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device.

[0064] The communication interface 712 may include an integrated circuit for connecting the computing system 700 to a network (not shown) (e.g., a LAN, a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device.

[0065] Further, the computing system 700 may include one or more output devices 708, such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device. One or more of the output devices may be the same or different from the input device(s). The input and output device(s) may be locally or remotely connected to the computer processor(s) 702, non-persistent storage 704, and persistent storage 706. Many different types of computing systems exist, and the aforementioned input and output device(s) may take other forms.

[0066] The computing system 700 in FIG. 7A may be connected to or be a part of a network. For example, as shown in FIG. 7B, the network 720 may include multiple nodes (e.g., node X 722, node Y 724). Each node may correspond to a computing system, such as the computing system shown in FIG. 7A, or a group of nodes combined may correspond to the computing system shown in FIG. 7A. By way of an example, embodiments of the disclosure may be implemented on a node of a distributed system that is connected to other nodes. By way of another example, embodiments of the disclosure may be implemented on a distributed computing system having multiple nodes, where each portion of the disclosure may be located on a different node within the distributed computing system. Further, one or more elements of the aforementioned computing system 700 may be located at a remote location and connected to the other elements over a network.

[0067] The nodes (e.g., node X 722, node Y 724) in the network 720 may be configured to provide services for a client device 726. For example, the nodes may be part of a cloud computing system. The nodes may include functionality to receive requests from the client device 726 and transmit responses to the client device 726. The client device 726 may be a computing system, such as the computing system shown in FIG. 7A. Further, the client device 726 may include and/or perform all or a portion of one or more embodiments of the disclosure.

[0068] The computing system or group of computing systems described in FIGs. 7A and 7B may include functionality to perform a variety of operations disclosed herein. For example, the computing system(s) may perform communication between processes on the same or different systems. A variety of mechanisms, employing some form of active or passive communication, may facilitate the exchange of data between processes on the same device. Examples representative of these inter-process communications include, but are not limited to, the implementation of a file, a signal, a socket, a message queue, a pipeline, a semaphore, shared memory, message passing, and a memory-mapped file. Further details pertaining to a couple of these non-limiting examples are provided below.

[0069] Rather than or in addition to sharing data between processes, the computing system performing one or more embodiments of the disclosure may include functionality to receive data from a user. For example, in one or more embodiments, a user may submit data via a graphical user interface (GUI) on the user device. Data may be submitted via the graphical user interface by a user selecting one or more graphical user interface widgets or inserting text and other data into graphical user interface widgets using a touchpad, a keyboard, a mouse, or any other input device. In response to selecting a particular item, information regarding the particular item may be obtained from persistent or non-persistent storage by the computer processor. Upon selection of the item by the user, the contents of the obtained data regarding the particular item may be displayed on the user device in response to the user’s selection.

[0070] Once data is obtained, such as by using techniques described above or from storage, the computing system, in performing one or more embodiments of the disclosure, may extract one or more data items from the obtained data. For example, the extraction may be performed as follows by the computing system 700 in FIG. 7A. First, the organizing pattern (e.g., grammar, schema, layout) of the data is determined, which may be based on one or more of the following: position (e.g., bit or column position, Nth token in a data stream, etc.), attribute (where the attribute is associated with one or more values), or a hierarchical/tree structure (consisting of layers of nodes at different levels of detail, such as in nested packet headers or nested document sections). Then, the raw, unprocessed stream of data symbols is parsed, in the context of the organizing pattern, into a stream (or layered structure) of tokens (where each token may have an associated token “type”).

[0071] Next, extraction criteria are used to extract one or more data items from the token stream or structure, where the extraction criteria are processed according to the organizing pattern to extract one or more tokens (or nodes from a layered structure). For position-based data, the token(s) at the position(s) identified by the extraction criteria are extracted. For attribute/value-based data, the token(s) and/or node(s) associated with the attribute(s) satisfying the extraction criteria are extracted. For hierarchical/layered data, the token(s) associated with the node(s) matching the extraction criteria are extracted. The extraction criteria may be as simple as an identifier string or may be a query presented to a structured data repository (where the data repository may be organized according to a database schema or data format, such as XML).

[0072] The above description of functions presents only a few examples of functions performed by the computing system of FIG. 7A and the nodes and/or client device in FIG. 7B. Other functions may be performed using one or more embodiments of the disclosure.

[0073] While the disclosure has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the disclosure as disclosed herein. Accordingly, the scope of the disclosure should be limited only by the attached claims.