
Title:
AUTOMATIC TRIAGING OF NETWORK DATA LOSS PREVENTION INCIDENT EVENTS
Document Type and Number:
WIPO Patent Application WO/2021/071696
Kind Code:
A1
Abstract:
Automatically triaging network events such as data loss prevention (DLP) incidents is disclosed. A system can automatically triage or classify an incident using a prediction model. The prediction model can determine the classification based on similar incidents that were previously classified. Similar incidents are those incidents having profiles that match a profile of the incident. The profile can include one or more attributes that are representative of an incident. The system can arrive at a specific classification for the incident based on a classification of the similar incidents if the similar incidents satisfy one or more conditions.

Inventors:
ARMSTRONG KYLE (US)
BUTLER SKYLER (US)
Application Number:
PCT/US2020/053216
Publication Date:
April 15, 2021
Filing Date:
September 29, 2020
Assignee:
INTELISECURE (US)
International Classes:
G06Q10/00; H04L29/06
Foreign References:
US20120150773A12012-06-14
US8832780B12014-09-09
US9654510B12017-05-16
Attorney, Agent or Firm:
BARUFKA, Jack et al. (US)
Claims:
WHAT IS CLAIMED IS:

1. A system for facilitating reduction of computational resource usage for network event classification, the system comprising: a computer system comprising one or more processors programmed with computer program instructions that, when executed, cause the computer system to: provide network event profiles and reference feedback to a neural network to train the neural network, the neural network (i) generating predictions based on the network event profiles, (ii) assessing the predictions against the reference feedback, and (iii) updating one or more parameters of the neural network based on the assessment of the predictions; collect network event data related to a network event that is related to a data item, the network event data including incident data that triggered the network event, the network event data not including the data item; generate a network event profile for the network event based on the incident data such that the network event profile includes (i) at least a portion of the incident data or a hash value related to the incident data and (ii) one or more attributes representative of the network event, the hash value being obtained based on a hashing of at least a portion of the incident data; and provide the network event profile to the neural network to determine a classification for the network event.

2. The system of claim 1, wherein the computer system is caused to: determine the classification of the network event as a first classification based on: a quantity of network events having reference profiles that match the network event profile exceeding a first threshold, and a confidence interval of the first classification exceeding a second threshold, wherein the first classification is one of one or more classifications of the network events having reference profiles that match the network event profile.

3. The system of claim 1, wherein the computer system is caused to: in response to determining that a classification mode is “audit” mode, determine the classification as a second classification, wherein the network event data includes a first classification indicating a classification of the network event determined by an entity other than the computer system.

4. A method for automatic classification of a network event, the method comprising: collecting, by a processor, network event data related to a network event that is related to a data item, the network event data including incident data associated with the network event; hashing, by the processor, at least a portion of the incident data to generate a hash value related to the incident data; generating, by the processor, a network event profile for the network event such that the network event profile includes (i) the hash value and (ii) one or more attributes representative of the network event; and obtaining, by the processor, via a prediction model, a classification for the network event based on the network event profile, the prediction model being configured to generate the classification based on determining a match between the network event profile and reference profiles associated with network events, each of the reference profiles including a reference hash value and one or more reference attributes representative of the corresponding network event.

5. The method of claim 4, wherein obtaining the classification includes: determining the classification of the network event as one of one or more classifications of a set of network events having reference profiles that match the network event profile.

6. The method of claim 4, wherein obtaining the classification includes: determining the classification of the network event as different from one or more classifications of a set of network events having reference profiles that match the network event profile.

7. The method of claim 6, wherein determining the classification includes: determining the classification as a specified value, the specified value indicating a lack of data for classifying the network event.

8. The method of claim 4, wherein obtaining the classification includes: determining the classification of the network event as a first classification based on: a quantity of network events having reference profiles that match the network event profile exceeding a first threshold, and a confidence interval of the first classification exceeding a second threshold, wherein the first classification is one of one or more classifications of the network events having reference profiles that match the network event profile.

9. The method of claim 4, wherein obtaining the classification includes: determining the classification as a specified value, the specified value indicating a lack of data for classifying the network event based on a quantity of network events having reference profiles that match the network event profile being below a first threshold.

10. The method of claim 4, wherein obtaining the classification includes: determining the classification as a specified value, the specified value indicating a lack of confidence for classifying the network event based on a confidence interval of one or more classifications of network events having reference profiles that match the network event profile being below a second threshold.

11. The method of claim 4, wherein obtaining the classification includes: in response to determining a classification mode to be an “audit” mode: generating the classification as a second classification in the network event data, wherein the network event data includes a first classification indicating a classification of the network event determined by an entity other than the processor.

12. The method of claim 11, wherein the entity other than the processor includes a human user.

13. The method of claim 4, wherein the one or more attributes in the network event profile include (a) a user associated with the network event, (b) a user associated with a file on which the network event is performed, (c) a policy that is violated, (d) a version of the policy that is violated, (e) an application associated with the network event, or (f) a computing device associated with the network event.

14. The method of claim 4, wherein hashing at least the portion of the incident data includes: identifying a first portion of the incident data that triggered the network event, and hashing the incident data without the first portion to generate the hash value.

15. The method of claim 4 further comprising: generating a report showing information for a group of network events, the information including, for each network event from the group of network events: a first classification and a second classification of the corresponding network event, wherein the first classification indicates a classification determined by an entity other than the processor and the second classification indicates the classification determined by the processor.

16. The method of claim 15, wherein the report includes: a first subset of network events from the group of network events in which the first classification and the second classification indicate the same classification, and a second subset of network events from the group of network events in which the first classification and the second classification indicate different classifications.

17. A computer-readable medium storing instructions that, when executed by one or more processors, cause operations comprising: collecting network event data related to a network event that is related to a data item, the network event data including incident data associated with the network event; generating a network event profile for the network event based on a type of the network event, the network event profile including one or more attributes representative of the network event; and obtaining, via a prediction model, a classification for the network event based on the network event profile, the prediction model being configured to generate the classification based on a set of network events having reference profiles that match the network event profile, each of the reference profiles including one or more reference attributes representative of the corresponding network event.

18. The computer-readable medium of claim 17, wherein obtaining the classification includes: determining the classification of the network event as a first classification of the one or more classifications of the set of network events based on: a quantity of network events in the set of network events exceeding a first threshold, and a confidence interval of the first classification exceeding a second threshold.

19. The computer-readable medium of claim 17, wherein generating the network event profile includes: hashing at least a portion of incident data associated with the network event to generate a hash value, wherein the incident data triggered the network event, and adding the hash value to the network event profile.

20. The computer-readable medium of claim 19, the operations further comprising: determining the set of network events having reference profiles that match the network event profile, wherein each of the reference profiles includes a reference hash value that matches the hash value.

Description:
AUTOMATIC TRIAGING OF NETWORK DATA LOSS PREVENTION INCIDENT EVENTS

CROSS REFERENCE TO RELATED APPLICATIONS

[001] This application claims priority to U.S. Application No. 16/596,406, filed October 8, 2019, the subject matter of which is incorporated herein by reference in its entirety.

FIELD OF THE INVENTION

[002] The invention relates to data loss prevention, and more specifically, to automatically triaging data loss prevention incidents.

BACKGROUND OF THE INVENTION

[003] Organizations that leverage data loss prevention (DLP) products are well aware of the hidden costs behind the triage of incidents, which include license purchases, setup costs, and consulting fees. The proper use of DLP requires that an organization assign an individual to monitor incoming events. This individual must dissect the contents of each event, attempting to conclude whether the included information demonstrates actual data loss (e.g., a data leak and/or data breach). Failure to perform an accurate analysis of each incident may result in substantial financial damages for an organization. An individual may take a significant amount of time, e.g., five minutes, to analyze and triage an incident. This can be a time-consuming and tedious process, especially at businesses where incident backlogs run into the thousands.

[004] Given the scale of incidents, organizations may have to employ a significant number of staff to triage the incidents, which is not only a significant expense for an organization but can also be inefficient, as it can increase the consumption of computing resources. Further, involving such a large number of humans can significantly increase the potential for human error in triaging the incidents, which can decrease the accuracy of triaging. Accordingly, in such cases, time, energy, and other resources (whether machine or human) are needlessly expended on the computer systems. These and other drawbacks exist. Thus, there is a need to triage the incidents more efficiently and more accurately.

SUMMARY OF THE INVENTION

[005] Aspects of the invention relate to methods, apparatuses, and/or systems for automatically triaging network events such as data loss prevention (DLP) incidents. In some embodiments, a DLP incident is a notification presented to a user, such as an administrator, detailing certain behaviors of interest, such as policy violations. For example, DLP applications can track user actions or other events in a computer network that share sensitive information such as social security numbers or credit card numbers. If such a number were to be sent in an email to an unauthorized user, the DLP application can determine that a policy is likely violated and create a DLP incident (“incident”) accordingly.

[006] In some embodiments, a system can automatically triage or classify an incident based on similar incidents that were previously classified. The system can arrive at a specific classification for the incident based on a classification of the similar incidents that were previously classified. In some embodiments, a classification of the incident is representative of a certain behavioral trait, such as a specific policy violation, an indeterminate behavior, a false positive, etc. The system can determine the similar incidents by selecting those incidents that have profiles matching a profile of the incident. The profile can include a username of the user associated with the incident, a policy that is violated by the incident, a policy match count that indicates a number of policies violated by the incident, a workstation where the incident occurred, a website where the incident occurred, a filename where the incident occurred, a file destination, policy version history, application used (in the movement of sensitive information), or the actual afflicting content.

[007] The classification for the incident can be determined in various ways, for example, based on a prediction model. In some embodiments, training information may be provided as input to a prediction model to generate predictions related to the classification of the incident. As an example, the training information may indicate a profile of an incident and a classification of the incident for multiple incidents. In some embodiments, classification result information may be provided as reference feedback to the prediction model. As an example, the classification result information may be related to a performance of the classification process (e.g., whether the predicted classification is correct or incorrect). The prediction model may update one or more portions of the prediction model based on the predictions and the classification result information. Subsequent to the updating of the prediction model, the prediction model may be used to process the incident profile to determine the classification of the incident.

[008] Various other aspects, features, and advantages of the invention will be apparent through the detailed description of the invention and the drawings attached hereto. It is also to be understood that both the foregoing general description and the following detailed description are examples and not restrictive of the scope of the invention. As used in the specification and in the claims, the singular forms of “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. In addition, as used in the specification and the claims, the term “or” means “and/or” unless the context clearly dictates otherwise.

BRIEF DESCRIPTION OF THE DRAWINGS

[009] FIG. 1A shows a system for automatically triaging a network event, in accordance with one or more embodiments.

[010] FIG. 1B is a block diagram showing a computer system integrated with a data loss prevention system, consistent with various embodiments.

[011] FIG. 2 shows a machine learning model configured to facilitate automatic triaging of a network event, in accordance with one or more embodiments.

[012] FIG. 3A is a screenshot showing details of an incident, consistent with various embodiments.

[013] FIG. 3B shows a block diagram for generating a profile of the incident, consistent with various embodiments.

[014] FIG. 3C is a screenshot of an incident with the classification, consistent with various embodiments.

[015] FIG. 3D is a screenshot of a report showing incidents classified in audit mode, consistent with various embodiments.

[016] FIG. 4 shows a flowchart of a method of determining a classification of a network event, consistent with various embodiments.

[017] FIG. 5 shows a flowchart of a method of generating a network event profile, consistent with various embodiments.

[018] FIG. 6 shows a flowchart of a method of determining the classification of the network event, consistent with various embodiments.

[019] FIG. 7 shows a flowchart of a method of determining the classification of a network event via a prediction model, consistent with various embodiments.

[020] FIG. 8 shows a flowchart of a method of grouping redundant network events, consistent with various embodiments.

[021] FIG. 9 shows a flowchart of a method of grouping similar network events, consistent with various embodiments.

DETAILED DESCRIPTION OF THE INVENTION

[022] In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It will be appreciated, however, by those having skill in the art that the embodiments of the invention may be practiced without these specific details or with an equivalent arrangement. In other cases, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments of the invention.

[023] FIG. 1A shows a system 100 for automatically triaging a network event, in accordance with one or more embodiments. As shown in FIG. 1A, system 100 may include computer system 102, client device 104 (or client devices 104a-104n), or other components. Computer system 102 may include network event subsystem 112, triaging subsystem 114, report generation subsystem 116, model subsystem 118, feedback subsystem 120, or other components. Each client device 104 may include any type of mobile terminal, fixed terminal, or other device. By way of example, client device 104 may include a desktop computer, a notebook computer, a tablet computer, a smartphone, a wearable device, or other client device. Users may, for instance, utilize one or more client devices 104 to interact with one another, one or more servers, such as server 122, or other components of system 100. In some embodiments, the server 122 and the client devices 104a-104n can be part of a local area network (LAN) and the server 122 can provide access to an external network, e.g., the Internet, to the client devices 104a-104n. It should be noted that, while one or more operations are described herein as being performed by particular components of computer system 102, those operations may, in some embodiments, be performed by other components of computer system 102 or other components of system 100. As an example, while one or more operations are described herein as being performed by components of computer system 102, those operations may, in some embodiments, be performed by components of client device 104, server 122, or other machines (not illustrated). It should be noted that, although some embodiments are described herein with respect to machine learning models, other prediction models (e.g., statistical models or other analytics models) may be used in lieu of or in addition to machine learning models in other embodiments (e.g., a statistical model replacing a machine learning model and a non-statistical model replacing a non-machine-learning model in one or more embodiments).

[024] In some embodiments, system 100 triages, that is, classifies a network event. A network event can be related to any user activity in a computer network. For example, a network event can be a DLP incident (“incident”), which is representative of a user activity that likely violated a data access policy causing a possible data loss or data leak. The user activity can be related to one or more data items, e.g., files, emails, user interface such as web interface, etc. An incident can include one or more attributes (“network event data”) such as a username of the user associated with the incident, a policy that is violated by the incident, a policy match count, a workstation where the incident occurred, a website where the incident occurred, a filename where the incident occurred, a file destination, policy version history, application used (in the movement of sensitive information), or the actual afflicting content. As an example, triaging a network event includes determining a classification for the network event, which can be indicative of certain behavioral traits. In some embodiments, the classification can also indicate if the network event is a false positive, false negative, or if there is insufficient data for classification.

[025] When a network event is generated or received by computer system 102, system 100 determines the classification of the network event. In some embodiments, the determination of the classification can include system 100 determining a profile of the network event, determining previously classified similar network events, which are network events having profiles matching the profile of the network event, and determining the classification of the network event based on the classification of the similar network events. As an example, system 100 can determine a profile of the incident, which includes one or more attributes of the incident (mentioned above), determine previously classified incidents (e.g., from network event database 138) that have profiles matching the profile of the incident, and determine the classification of the incident as the classification of the previously classified similar incidents.

[026] In some embodiments, determining the classification of the incident can include system 100 determining whether the similar incidents satisfy certain conditions. As an example, system 100 may determine a specific classification associated with the previously classified similar incidents as the classification of the incident based on (a) a quantity of the similar incidents identified exceeding a first threshold and (b) a confidence interval of the specific classification of one or more classifications associated with the similar incidents exceeding a second threshold. If either condition, (a) or (b), is not satisfied, system 100 may determine the classification as “not enough data to classify.”
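By way of a non-limiting illustration (not part of the original application), the two conditions described above can be sketched in Python as follows; the function name, data shapes, and threshold values are hypothetical:

```python
from collections import Counter

def classify(similar_incidents, min_count=10, min_confidence=0.8):
    """Determine a classification from previously classified similar incidents.

    similar_incidents: incidents whose profiles match the incident at hand.
    min_count and min_confidence stand in for the first and second thresholds
    described above; the values here are illustrative.
    """
    # Condition (a): a sufficient quantity of similar incidents.
    if len(similar_incidents) <= min_count:
        return "not enough data to classify"

    # Condition (b): the leading classification is held with enough confidence.
    counts = Counter(incident["classification"] for incident in similar_incidents)
    top_class, top_count = counts.most_common(1)[0]
    if top_count / len(similar_incidents) <= min_confidence:
        return "not enough data to classify"

    return top_class
```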

[027] In some embodiments, an incident can have multiple status attributes indicating a classification of the incident. A first status attribute (referred to as “status” attribute) is indicative of the actual classification of the incident, which can be determined by the computer system 102 (e.g., as described above) or another entity, such as a human user. A second status attribute (referred to as “assuming status” attribute) is indicative of an “assumed” classification, which is determined by the computer system 102. The status attributes that are updated by computer system 102 can depend on a mode in which system 100 is operating. For example, in a first mode (referred to as “audit” mode), system 100 may update the “assuming status” attribute with the resulting classification and leave the status attribute with the original value. In a second mode (referred to as the “triage” mode), system 100 updates the status attribute with the resulting classification, and optionally, the “assuming status” attribute too. One of the benefits of running system 100 in the audit mode is that a user, e.g., an administrator who reviews the incidents, can compare the classification of an incident determined by a human user (e.g., as indicated in the status attribute) with the classification determined by computer system 102 (e.g., as indicated in the assuming status attribute) for further analysis. If the user determines that computer system 102 is correct in determining the classification, the user may update the status attribute of the incident with the classification indicated in the assuming status attribute. On the other hand, if the user determines that computer system 102 determined the classification incorrectly, the user may provide the feedback (e.g., an indication that computer system 102 is incorrect and the correct classification), which system 100 can use to improve the accuracy in determining the classification for subsequent incidents.
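A minimal sketch of the mode-dependent status updates described above (the attribute keys and mode names follow the description, but the code itself is illustrative, not from the application):

```python
def apply_classification(incident: dict, predicted: str, mode: str = "audit") -> dict:
    """Update the incident's status attributes according to the operating mode.

    In audit mode only the "assuming status" is recorded, leaving the status
    (e.g., one set by a human reviewer) untouched; in triage mode the status
    itself is overwritten.
    """
    if mode == "audit":
        incident["assuming_status"] = predicted  # "status" keeps its original value
    elif mode == "triage":
        incident["status"] = predicted
        incident["assuming_status"] = predicted  # optional, per the description
    return incident
```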

[028] In some embodiments, system 100 may train a prediction model to determine the classification of an incident. In some embodiments, system 100 may generate a profile of the incident and provide such information as input to a prediction model to generate predictions (e.g., related to classification of the incident). As an example, the profile may indicate a username of the user associated with the incident, a policy that is violated by the incident, a policy match count that indicates a number of policies violated by the incident, a workstation where the incident occurred, a website where the incident occurred, a filename where the incident occurred, a file destination, policy version history, application used (in the movement of sensitive information), or the actual afflicting content. In some embodiments, system 100 may provide classification result information as reference feedback to the prediction model, and the prediction model may update one or more portions of the prediction model based on the predictions and the classification result information. As an example, the classification result information can indicate a correct classification of the incident. In this way, for example, the prediction model may be trained or configured to generate more accurate predictions.

[029] As such, in some embodiments, subsequent to the updating of the prediction model, system 100 may use the prediction model to determine the classification of an incident. As an example, system 100 may obtain and provide information related to the profile of the incident to obtain one or more predictions from the prediction model. System 100 may use the predictions to determine the classification of the incident. In one use case, the prediction model may generate a prediction specifying the classification of the incident. In another use case, the prediction model may generate a prediction specifying a probability of the determined classification for the incident.

[030] In some embodiments, the prediction model may include one or more neural networks or other machine learning models. As an example, neural networks may be based on a large collection of neural units (or artificial neurons). Neural networks may loosely mimic the manner in which a biological brain works (e.g., via large clusters of biological neurons connected by axons). Each neural unit of a neural network may be connected with many other neural units of the neural network. Such connections can be enforcing or inhibitory in their effect on the activation state of connected neural units. In some embodiments, each individual neural unit may have a summation function which combines the values of all its inputs together. In some embodiments, each connection (or the neural unit itself) may have a threshold function such that the signal must surpass the threshold before it propagates to other neural units. These neural network systems may be self-learning and trained, rather than explicitly programmed, and can perform significantly better in certain areas of problem solving, as compared to traditional computer programs. In some embodiments, neural networks may include multiple layers (e.g., where a signal path traverses from front layers to back layers). In some embodiments, back propagation techniques may be utilized by the neural networks, where forward stimulation is used to reset weights on the “front” neural units. In some embodiments, stimulation and inhibition for neural networks may be more free-flowing, with connections interacting in a more chaotic and complex fashion.

[031] As an example, with respect to FIG. 2, machine learning model 202 may take inputs 204 and provide outputs 206. In one use case, outputs 206 may be fed back to machine learning model 202 as input to train machine learning model 202 (e.g., alone or in conjunction with user indications of the accuracy of outputs 206, labels associated with the inputs, or with other reference feedback information). In another use case, machine learning model 202 may update its configurations (e.g., weights, biases, or other parameters) based on its assessment of its prediction (e.g., outputs 206) and reference feedback information (e.g., user indication of accuracy, reference labels, or other information). In another use case, where machine learning model 202 is a neural network, connection weights may be adjusted to reconcile differences between the neural network’s prediction and the reference feedback. In a further use case, one or more neurons (or nodes) of the neural network may require that their respective errors are sent backward through the neural network to them to facilitate the update process (e.g., backpropagation of error). Updates to the connection weights may, for example, be reflective of the magnitude of error propagated backward after a forward pass has been completed. In this way, for example, the machine learning model 202 may be trained to generate better predictions.
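By way of a non-limiting illustration of the train-then-predict cycle described above, the following sketch uses scikit-learn's MLPClassifier as a stand-in prediction model; the feature encoding and labels are hypothetical, not the application's actual model:

```python
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: each row is an incident profile already encoded
# as a numeric feature vector (e.g., hashed or one-hot attribute values), and
# y_train holds the reviewed (reference) classification for each incident.
X_train = [[0, 1, 3], [1, 0, 2], [0, 1, 3], [1, 1, 0]]
y_train = ["false positive", "policy violation", "false positive", "policy violation"]

# A small feed-forward neural network; fit() runs forward passes, assesses
# predictions against the reference labels, and backpropagates the error to
# update connection weights, as described in paragraphs [030]-[031].
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# For a new incident profile, the model can emit both a classification and
# class probabilities that may back a confidence check before acceptance.
new_profile = [[0, 1, 3]]
print(model.predict(new_profile), model.predict_proba(new_profile))
```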

[032] Subsystems 112-120

[033] In some embodiments, network event subsystem 112 facilitates management of network events, such as incidents. As described above, an incident is representative of a user activity that likely violated a data access policy causing a possible data loss, data breach or data leak. For example, if an organization of which the server 122 and client devices 104a-104n are a part defines a data access policy which considers sharing of social security numbers outside of the organization to be a data breach, then any movement of information having social security numbers by users associated with client devices 104a-104n, e.g., via email, file upload to a cloud service, or storing in a file in an unauthorized location, can potentially be considered a data access policy violation, and an incident can be generated for the same.

[034] FIG. 3A is a screenshot showing details of an incident 302, consistent with various embodiments. An incident 302 can include one or more attributes such as a policy 304 that is violated by the incident, a policy match count that indicates a number of policies violated by the incident, a type 308 of the incident, a username 310 of the user associated with the incident, a workstation 312 where the incident occurred/initiated, application 314 used (in the movement of sensitive information), a website 316 where the incident occurred, a filename where the incident occurred, a file owner 318, a file destination, policy version history, or the actual afflicting content 320. In the example of FIG. 3A, a “credit card numbers (data identifiers)” policy 304 is triggered upon a user sharing at a website 316 content having a number that resembles a credit card number, resulting in the generation of the incident 302. In one example, the policy 304 can be defined to consider a data identifier/number of format, “XXXX-XXXX-XXXX-XXXX,” as a credit card number, and sharing of such a number is a violation of the policy. Accordingly, when computer system 102 identifies content having such a number, an incident is generated. The incident 302 identifies the number resembling the credit card number as the afflicting content 320. In some embodiments, the afflicting content 320 can be part of an entire message/information the user shares at the website 316. For example, if the violation occurred based on a message having the text “Hello, my partner ID is 4290-0569-1795-7022 with regard to the stay at...,” the number that is captured as a credit card number, “4290-0569-1795-7022,” is the actual afflicting content 320 and the remainder of the content of the message, such as “Hello, my partner ID is, with regard to the stay at ...,” is considered non-afflicting content. Further, in some embodiments, there can be a number of policies, like the policy 304, within “credit card numbers.” An incident can violate one or more policies, and a policy match or violation count 354 indicates the number of policies violated by the incident. In FIG. 3A, the incident 302 matches one policy, policy 304.
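As a non-limiting illustration of separating afflicting from non-afflicting content, a simple pattern match over the “XXXX-XXXX-XXXX-XXXX” format mentioned above could look like the sketch below; a production DLP data identifier would be stricter (e.g., applying Luhn validation):

```python
import re

# Illustrative detector for the "XXXX-XXXX-XXXX-XXXX" data identifier format.
CARD_PATTERN = re.compile(r"\b\d{4}-\d{4}-\d{4}-\d{4}\b")

def split_content(message: str):
    """Separate the afflicting content (policy matches) from the
    non-afflicting remainder of the message."""
    afflicting = CARD_PATTERN.findall(message)
    non_afflicting = CARD_PATTERN.sub("", message)  # matched numbers removed
    return afflicting, non_afflicting

afflicting, remainder = split_content(
    "Hello, my partner ID is 4290-0569-1795-7022 with regard to the stay at..."
)
print(afflicting)  # ['4290-0569-1795-7022']
```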

[035] The attribute type 308 of the incident indicates that the policy violation occurred at, or is identified by, an endpoint server, such as server 122. The attribute username 310 indicates a name or user ID of the user who caused the violation of the policy 304. The attribute workstation 312 indicates a name or IP address of the machine (e.g., associated with the user) at which the violation of the policy 304 originated. The attribute application 314 indicates that the policy violation occurred in an application such as “Microsoft Internet Explorer.” The incident 302 also includes a status attribute 322, which indicates the classification of the incident 302. In some embodiments, the status attribute 322 is assigned a specific value, e.g., “new,” to indicate that the incident 302 is a new incident that is not yet classified either by computer system 102 or a human user. Note that an incident can have fewer, more, or different attributes than those shown in the screenshot of FIG. 3A.

[036] A classification of the incident is determined based on one or more attributes of the incident. In some embodiments, the actual attributes used in determining the classification of the incident depend on a type of the incident. FIG. 3B shows a block diagram for generating a profile of the incident, consistent with various embodiments. The incident 302 can be of multiple types. In some embodiments, the type 308 is representative of a location where the incident occurred/is identified. For example, a first type of incident, such as “Application” 380, can be representative of an incident that occurred in, or is identified by, a cloud-based application. A second type of incident, such as “Endpoint” 382, can be representative of an incident that occurred in, or is identified by, a server 122 or a client device 104 in the specified network. A third type of incident, such as “Discover” 384, can be representative of an incident that occurred in, or is identified by, a client device 104 (e.g., discovered by a scan of data-at-rest). A fourth type of incident, such as “Network” 386, can be representative of an incident that occurred in, or is identified by, a network-based application, e.g., email.

[037] For an incident of type “Application” 380 or “Discover” 384, attributes such as a policy 304 violated, a policy version 352, a policy match or violation count 354, or a file owner 318 can be used for determining the classification. For an incident of type “Endpoint” 382, attributes such as a policy 304 violated, a policy version 352, a policy match or violation count 354, a file owner 318, or an endpoint app/application 314 used can be used for determining the classification. In another example, for an incident of type “Network” 386, attributes such as a policy 304 violated, a policy version 352, a policy match or violation count 354, a message originator 356, or a sender/recipient of the message 358 can be used for determining the classification. The policy 304, as described above, is a policy that is violated and the policy version 352 is a version of the policy 304 that is violated. The policy version 352 may or may not be considered for determining the similar incidents. In cases where the policy version 352 is considered, two incidents are determined to be similar if both their policy and policy version attributes match (provided other attributes in their respective profiles also match). In cases where the policy version 352 is not considered, two incidents are determined to be similar if their policy attributes match (provided other attributes in their respective profiles also match) regardless of the policy version 352. The policy match or violation count 354 attribute provides a count of the number of policies violated by the incident. The file owner 318 attribute provides information regarding a user who is the owner of a file or other data item which triggered the violation. The message originator 356 attribute provides information regarding an originator of a message that violates the policy 304. The sender/recipient of the message 358 attribute provides information regarding a user who sends or receives a message that violated the policy 304.

[038] Network event subsystem 112 generates a profile of the incident based on the type of the incident. Network event subsystem 112 can determine the type 308 of the incident 302 from the incident 302, obtain attribute information for the type 308 from the network event database 138, and generate a profile for the incident 302 accordingly. The profile can include one or more attributes that are representative of a specified type of the incident (e.g., described above), and can be used in determining the classification of the incident.
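The type-dependent profile generation of paragraphs [037]-[038] can be sketched as a lookup from incident type to the attributes used; the key and attribute names below are hypothetical stand-ins for fields stored in network event database 138:

```python
# Illustrative mapping from incident type to the attributes used in its profile.
PROFILE_ATTRIBUTES = {
    "Application": ["policy", "policy_version", "policy_match_count", "file_owner"],
    "Discover": ["policy", "policy_version", "policy_match_count", "file_owner"],
    "Endpoint": ["policy", "policy_version", "policy_match_count", "file_owner", "application"],
    "Network": ["policy", "policy_version", "policy_match_count", "message_originator", "sender_recipient"],
}

def generate_profile(incident: dict) -> dict:
    """Build a network event profile from the attributes relevant to the
    incident's type (e.g., "Endpoint" incidents also include the application)."""
    return {name: incident.get(name) for name in PROFILE_ATTRIBUTES[incident["type"]]}
```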

[039] In some embodiments, network event subsystem 112 can also include the non-afflicting content of the incident (also referred to as “incident data”) in the profile. In some embodiments, including the non-afflicting content in the profile enables the system to determine as similar only those incidents whose attributes as well as non-afflicting content match, ignoring incidents whose non-afflicting content differs even though the other attributes match.

[040] In some embodiments, network event subsystem 112 can also include variations of the non-afflicting content in the profile. For example, if the non-afflicting content is “Hello, my, partner ID is, with regard to the stay at ...,” some variations of the same could be “Hi, my, partner ID is, with regard to the stay at ...,” “My, partner ID is, with regard to the stay at ...,” “My, partner ID is, for the stay at ...,” etc. By including such variations, those incidents having (a) attributes that match the attributes of the profile, and (b) non-afflicting content that matches the non-afflicting content or variations of the non-afflicting content in the profile can be determined as similar incidents. In some embodiments, network event subsystem 112 can determine the variations of the non-afflicting content in many ways, e.g., using natural language processing (NLP).
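As a crude, illustrative stand-in for the NLP-based variation matching mentioned above (the application names NLP generally; the normalization here is an assumption), non-afflicting content could be normalized before comparison so near-identical phrasings compare equal:

```python
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so near-identical
    phrasings of non-afflicting content compare equal."""
    table = str.maketrans("", "", string.punctuation)
    return " ".join(text.lower().translate(table).split())

assert normalize("Hello, my partner ID is,") == normalize("hello my partner ID is")
```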

[041] Further, in some embodiments, network event subsystem 112 can include a hash value of the non-afflicting content, rather than the non-afflicting content, in the profile. Network event subsystem 112 can generate the hash value of the non-afflicting content by hashing the non-afflicting content using a hash function, such as a secure hash algorithm (SHA). In some embodiments, using a hash value rather than the actual content can improve the speed, that is, reduce the amount of time consumed, in comparing the profile with other profiles to determine the classification.
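A minimal sketch of the content hashing described above, using SHA-256 from Python's standard library (the application names the SHA family generally; the specific function here is an assumption):

```python
import hashlib

def profile_hash(non_afflicting_content: str) -> str:
    """Hash the non-afflicting content so profiles can be compared by a
    fixed-length digest instead of the full content."""
    return hashlib.sha256(non_afflicting_content.encode("utf-8")).hexdigest()

# Two profiles match on content when their digests match.
print(profile_hash("Hello, my partner ID is, with regard to the stay at"))
```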

[042] Triaging subsystem 114 facilitates determination of the classification for a specified incident. Triaging subsystem 114 can determine the classification in many ways, e.g., using a statistical or rule-based model. In the rule-based model, triaging subsystem 114 processes the profile information of the specified incident (e.g., generated by network event subsystem 112 as described above) to determine similar incidents that were previously classified. For example, triaging subsystem 114 may search a database, such as network event database 138, to find those previously classified events that have a profile matching the profile of the specified incident. Triaging subsystem 114 determines if a quantity of the similar incidents exceeds a first threshold. If the quantity does not exceed the first threshold, triaging subsystem 114 may classify the specified incident in a “not enough data” category indicating that there is not enough data for computer system 102 to classify the specified incident. On the other hand, if the quantity exceeds the first threshold, triaging subsystem 114 determines if a confidence interval associated with any classification of one or more classifications of the similar incidents exceeds a second threshold. If the confidence interval does not exceed the second threshold, triaging subsystem 114 may classify the specified incident in a “not enough data” category indicating that there is not enough data for computer system 102 to classify the specified incident. On the other hand, if the confidence interval exceeds the second threshold, triaging subsystem 114 determines the corresponding classification of the similar incidents as the classification for the specified incident.

[043] The thresholds can be user defined, e.g., by an administrator associated with the server 122. For example, the first threshold can be “10,” which indicates that at least “11” previously classified similar incidents are to be considered by triaging subsystem 114 to determine the classification of the specified incident based on the similar incidents. Similarly, the second threshold associated with the confidence interval can be “80%” which indicates that at least “81 out of 100” previously classified similar incidents have to have a particular classification for the particular classification to be considered by triaging subsystem 114 as the classification for the specified incident.

[044] Triaging subsystem 114 can consider two incidents similar when the profiles of both incidents match. The profiles of two incidents match when all the attributes of the profiles match. In some embodiments, the profiles of two incidents match when a specified set of attributes or a specified number of attributes of the profiles match. The matching criterion for the profiles can be user defined, e.g., by an administrator associated with the server 122.
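The matching criterion described above might be sketched as follows; the required parameter stands in for the administrator-defined criterion and is hypothetical:

```python
def profiles_match(profile_a: dict, profile_b: dict, required=None) -> bool:
    """Two profiles match when all attributes agree or, if a matching
    criterion is configured, when the specified subset of attributes agrees."""
    keys = required if required is not None else set(profile_a) | set(profile_b)
    return all(profile_a.get(k) == profile_b.get(k) for k in keys)

p1 = {"policy": "credit card numbers", "policy_version": 2, "file_owner": "alice"}
p2 = {"policy": "credit card numbers", "policy_version": 2, "file_owner": "bob"}
print(profiles_match(p1, p2))                                         # False
print(profiles_match(p1, p2, required={"policy", "policy_version"}))  # True
```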

[045] As another example, triaging subsystem 114 can be implemented using artificial intelligence techniques, such as a prediction model, to determine the classification of a specified incident. In some embodiments, information related to profiles of incidents may be obtained and provided to the prediction model as training data to configure or train the prediction model. Such information may be stored by computer system 102 in a storage system, e.g., training database 134. In some embodiments, model subsystem 118 may obtain the training data from the training database 134 and provide such information as input to a prediction model to generate predictions. Feedback subsystem 120 may provide classification result information as reference feedback to the prediction model, and the prediction model may update its configurations (e.g., weights, biases, or other parameters) based on the predictions and the classification result information. In some embodiments, feedback subsystem 120 may provide the classification result information as reference feedback to the prediction model to cause the prediction model to assess its predictions against the classification result information. As an example, the prediction model may update its configurations (e.g., weights, biases, or other parameters) based on its assessment of the predictions. As an example, the predictions generated by the prediction model (e.g., based on the profiles of incidents) may include predictions related to determining the classification of the incidents, or other predictions. The classification result information may include information related to a performance of the classification process (e.g., whether the predicted classification is correct or incorrect), or other information related to the classification.

[046] In some embodiments, subsequent to the updating of the prediction model, the prediction model may be used to determine a classification for a specified incident based on the profile of the specified incident. As an example, information related to the profile of the specified incident may be obtained and provided to the prediction model to obtain one or more predictions from the prediction model. The predictions obtained from the prediction model may be used to determine the classification for the specified incident, determine whether the specified incident satisfies one or more criteria related to a particular classification, or generate other determinations. As an example, the predictions may include a prediction specifying that the specified incident belongs to a first classification, a prediction specifying that the specified incident cannot be classified (e.g., for lack of data), a prediction specifying a probability of a particular classification (e.g., “X% likelihood of the specified incident being a business process violation”), or other predictions.

[047] In some embodiments, the prediction model may be configured or trained to recognize profiles of incidents that have a high likelihood of being associated with a particular classification similar to the classification of prior incidents that have been classified (e.g., based on training on prior information related to the profiles of incidents and their classification). As such, when profile related information of the specified incident (provided to the prediction model) matches such profiles, the prediction model will generate a prediction indicating that the specified incident should be classified under the particular classification.

[048] As can be appreciated from the foregoing description, triaging subsystem 114 can determine the classification for a specified incident in many ways. After determining the classification, triaging subsystem 114 can update the incident to indicate the classification. FIG. 3C is a screenshot of an incident with the classification, consistent with various embodiments. For example, as illustrated in FIG. 3C, triaging subsystem 114 can update the status attribute 322 or the assuming status attribute 334 to indicate the classification of the incident. As described above, whether either or both the status attributes are updated can depend on a mode in which computer system 102 is operating. For example, in the “audit” mode, triaging subsystem 114 may update “assuming status” attribute 334 with the determined classification and “assuming percentage” attribute 336 with a probability of the incident 302 belonging to the determined classification. Triaging subsystem 114 may not update status attribute 322 (that is, leave the status attribute 322 with the original value) in the audit mode. In the “triage” mode, triaging subsystem 114 can update status attribute 322 with the determined classification, and optionally, “assuming status” attribute 334 too. The screenshot of FIG. 3C also shows “chained incidents” attribute 330, which provides information regarding a set of incidents that correspond to the same network event chained together. Similarly, “redundant incidents” attribute 332 provides information regarding a set of incidents that share a set of attributes linked together. Additional details with respect to chained incidents and redundant incidents are described at least with reference to FIGS. 8 and 9. In some embodiments, audit mode enables a user to compare the classification determined by the computer system 102 as indicated in the assuming status attribute with the classification determined by another entity, e.g., human users, as indicated in the status attribute, as illustrated in FIG. 3D.

[049] FIG. 3D is a screenshot of a report 348 showing incidents classified in audit mode, consistent with various embodiments. Reporting subsystem 116 facilitates generation of various reports associated with classification of incidents. Report 348 shows classification of incidents in audit mode under four different classifications, e.g., “False Positive,” “Indeterminate Behavior,” “Not enough data,” and “Uncertain.” For example, row 338 shows that a total of “213” incidents are classified as “False positive” (as indicated in the assuming status attribute) by computer system 102. Row 340 shows that “204” of the “213” incidents are classified as “False positive” (as indicated in the status attribute) by an entity other than the computer system 102, such as a human user. Similarly, row 342 shows that “9” of the “213” incidents are classified as “Indeterminate Behavior” (as indicated in the status attribute) by a human user. Block 344 shows that a total of “282” incidents are classified as “Indeterminate Behavior” (as indicated in the assuming status attribute) by computer system 102, and of those “282,” “256,” “24” and “2” incidents are classified as “Indeterminate behavior,” “authorized business process,” and “false positive” (as indicated in the status attribute) by an entity other than the computer system 102, such as a human user. Similarly, block 346 shows that a total of “426” incidents are classified as “not enough data” (as indicated in the assuming status attribute) by computer system 102, and of those “426,” “128,” “297” and “1” incidents are classified as “false positive,” “Indeterminate behavior,” and “personal use” (as indicated in the status attribute) by an entity other than the computer system 102, such as a human user.
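A non-limiting sketch of how the audit-mode tallies of report 348 could be computed, grouping by the system's assumed classification and then by the human-assigned status; the attribute keys are illustrative:

```python
from collections import defaultdict

def audit_report(incidents):
    """Tally incidents by the system's assumed classification and then by the
    human-assigned status, mirroring the rows and blocks of report 348."""
    tallies = defaultdict(lambda: defaultdict(int))
    for incident in incidents:
        tallies[incident["assuming_status"]][incident["status"]] += 1
    return tallies

# tallies["False Positive"]["Indeterminate Behavior"] would, for example, give
# the count of incidents the system deemed false positives but a human
# reviewer classified as indeterminate behavior.
```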

[050] A user, e.g., an analyst or administrator associated with the server 122, can review the classification of the incidents, e.g., those that do not match the classification of the computer system 102, and provide feedback to the computer system 102 accordingly. For example, the human user can indicate, e.g., via user feedback in the report 348, whether the computer system 102 or the user is correct in classifying the incidents in row 342. If the computer system 102 is correct, the user can update the status attribute to the classification determined by the computer system 102, or if the user is correct, the computer system 102 can be configured to use the feedback to improve the accuracy in determining the classification for subsequent incidents.

[051] Reporting subsystem 116 can also facilitate managing redundant incidents. In some embodiments, reporting subsystem 116 can group similar incidents, e.g., incidents that share a set of attributes, such as a type, username, email address, and policy matches, into a single group. Further, in some embodiments, the same network event can trigger multiple policy violations and hence can result in multiple incidents being created. Reporting subsystem 116 can identify such incidents that correspond to the same network event and chain them together into a single group.

[052] In some embodiments, the incidents are grouped using a “group” attribute in the incident. The group attribute would have the same group identification (ID) for all the incidents that belong to a group. Such grouping can facilitate easy review of incidents by a user and allow for efficient mass-classification of incidents. In some embodiments, if an incident is classified into a specified classification, then all other incidents in the group can be automatically classified into the specified classification, as in the sketch following this section.

[053] FIG. 1B is a block diagram showing the computer system integrated with a DLP system 152, consistent with various embodiments. Incidents can be generated by network event subsystem 112 or a system other than network event subsystem 112, e.g., a DLP system 152 that monitors the user activity in association with the server 122 and the client devices 104a-104n for any data access policy violations. Incidents can be stored in and accessed from network event database 138. DLP system 152, such as the one provided by Symantec of Mountain View, California, can discover, monitor, and protect sensitive data for an organization, e.g., by monitoring the server 122, client devices 104a-104n, and other entities associated with a network of the organization. DLP system 152 provides data control across a broad range of data loss channels: cloud apps, endpoints, data repositories, and email and web communications. DLP system 152 can facilitate managing data loss policies, such as policy 304, and generate incidents in the event of any policy violations. Computer system 102 can be integrated with DLP system 152 (via hardware or software) to process the incidents for determining the classifications. In some embodiments, DLP system 152 can issue a command to or trigger computer system 102 to process the incidents for determining the classifications, grouping redundant incidents, or chaining related incidents.
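Referring back to the grouping of paragraphs [051]-[052], a non-limiting sketch of assigning group IDs by shared attributes and mass-classifying a group; the key and attribute names are illustrative:

```python
from itertools import count

def assign_groups(incidents, shared_keys=("type", "username", "email", "policy_matches")):
    """Assign the same group ID to incidents sharing the listed attributes so
    a whole group can be reviewed, and classified, at once."""
    group_ids, next_id = {}, count(1)
    for incident in incidents:
        key = tuple(incident.get(k) for k in shared_keys)
        if key not in group_ids:
            group_ids[key] = next(next_id)
        incident["group"] = group_ids[key]
    return incidents

def classify_group(incidents, group_id, classification):
    """Mass-classification: propagate one classification to every incident
    carrying the given group ID."""
    for incident in incidents:
        if incident["group"] == group_id:
            incident["status"] = classification
```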

[054] System 100 also includes a push/pull application programming interface (API) 154 to enable cloud service 156 associated with computer system 102 to receive or obtain incidents from DLP system 152, in real-time or near real-time. In some embodiments, the push portion of the push/pull API 154 can be located at DLP system 152, and push the information regarding incidents to the cloud service 156. Similarly, the pull portion of push/pull API 154 can be located at the cloud service 156 and can retrieve information regarding incidents from DLP system 152. In some embodiments, receiving information in near real-time includes receiving information regarding the incidents at cloud service 156 within a specified threshold duration from when an incident is generated by DLP system 152. Such information retrieval can enable an organization, e.g., a user associated with server 122, to consume the incidents or review the reports, such as report 348, in real-time or near real-time.
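A hypothetical sketch of the pull side of the push/pull API 154: the endpoint path, query parameter, and response fields below are assumptions for illustration, not the application's actual interface (uses the third-party requests package):

```python
import time
import requests  # third-party HTTP client, assumed available

def pull_incidents(dlp_url: str, poll_seconds: int = 30):
    """Poll the DLP system for incidents newer than the last one seen,
    approximating near real-time retrieval within a polling interval."""
    last_seen = 0
    while True:
        # Endpoint path, "after" parameter, and "id" field are hypothetical.
        response = requests.get(f"{dlp_url}/incidents", params={"after": last_seen})
        for incident in response.json():
            last_seen = max(last_seen, incident["id"])
            yield incident
        time.sleep(poll_seconds)
```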

[055] Example Flowcharts

[056] FIGS. 4-7 are example flowcharts of processing operations of methods that enable the various features and functionality of the system as described in detail above. The processing operations of each method presented below are intended to be illustrative and non-limiting. In some embodiments, for example, the methods may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the processing operations of the methods are illustrated (and described below) is not intended to be limiting.

[057] In some embodiments, the methods may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The processing devices may include one or more devices executing some or all of the operations of the methods in response to instructions stored electronically on an electronic storage medium. The processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of the methods.

[058] FIG. 4 shows a flowchart of a method 400 of determining a classification of a network event, consistent with various embodiments. In an operation 402, network event data related to a network event may be obtained. As an example, the network event can be an incident and network event data related to the network event can be one or more attributes of the incident, e.g., as described at least with reference to FIGS. 1A and 3A. Operation 402 may be performed by a subsystem that is the same as or similar to network event subsystem 112.

[059] In an operation 404, a network event profile is determined using the network event data. As an example, the network event profile can be a profile of the incident and includes at least a subset of the attributes of the incident, as described at least with reference to FIGS. 1A and 5. Operation 404 may be performed by a subsystem that is the same as or similar to network event subsystem 112.

[060] In an operation 406, a classification of the network event is determined based on the network event profile. As an example, the profile of the incident is processed to determine a classification for the incident. In some embodiments, the classification of the incident is determined based on the classification of previously classified incidents having profiles matching that of the incident, as described at least with reference to FIGS. 1A and 6. Operation 406 may be performed by a subsystem that is the same as or similar to triaging subsystem 114.

[061] In some embodiments, at least a portion of the method 400 may be performed when a new network event is detected or when a specified number of new network events have been detected. The method 400 may also be performed in response to a user request for determining the classification. In some embodiments, a user, e.g., an administrator associated with server 122, can define when the method 400 is to be executed.

[062] A new network event can be an incident that is not yet classified, or an incident with a status attribute having a specified value, e.g., “new,” indicating that the incident is not yet classified.
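For illustration only, the three operations of method 400 can be expressed as the following Python sketch. The helper bodies are simplified stand-ins (the function and attribute names are assumptions), and the profile-generation and classification steps are refined by methods 500 and 600 below.

# Illustrative driver for method 400; helper bodies are simplified stand-ins.
def obtain_network_event_data(raw_event: dict) -> dict:
    """Operation 402: collect incident attributes; the data item itself is excluded."""
    return {k: v for k, v in raw_event.items() if k != "data_item"}

def determine_network_event_profile(event_data: dict) -> dict:
    """Operation 404: reduce the incident to a subset of its attributes (see FIG. 5)."""
    return {"type": event_data.get("type"),
            "policy_violated": event_data.get("policy_violated")}

def determine_classification(profile: dict, prior_incidents: list[dict]) -> str:
    """Operation 406: classify from previously classified matching incidents (see FIG. 6)."""
    matches = [p["classification"] for p in prior_incidents if p["profile"] == profile]
    return matches[0] if matches else "not enough data"

def classify_network_event(raw_event: dict, prior_incidents: list[dict]) -> str:
    event_data = obtain_network_event_data(raw_event)
    profile = determine_network_event_profile(event_data)
    return determine_classification(profile, prior_incidents)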

[063] FIG. 5 shows a flowchart of a method 500 of generating a network event profile, consistent with various embodiments. In some embodiments, the method 500 can be performed as part of operation 404 of method 400. In an operation 502, a network event type is determined from the network event data. As an example, the network event type can be a type of the incident, e.g., “Application,” “Discover,” “Endpoint,” or “Network.” Each of the types of the incident is associated with a distinct set of attributes. The type of the incident can be representative of a location where the incident occurred or was identified. The type of the incident can be obtained from the one or more attributes of the incident. Operation 502 may be performed by a subsystem that is the same as or similar to network event subsystem 112.

[064] In an operation 504, the attributes of the network event that may be used in determining the classification of the network event are identified based on the type of the network event. As an example, for an incident of type “Application” or “Discover,” a policy violated, a policy version, a policy match count, or a file owner can be used for determining the classification. In another example, for an incident of type “Endpoint,” a policy violated, a policy version, a policy match count, a file owner, or an application used can be used for determining the classification. In another example, for an incident of type “Network,” a policy violated, a policy version, a policy match count, a message originator (originator of a message that violates the policy), or a sender/recipient of the message can be used for determining the classification. Operation 504 may be performed by a subsystem that is the same as or similar to network event subsystem 112.
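The per-type attribute sets of operation 504 lend themselves to a lookup table, as in the following sketch. The attribute key names are assumptions, since the embodiments name the attributes in prose only.

# Lookup table for operation 504 (attribute key names are illustrative).
PROFILE_ATTRIBUTES_BY_TYPE = {
    "Application": ["policy_violated", "policy_version", "policy_match_count", "file_owner"],
    "Discover": ["policy_violated", "policy_version", "policy_match_count", "file_owner"],
    "Endpoint": ["policy_violated", "policy_version", "policy_match_count",
                 "file_owner", "application_used"],
    "Network": ["policy_violated", "policy_version", "policy_match_count",
                "message_originator", "sender_recipient"],
}

def profile_attributes(event_type: str) -> list[str]:
    """Operation 504: select the classification attributes for a given incident type."""
    return PROFILE_ATTRIBUTES_BY_TYPE[event_type]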

[065] In an operation 506, a hash value of the incident data is determined. As an example, the incident data is the non-afflicting content of an incident. In some embodiments, non-afflicting content is content of the incident other than the afflicting content that likely caused the data policy violation. In some embodiments, the hash value is generated using a hashing function such as SHA. Operation 506 may be performed by a subsystem that is the same as or similar to network event subsystem 112.

[066] In an operation 508, a network event profile is generated based on the attributes and the incident data. As an example, the profile of the incident is generated using the attributes and the hash value determined in operations 504 and 506, respectively. Operation 508 may be performed by a subsystem that is the same as or similar to network event subsystem 112.
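Operations 506 and 508 can be sketched as follows, assuming SHA-256 as the hashing function (the embodiments name SHA generically) and illustrative field names.

# Sketch of operations 506-508: hash the non-afflicting incident content and
# assemble the profile (SHA-256 and the field names are assumptions).
import hashlib

def hash_incident_data(non_afflicting_content: str) -> str:
    """Operation 506: hash value of the incident data."""
    return hashlib.sha256(non_afflicting_content.encode("utf-8")).hexdigest()

def generate_profile(attributes: dict, non_afflicting_content: str) -> dict:
    """Operation 508: profile = selected attributes + hash of the incident data."""
    return {**attributes, "incident_data_hash": hash_incident_data(non_afflicting_content)}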

[067] FIG. 6 shows a flowchart of a method 600 of determining the classification of the network event, consistent with various embodiments. In some embodiments, the method 600 can be performed as part of operation 406 of method 400. In an operation 602, network event data related to network events that match the network event profile of a network event to be classified may be obtained. As an example, a set of incidents having profiles that match the profile of the incident to be classified is obtained. In some embodiments, the set of incidents are prior incidents that have been classified, e.g., by computer system 102 or another entity, e.g., a human user. The set of incidents may be stored in network event database 138.

[068] In an operation 604, whether a minimum number of matching network events are found is determined. As an example, whether a quantity of incidents in the set of incidents exceeds a first threshold may be determined. If the quantity of incidents does not exceed the first threshold, in operation 608, the classification for the incident is determined as “not enough data,” indicating a lack of data for classifying the incident.

[069] If the quantity of incidents exceeds the first threshold, in operation 606, whether a confidence interval of any of the classifications of the set of incidents exceeds a second threshold is determined. If the confidence interval does not exceed the second threshold, in operation 608, the classification for the incident is determined as “not enough data,” indicating a lack of data for classifying the incident.

[070] If the confidence interval of any of the classifications of the set of incidents exceeds the second threshold, in operation 610, the corresponding classification of the set of incidents is determined as the classification of the incident.

[071] In operation 612, whether the classification is determined in audit mode or triage mode is determined. If the classification is determined in audit mode, in operation 616, a second status attribute, such as an “assuming status” attribute, which indicates an “assumed” classification of the incident, is updated with the classification. If the classification is determined in triage mode, in operation 614, a “status” attribute, which indicates the actual classification of the incident, is updated with the classification. Optionally, the “assuming status” attribute of the incident may also be updated in addition to the “status” attribute in the triage mode.

[072] In some embodiments, a user, e.g., an administrator associated with server 122, can define in which mode the classification is to be determined.

[073] Operations 602-616 may be performed by a subsystem that is the same as or similar to triaging subsystem 114.
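The decision logic of operations 602-616 can be condensed into a single function, as in the sketch below. The threshold values and the confidence computation (taken here as the share of the majority classification among matching incidents) are assumptions; the embodiments leave both unspecified.

# Sketch of method 600; threshold values and the confidence computation are
# assumptions.
from collections import Counter

MIN_MATCHING_INCIDENTS = 10  # first threshold (operation 604), assumed value
MIN_CONFIDENCE = 0.9         # second threshold (operation 606), assumed value

def triage(incident: dict, matching_incidents: list[dict], mode: str = "triage") -> dict:
    """Determine a classification and record it per the audit/triage mode."""
    if len(matching_incidents) < MIN_MATCHING_INCIDENTS:           # operation 604
        classification = "not enough data"                         # operation 608
    else:
        counts = Counter(m["classification"] for m in matching_incidents)
        best, best_count = counts.most_common(1)[0]
        if best_count / len(matching_incidents) < MIN_CONFIDENCE:  # operation 606
            classification = "not enough data"                     # operation 608
        else:
            classification = best                                  # operation 610
    if mode == "audit":                                            # operations 612, 616
        incident["assuming_status"] = classification
    else:                                                          # operation 614
        incident["status"] = classification
        incident["assuming_status"] = classification               # optional in triage mode
    return incident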

[074] FIG. 7 shows a flowchart of a method 700 of determining the classification of a network event via a prediction model, consistent with various embodiments. In operation 702, profiles of network events may be obtained. As an example, profiles of a set of incidents stored in prediction database 132 are obtained. Operation 702 may be performed by a subsystem that is the same as or similar to model subsystem 118.

[075] In operation 704, the obtained network event profiles may be provided as input to a prediction model to generate predictions. As an example, the predictions may be related to the classification of the set of incidents, such as a prediction that a specified incident cannot be classified (e.g., for lack of data) or a prediction specifying a probability of a particular classification (e.g., “X% likelihood of the specified incident being a business process violation”). Operation 704 may be performed by a subsystem that is the same as or similar to model subsystem 118.

[076] In an operation 706, classification result information may be provided as reference feedback to the prediction model. As an example, the classification result information may include information related to a performance of the classification process (e.g., whether the predicted classification is correct or incorrect), or other information related to the classification. In some embodiments, the reference feedback can cause the prediction model to assess its predictions against the classification result information. As an example, the prediction model may update its configurations (e.g., weights, biases, or other parameters) based on its assessment of the predictions. Operation 706 may be performed by a subsystem that is the same as or similar to feedback subsystem 120.

[077] In an operation 708, subsequent to the updating of the prediction model, the prediction model may be used to determine the classification of an incident. As an example, a profile of an incident may be obtained and provided to the prediction model to obtain one or more predictions from the prediction model. The predictions obtained from the prediction model may be used to determine the classification for the incident, determine whether the incident satisfies one or more criteria related to a particular classification, or generate other determinations. As an example, the predictions may include a prediction specifying that the incident belongs to a first classification, a prediction that the incident cannot be classified (e.g., for lack of data), a prediction specifying a probability of a particular classification (e.g., “X% likelihood of the incident being a business process violation”), or other predictions. Operation 708 may be performed by a subsystem that is the same as or similar to triaging subsystem 114.
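As one possible concrete instantiation of the prediction model of method 700, the sketch below uses scikit-learn's MLPClassifier as a stand-in neural network. The feature encoding via DictVectorizer and the model hyperparameters are assumptions, not requirements of the embodiments.

# Sketch of method 700 with scikit-learn's MLPClassifier standing in for the
# prediction model; encoding and hyperparameters are assumptions.
from sklearn.feature_extraction import DictVectorizer
from sklearn.neural_network import MLPClassifier

vectorizer = DictVectorizer()  # encodes profile dicts as feature vectors
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)

def train(profiles: list[dict], classifications: list[str]) -> None:
    """Operations 702-706: fit the model so its weights are updated against
    the reference classifications (the classification result information)."""
    X = vectorizer.fit_transform(profiles)
    model.fit(X, classifications)

def predict(profile: dict) -> dict:
    """Operation 708: per-classification probabilities for a new incident
    profile, e.g., the likelihood of a business process violation."""
    X = vectorizer.transform([profile])
    return dict(zip(model.classes_, model.predict_proba(X)[0]))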

[078] FIG. 8 shows a flowchart of a method 800 of grouping redundant network events, consistent with various embodiments. In operation 802, a type of a specified network event may be obtained. As an example, a type of a specified incident is obtained from the specified incident. Operation 802 may be performed by a subsystem that is the same as or similar to network event subsystem 112.

[079] In an operation 804, a set of attributes of the specified network event that may be used in determining redundant network events is determined based on the type of the specified network event. As an example, the set of attributes that may be used in determining the redundant incidents are similar to the attributes associated with a profile of an incident. Operation 804 may be performed by a subsystem that is the same as or similar to network event subsystem 112.

[080] In an operation 806, network events that are of the same type as the specified network event and that have attributes matching the set of attributes of the specified network event are determined. As an example, incidents that are of the same type as the specified incident and that have attributes matching the set of attributes of the specified incident are determined. Operation 806 may be performed by a subsystem that is the same as or similar to network event subsystem 112.

[081] In an operation 808, the specified network event and the network events determined in operation 806 are grouped into a single group. As an example, the specified incident and the incidents determined in operation 806 are grouped into a single group, e.g., by updating a group attribute such as “redundant incidents” attribute 332 associated with each of the incidents with a specified group ID. Operation 808 may be performed by a subsystem that is the same as or similar to reporting subsystem 116.
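A minimal sketch of method 800, assuming dictionary-based incidents with illustrative key names, groups incidents that share a type and profile attributes under a common group ID:

# Sketch of method 800: assign a shared group ID to redundant incidents
# (incident structure and key names are illustrative).
import itertools

def group_redundant(incidents: list[dict], attributes: list[str]) -> None:
    group_ids = itertools.count(1)
    groups: dict[tuple, int] = {}
    for incident in incidents:
        # Operations 802-806: key on the incident type plus the profile attributes.
        key = (incident["type"],) + tuple(incident.get(a) for a in attributes)
        if key not in groups:
            groups[key] = next(group_ids)
        # Operation 808: update the "redundant incidents" attribute 332.
        incident["redundant_incidents"] = groups[key]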

[082] FIG. 9 shows a flowchart of a method 900 of grouping similar network events, consistent with various embodiments. In operation 902, a type of a specified network event may be obtained. As an example, a type of a specified incident is obtained from the specified incident. Operation 902 may be performed by a subsystem that is the same as or similar to network event subsystem 112.

[083] In an operation 904, a message ID attribute of the specified network event may be obtained. As an example, the message ID attribute provides an ID of a message/user activity that triggered a policy violation. In some embodiments, a message can violate multiple policies, which can result in generation of multiple incidents. Such incidents may have the same message ID. Operation 904 may be performed by a subsystem that is the same as or similar to network event subsystem 112.

[084] In an operation 906, network events that are of the same type as the specified network event and that have the same message ID as the specified network event are determined. As an example, incidents that are of the same type as the specified incident and that have the same message ID as the specified incident are determined. Operation 906 may be performed by a subsystem that is the same as or similar to network event subsystem 112.

[085] In an operation 908, the specified network event and the network events determined in operation 906 are grouped into a single group. As an example, the specified incident and the incidents determined in operation 906 are grouped into a single group, e.g., by updating a group attribute such as “chained incidents” attribute 330 associated with each of the incidents with a specified group ID. Operation 908 may be performed by a subsystem that is the same as or similar to reporting subsystem 116.
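Method 900 admits an analogous sketch, keying instead on the incident type and message ID (again assuming dictionary-based incidents with illustrative key names):

# Sketch of method 900: assign a shared group ID to incidents chained by a
# common message ID (incident structure and key names are illustrative).
import itertools

def group_chained(incidents: list[dict]) -> None:
    group_ids = itertools.count(1)
    groups: dict[tuple, int] = {}
    for incident in incidents:
        # Operations 902-906: key on the incident type and the message ID.
        key = (incident["type"], incident["message_id"])
        if key not in groups:
            groups[key] = next(group_ids)
        # Operation 908: update the "chained incidents" attribute 330.
        incident["chained_incidents"] = groups[key]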

[086] In some embodiments, the various computers and subsystems illustrated in FIGS. 1A and 1B may include one or more computing devices that are programmed to perform the functions described herein. The computing devices may include one or more electronic storages (e.g., prediction database(s) 132, which may include training data database(s) 134, model database(s) 136, network event database(s) 138, etc., or other electronic storages), one or more physical processors programmed with one or more computer program instructions, and/or other components. The computing devices may include communication lines or ports to enable the exchange of information within a network (e.g., network 150) or other computing platforms via wired or wireless techniques (e.g., Ethernet, fiber optics, coaxial cable, WiFi, Bluetooth, near field communication, or other technologies). The computing devices may include a plurality of hardware, software, and/or firmware components operating together. For example, the computing devices may be implemented by a cloud of computing platforms operating together as the computing devices.

[087] The electronic storages may include non-transitory storage media that electronically stores information. The storage media of the electronic storages may include one or both of (i) system storage that is provided integrally (e.g., substantially non-removable) with servers or client devices or (ii) removable storage that is removably connectable to the servers or client devices via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). The electronic storages may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. The electronic storages may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). The electronic storages may store software algorithms, information determined by the processors, information obtained from servers, information obtained from client devices, or other information that enables the functionality as described herein.

[088] The processors may be programmed to provide information processing capabilities in the computing devices. As such, the processors may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. In some embodiments, the processors may include a plurality of processing units. These processing units may be physically located within the same device, or the processors may represent processing functionality of a plurality of devices operating in coordination. The processors may be programmed to execute computer program instructions to perform functions described herein of subsystems 112-120 or other subsystems. The processors may be programmed to execute computer program instructions by software; hardware; firmware; some combination of software, hardware, or firmware; and/or other mechanisms for configuring processing capabilities on the processors.

[089] It should be appreciated that the description of the functionality provided by the different subsystems 112-120 described herein is for illustrative purposes, and is not intended to be limiting, as any of subsystems 112-120 may provide more or less functionality than is described. For example, one or more of subsystems 112-120 may be eliminated, and some or all of its functionality may be provided by other ones of subsystems 112-120. As another example, additional subsystems may be programmed to perform some or all of the functionality attributed herein to one of subsystems 112-120.

[090] Although the present invention has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred embodiments, it is to be understood that such detail is solely for that purpose and that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the scope of the appended claims. For example, it is to be understood that the present invention contemplates that, to the extent possible, one or more features of any embodiment can be combined with one or more features of any other embodiment.