

Title:
GRANULAR SUPPORT VECTOR MACHINE WITH RANDOM GRANULARITY
Document Type and Number:
WIPO Patent Application WO/2009/094552
Kind Code:
A3
Abstract:
Methods and systems for granular support vector machines. Granular support vector machines can randomly select samples of datapoints and project the samples into randomly selected subspaces to derive granules. A support vector machine can then be used to identify hyperplane classifiers respectively associated with the granules. The hyperplane classifiers can be applied to an unknown datapoint to provide a plurality of predictions, which can be aggregated to provide a final prediction associated with the datapoint.

Inventors:
TANG YUCHUN (US)
HE YUANCHEN (US)
Application Number:
PCT/US2009/031853
Publication Date:
November 05, 2009
Filing Date:
January 23, 2009
Assignee:
SECURE COMPUTING CORP (US)
TANG YUCHUN (US)
HE YUANCHEN (US)
International Classes:
G06F9/00; G06F15/00; G06F17/00; G06F21/00; G06N20/10
Foreign References:
US6662170B1 (2003-12-09)
US20060112026A1 (2006-05-25)
US20070239642A1 (2007-10-11)
Other References:
YUCHUN TANG: "GRANULAR SUPPORT VECTOR MACHINES BASED ON GRANULAR COMPUTING", SOFT COMPUTING AND STATISTICAL LEARNING, May 2006 (2006-05-01), pages 15 - 16, 107-110
Attorney, Agent or Firm:
VAN AACKEN, Troy A. (P.O. Box 1022, Minneapolis, Minnesota, US)
Claims:

CLAIMS

What is claimed is:

1. A method comprising: receiving a training dataset comprising a plurality of tuples and a plurality of attributes for each of the tuples; deriving a plurality of granules from the training dataset, each granule comprising a plurality of sample tuples and a plurality of sample attributes; processing the granules using a support vector machine process to identify a hyperplane classifier associated with each of the granules; predicting a classification of a new tuple using each of the hyperplane classifiers to produce a plurality of predictions; aggregating the predictions to derive a decision on a final classification of the new tuple; and filtering a communication associated with the new tuple based upon the final classification of the new tuple.

2. The method of claim 1, wherein deriving each granule comprises: randomly selecting granule tuples from among the plurality of tuples with replacement; and randomly selecting granule attributes from the plurality of attributes without replacement.

3. The method of claim 2, wherein the selection of granule tuples and granule attributes for each granule is independent of the selection of granule tuples and granule attributes for other granules.

4. The method of claim 1, wherein the training dataset comprises a relational database of tuples with associated attributes, each of the tuples having a known classification.

5. The method of claim 1, further comprising validating a hyperplane classifier associated with a granule by attempting to classify a plurality of tuples from the training dataset which were not included in the granule.

6. The method of claim 5, further comprising generating a hyperplane classifier effectiveness level based upon the validation of the granule against tuples from the training dataset which were not included in the granule.

7. The method of claim 5, wherein aggregating the predicted classifications comprises weighting the predictions based upon the hyperplane classifier effectiveness levels associated with the granules, respectively, and aggregating the weighted predictions.

8. The method of claim 1, further comprising: weighting the predictions based upon a distance of the new tuple from the hyperplane classifiers, respectively; and aggregating the weighted predictions.

9. The method of claim 1, wherein each of the predictions comprises a vote, and aggregating the predictions comprises adding the votes together and determining which classification is most common.

10. The method of claim 1, wherein the tuples are gene sequences and the attributes are features of the gene sequences, whereby the method is operable to determine whether a gene sequence is likely to share known characteristics of other gene sequences.

11. The method of claim 1, wherein the tuples are documents and the attributes are features of the documents, whereby the method is operable to determine whether a document is likely to share known characteristics of other documents.

12. The method of claim 1, wherein the known characteristics comprise one or more of spam characteristics, virus characteristics, spyware characteristics, or phishing characteristics, and the method is operable to determine whether the new tuple should be classified as including one or more of the known characteristics.

13. The method of claim 1, wherein processing the granules comprises processing the granules on a plurality of processors in parallel, the processors being operable to identify support vector machines associated with the respective granules.

14. The method of claim 13, further comprising selecting the plurality of processors based upon the processing power available on the respective processors.

15. A system comprising: a granule selection module operable to select a plurality of granules from a training dataset, each of the granules comprising a plurality of tuples and a plurality of attributes; a plurality of granule processing modules operable to process granules using a support vector machine process to identify a hyperplane classifier associated with each of the granules; one or more prediction modules operable to predict a classification associated with an unknown tuple based upon the hyperplane classifiers to produce a plurality of granule predictions; an aggregation module operable to aggregate the granule predictions to derive a decision on a final classification associated with the unknown tuple; and a message filter operable to filter a communication associated with the unknown tuple based upon the final classification of the unknown tuple.

16. The system of claim 15, wherein the unknown tuple comprises the features of an unclassified document, and the system further comprises a parsing module operable to parse the unclassified document to derive a plurality of unclassified attributes associated with the unknown tuple.

17. The system of claim 16, wherein the one or more prediction modules are operable to extract a portion of the unclassified attributes based upon the granule and the hyperplane classifier, and are operable to compare the unclassified attributes to the hyperplane classifier to derive the prediction associated with the granule.

18. The system of claim 17, wherein the one or more prediction modules are operable to generate a prediction for each of the hyperplane classifiers to produce the plurality of granule predictions.

19. The system of claim 18, wherein the aggregation module is operable to count each of the predictions as a vote, and to derive the final prediction based upon which of the classifications accumulates the most votes.

20. The system of claim 15, further comprising: a validation module operable to test the hyperplane classifiers associated with respective granules on tuples that are not part of the respective granules, thereby producing a plurality of effectiveness metrics respectively associated with the hyperplane classifiers; and wherein the aggregation module is operable to weight the predictions based upon the effectiveness metrics respectively associated with the hyperplane classifiers.

21. The system of claim 15, wherein each prediction includes a distance metric from the hyperplane classifier, and the aggregation module is operable to weight each prediction based upon the associated distance metric.

22. The system of claim 15, wherein the plurality of granule processing modules are operable to be processed independently.

23. The system of claim 15, wherein the plurality of granule processing modules are operable to be processed in parallel.

24. The system of claim 15, wherein the plurality of granule processing modules are operable to be executed by separate processors.

25. The system of claim 15, wherein the system is operable to classify a tuple as one or more of a spam risk, a phishing risk, a virus risk, or a spyware risk.

Description:

GRANULAR SUPPORT VECTOR MACHINE WITH RANDOM GRANULARITY

BACKGROUND AND FIELD

This disclosure relates generally to data mining using support vector machines. Support vector machines are useful in providing input to identify trends in existing data and to classify new sets of data for analysis. Generally, support vector machines can be visualized by plotting data into an n-dimensional space, n being the number of attributes associated with the item to be classified. However, given large numbers of attributes and a large volume of training data, support vector machines can be processor intensive. Recently, analysts have developed an algorithm known as "Random Forests."

"Random Forests" uses decision trees to classify data. Decision trees modeled on large amounts of data can be difficult to parse and hence classification accuracy is limited. Thus, "Random Forests" utilizes a bootstrap aggregating (bagging) algorithm to randomly generate multiple bootstrapping datasets from a training dataset. Then a decision tree is modeled on each bootstrapping dataset. For each decision tree modeling, at each node a small fraction of attributes are randomly selected to determine the split. Because all attributes need to be available for random selection, the whole bootstrapping dataset is needed in the memory. Moreover, "Random Forests" has difficulty working with sparse data (e.g., data which contains many zeroes). For example, a dataset, formatted as a matrix with rows as samples and columns as attributes, has to be entirely loaded into the memory even when a cell is zero.

Thus, "Random Forests" is space-consuming, and when modeling the entire data matrix, "Random Forests" is also time-consuming, given a large and sparse dataset. The dataset cannot be parallelized on a distributed system such as a computer cluster, because it is time- consuming to transfer a whole bootstrapping dataset between different computer nodes.

SUMMARY

Systems, methods, apparatuses and computer program products for granular support vector machines are provided. In one aspect, methods are disclosed, which include: receiving a training dataset comprising a plurality of tuples and a plurality of attributes for each of the tuples; deriving a plurality of granules from the training dataset, each granule comprising a plurality of sample tuples and a plurality of sample attributes; processing the granules using a support vector machine process to identify a hyperplane classifier associated with each of the granules; predicting a classification of a new tuple using each of the hyperplane classifiers to produce a plurality of predictions; and aggregating the predictions to derive a decision on a final classification of the new tuple.

Systems can include a granule selection module, multiple granule processing modules, one or more prediction modules and an aggregation module. The granule selection module can select a plurality of granules from a training dataset. Each of the granules can include multiple tuples and attributes. The granule processing modules can process granules using support vector machine processes to identify a hyperplane classifier associated with each of the granules. The one or more prediction modules can predict a classification associated with an unknown tuple based upon the hyperplane classifiers to produce multiple granule predictions. The aggregation module can aggregate the granule predictions to derive a decision on a final classification associated with the unknown tuple.

The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.

DESCRIPTION OF DRAWINGS

FIG. 1 is a block diagram of a network environment including an example classification system.

FIG. 2 is a block diagram of an example classification system.

FIG. 3 is a block diagram of a messaging filter using a classification system and illustrating example policies.

FIG. 4 is a block diagram of an example distributed classification system.

FIG. 5 is a block diagram of another example distributed classification system.

FIG. 6 is a flowchart illustrating an example method used to derive granules and classification planes.

FIG. 7 is a flowchart illustrating an example method used to derive a classification associated with a new set of attributes.


FIG. 8 is a flowchart illustrating an example method used to derive granules and distribute granules to processing modules.

DETAILED DESCRIPTION

Granular support vector machines with random granularity can help to provide efficient and accurate classification of many types of data. For example, granular support vector machines can be used in the context of spam classification. Moreover, in some implementations, the granules, typically much smaller than the bootstrapping datasets before random subspace projection, can be distributed across many processors, such that the granules can be processed in parallel. In other implementations, the granules can be distributed based upon spare processing capability at distributed processing modules. The nature of the granules can facilitate distributed processing. The reduction in size of the training dataset can facilitate faster processing of each of the granules. In comparison to "Random Forests", this granular support vector machine with random granularity works well on large and sparsely populated datasets (e.g., data which contains many zeroes or null sets), because the zeros or null sets need not be held in memory. In some implementations, the classification system can be used to classify spam. In other implementations, the classification system can be used to classify biological data. Other classifications can be derived from any type of dataset using granular support vector machines with random granularity.

FIG. 1 is a block diagram of a network environment including an example classification system 100. The classification system 100 can receive classification queries from an enterprise messaging filter 110. The enterprise messaging filter 110 can protect enterprise messaging entities 120 from external messaging entities 130 attempting to communicate with the enterprise messaging entities 120 through a network 140.

In some implementations, the classification system 100 can receive a training dataset 150. The training dataset 150 can be provided by an administrator, for example. The classification system 100 can use granular support vector machine classification 160 to process the training dataset and derive hyperplane classifiers respectively associated with randomly selected granules, thereby producing a number of granular support vector machines (e.g., GVSM 1 170, GVSM 2 180 ... GVSM n 190).

In some implementations, the classification system 100 can derive a number of granules from the training dataset. The granules can be derived, for example, using a bootstrapping process whereby a tuple (e.g., a record in the dataset) is randomly selected for inclusion in the in-bag data. Additional tuples can be selected from among the entire training dataset (e.g., sampled with replacement). Thus, the selection of each tuple is independent from the selection of other tuples and the same tuple can be selected more than once. For example, if a training dataset included 100 tuples and 100 bootstrapping samples are selected from among the 100 tuples, on average 63.2 of the tuples would be selected, and 36.8 of the tuples would not be selected. The selected data can be identified as in-bag data, while the non-selected data can be identified as out-of-bag data. In some examples, the sample size can be set at 10% of the total number of tuples in the training dataset. Thus, if there were 100 tuples, the classification system 100 can select 10 samples with replacement.
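To make the bootstrapping step above concrete, the following is a minimal Python sketch of sampling tuples with replacement and tracking the out-of-bag remainder; the dataset, sample fraction, and function names are illustrative assumptions, not the disclosed implementation.

```python
import random

def bootstrap_sample(tuples, sample_fraction=0.1, rng=None):
    """Randomly select tuple indices with replacement, returning the
    in-bag indices (duplicates possible) and the out-of-bag indices."""
    rng = rng or random.Random()
    n = len(tuples)
    num_selections = max(1, int(n * sample_fraction))
    in_bag = [rng.randrange(n) for _ in range(num_selections)]
    out_of_bag = sorted(set(range(n)) - set(in_bag))
    return in_bag, out_of_bag

# Example: 100 selections from 100 tuples leave roughly 36.8 tuples
# out-of-bag on average, matching the figures quoted above.
tuples = list(range(100))
in_bag, out_of_bag = bootstrap_sample(tuples, sample_fraction=1.0)
print(len(set(in_bag)), len(out_of_bag))
```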

The classification system 100 can then project the data into a random subspace. The random subspace projection can be a random selection of tuple attributes (e.g., features). In some implementations, the random selection of tuple attributes can be performed without replacement (e.g., no duplicates can be selected). In other implementations, the random selection of tuple attributes can be performed with replacement (e.g., duplicate samples are possible, but discarded). The random selection of tuples with a projection of the tuple attributes into a random subspace generates a granule. The granule can be visualized as a matrix having a number of rows of records (e.g., equal to the number of unique tuples selected from the training dataset) with a number of columns defining attributes associated with the granule.
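A similarly hedged sketch of the random subspace projection follows; the granule is represented as a plain list-of-lists matrix, and the number of selected attributes is an assumed parameter.

```python
import random

def random_subspace_granule(dataset, in_bag_indices, num_attributes, rng=None):
    """Project the in-bag tuples into a random subspace by selecting
    attributes without replacement; the result is the granule matrix."""
    rng = rng or random.Random()
    total_attributes = len(dataset[0])
    attribute_ids = rng.sample(range(total_attributes), num_attributes)
    rows = sorted(set(in_bag_indices))  # unique tuples become the rows
    granule = [[dataset[i][a] for a in attribute_ids] for i in rows]
    return granule, attribute_ids

# Example: 5 tuples with 8 attributes each, projected onto 3 attributes.
dataset = [[row * 8 + col for col in range(8)] for row in range(5)]
granule, attrs = random_subspace_granule(dataset, [0, 2, 2, 4], num_attributes=3)
print(attrs, granule)
```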

The classification system 100 can then execute a support vector machine process operable to receive the data and to plot the data into an n-dimensional space (e.g., n being the number of attributes associated with the granule). The support vector machine process can identify a hyperplane classifier (e.g., a linear classifier) to find the plane which best separates the data into two or more classifications. In some implementations, adjustments to the support vector machine process can be made to avoid overfitting the hyperplane classifier to the datapoints. In various examples, there can be more than one potential hyperplane classifier which provides separation between the data. In such instances, the hyperplane classifier which achieves maximum separation (e.g., the maximum margin classifier) can be identified and selected by the support vector machine process. In some implementations, the support vector machine can warp the random subspace to provide a better fit of the hyperplane classifier to the datapoints included in the granule.
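The sketch below illustrates training one hyperplane classifier per granule; it assumes scikit-learn's LinearSVC as a stand-in for the support vector machine process and uses a tiny synthetic granule, neither of which is specified by the disclosure.

```python
import numpy as np
from sklearn.svm import LinearSVC  # assumed stand-in for the SVM process

def train_granule_classifier(granule_matrix, labels, C=1.0):
    """Fit a linear (maximum-margin) hyperplane classifier to one granule.
    The regularization parameter C is one way to guard against overfitting
    the hyperplane to the granule's datapoints."""
    clf = LinearSVC(C=C)
    clf.fit(granule_matrix, labels)
    return clf

# Example on a tiny synthetic granule: two separable clusters.
X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
classifier = train_granule_classifier(X, y)
print(classifier.predict([[0.1, 0.1], [1.0, 0.9]]))
```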

The hyperplane classifiers (e.g., GVSM 1 170, GVSM 2 180 ... GVSM n 190) can then be used to analyze new data. In some implementations, a new tuple (e.g., set of attributes) with an unknown classification can be received. In other implementations, the classification system can receive an unparsed document and can parse the document to extract the attributes used for classification by the various granules.

In some implementations, the hyperplane classifiers can be stored locally to the classification system and can be used to derive a number of predictions for the classification of the new tuple. In other implementations, the hyperplane classifiers are stored by the respective processing modules that processed the granule and the new tuple can be distributed to each of the respective processing modules. The processing modules can then each respond with a predicted granule classification, resulting in a number of granule predictions equal to the number of derived granules.

The predicted classifications can be aggregated to derive a final classification prediction associated with the new tuple. In some implementations, the predicted classifications can be aggregated by majority voting. For example, each prediction can be counted as a "vote." The "votes" can then be tallied and compared to determine which classification received the most "votes." This classification can be adopted by the classification system as the final classification prediction.
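A minimal sketch of the majority-voting aggregation described above; the class labels and vote counts are purely illustrative.

```python
from collections import Counter

def aggregate_by_vote(granule_predictions):
    """Tally each granule prediction as one vote and return the
    classification receiving the most votes."""
    votes = Counter(granule_predictions)
    final_classification, _ = votes.most_common(1)[0]
    return final_classification

# Example: ten granule classifiers vote on one unknown tuple.
print(aggregate_by_vote(["spam", "spam", "ham", "spam", "ham",
                         "spam", "spam", "ham", "spam", "spam"]))  # -> "spam"
```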

In other implementations, the granule predictions can include a distance metric describing the distance of a datapoint associated with the new tuple from the hyperplane classifier. The distance can be used to weight the aggregation of the predicted classifications. For example, if the classification system were determining whether a set of data indicates a man versus a woman, and one hyperplane classifier predicts that the datapoint is associated with a woman while another hyperplane classifier predicts that the same datapoint is associated with a man, the distance of each from its hyperplane classifier can be used to determine which classification to use as the final classification prediction. In other examples, it can be imagined that 5 hyperplane classifiers predict that the datapoint is associated with a man while 10 hyperplane classifiers predict that the datapoint is associated with a woman. In those implementations where distance is used to provide a weighting to the predictions, if the 5 classifiers predicting that the datapoint is male have a greater aggregate distance from the respective hyperplane classifiers than the 10 classifiers predicting that the datapoint is female, then the final classification prediction can be male.

In still further implementations, each of the hyperplane classifiers can have an effectiveness metric associated with the classifier. In such implementations, the effectiveness metric can be derived by validating the hyperplane classifier against the out-of-bag data not chosen for inclusion in the granule associated with the hyperplane classifier. Thus, for example, using a 10% bootstrapping process on a training sample of 100 records, there are expected to be about 90 out-of-bag tuples (e.g., datapoints). Those datapoints can be used in an attempt to determine the effectiveness of the hyperplane classifier derived with respect to the granule. If the hyperplane classifier, for example, is measured to be 90% effective on the out-of-bag data, the prediction can be weighted at 90%. If another hyperplane classifier is measured, for example, to be 70% effective on the out-of-bag data, the prediction can be weighted at 70%. In some implementations, if a hyperplane classifier is measured to be less than a threshold level of effectiveness on the out-of-bag data, the hyperplane classifier can be discarded. For example, if a hyperplane classifier is less than 50% effective on the out-of-bag data, it is more likely than not that its classification is incorrect (at least as far as the out-of-bag data is concerned). In such instances, the hyperplane classifier could be based on datapoints which are outliers that do not accurately represent the sample. In some implementations, if a threshold number of hyperplane classifiers are discarded because they do not predict with a threshold effectiveness, the classification system can request a new training dataset, or possibly different and/or additional attributes associated with the current training dataset. In other implementations, the classification system can continue to run the support vector machine processing until a threshold number of hyperplane classifiers are identified.
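The following sketch illustrates the out-of-bag effectiveness measurement and the discard threshold just described; the toy classifier, the data, and the 50% default threshold are assumptions used only for illustration.

```python
def effectiveness(classifier_predict, out_of_bag_tuples, out_of_bag_labels):
    """Fraction of out-of-bag tuples that a granule's hyperplane
    classifier labels correctly."""
    predictions = [classifier_predict(t) for t in out_of_bag_tuples]
    correct = sum(p == y for p, y in zip(predictions, out_of_bag_labels))
    return correct / len(out_of_bag_labels)

def keep_classifier(score, threshold=0.5):
    """Discard hyperplane classifiers that do no better than chance on
    the out-of-bag data (e.g., below a 50% threshold)."""
    return score >= threshold

# Example with a toy rule standing in for a trained hyperplane classifier.
predict = lambda t: "spam" if t[0] > 0.5 else "ham"
oob = [(0.9,), (0.1,), (0.7,), (0.4,)]
labels = ["spam", "ham", "ham", "ham"]
score = effectiveness(predict, oob, labels)
print(score, keep_classifier(score))  # 0.75 True
```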

FIG. 2 is a block diagram of an example classification system 100. In some implementations, the classification system 100 can include a granule selection module 210, a processing module 220, a prediction module 230, and an aggregation module 240. The granule selection module 210 can receive a training dataset 250 and can randomly select (with replacement) tuples from the training dataset 250. In some implementations, the random selection of the tuples can be based upon a bootstrapping process, whereby a selection of a tuple is made, the tuple is replaced, and then another tuple is selected. In various examples, this process can continue until a threshold number of selections are made. As a specific example, 10% bootstrapping on a 100-tuple training dataset can mean that 10 selections are made (including duplicates). Thus, there are expected to be fewer than 10 unique sample tuples in the in-bag data, on average.

The in-bag tuples are then projected onto a random subspace. For example, the in-bag tuples can be visualized as datapoints plotted onto an n-dimensional space, where n equals the number of attributes associated with each tuple. If a dimension is removed, the datapoints can be said to be projected into the subspace comprising the remaining attributes.

In various implementations, the random subspace can be selected by randomly selecting attributes to remove from the subspace or randomly selecting the attributes that are included in the subspace. In some implementations, the random subspace is chosen by randomly selecting the attributes for inclusion in the granule without replacement (e.g., no duplicates can be selected, because once an attribute is selected, it is removed from the sample). Thus, an original matrix associated with the training dataset can be reduced into a granule. Granules can continue to be selected until a threshold number of granules have been selected. The random selection of the granules and a smaller sample size can facilitate diversity among the granules. For example, one granule is unlikely to be similar to any of the other granules.

The processing module 220 can be operable to process the granules using a support vector machine process. The processing module 220 can use the support vector machine process to plot the tuples associated with a respective granule into an n-dimensional space, where n equals the number of attributes associated with the granule. The processing module 220 can identify a hyperplane classifier (e.g., a linear classifier) which best separates the data based upon the selected category for classification. In some instances, multiple hyperplane classifiers might provide differentiation between the data. In some implementations, the processing module 220 can select the hyperplane classifier that provides maximum separation between the datapoints (e.g., a maximum margin classifier).

In various implementations, the granular nature of the process can facilitate distributed processing of the granules. For example, if there are 10 granules to process on five processors, each of the processors could be assigned to handle two granules. Some implementations can include a distribution module operable to distribute the granules among potentially multiple processing modules 220 (e.g., processors running a support vector machine process on the granules). In such implementations, the distribution module can, for example, determine the available (e.g., spare) processing capacity and/or specialty processing available on each of a number of processors and assign the granules to the processors accordingly. Other factors for determining distribution of the granules can be used.
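A sketch of spreading granules across processors, assuming Python's multiprocessing pool as the parallelization mechanism; the stand-in processing function, granule format, and processor count are illustrative only.

```python
from multiprocessing import Pool

def process_granule(granule):
    """Stand-in for running the support vector machine process on one
    granule; a real implementation would return a hyperplane classifier."""
    rows, attrs = granule
    return ("classifier", len(rows), len(attrs))

def process_granules_in_parallel(granules, num_processors=5):
    """Spread the granules across a pool of worker processes; e.g., ten
    granules on five processors means two granules per processor."""
    with Pool(processes=num_processors) as pool:
        return pool.map(process_granule, granules)

if __name__ == "__main__":
    granules = [(list(range(10)), list(range(3))) for _ in range(10)]
    print(process_granules_in_parallel(granules))
```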

The prediction module 230 can receive features 260 (e.g., from an unclassified tuple) for classification. In some implementations, the features 260 can be received from a messaging filter 280. In such implementations, the features 260 can be derived from a received message 270 by a messaging filter 280. The messaging filter 280, for example, can extract the features 260. In some implementations, the messaging filter 280 can be a part of the classification system 100. In other implementations, the messaging filter 280 can query the classification system 100 by sending the attributes associated with the tuple to be classified to the classification system 100.

The prediction module 230 can compare datapoints associated with the features against each of the hyperplane classifiers derived from the granules to derive granule predictions associated with the respective hyperplane classifiers. For example, the prediction module 230 could plot the unclassified new tuple onto a random subspace associated with a first granule and associated hyperplane classifier and determine whether the unclassified new tuple shows characteristics associated with a first classification (e.g., men) or characteristics associated with a second classification (e.g., women). The prediction module 230 could continue this process until each of the hyperplane classifiers has been compared to a datapoint associated with the unclassified new tuple.

In some implementations, the prediction module 230 can include distributed processing elements (e.g., processors). In such implementations, the prediction module 230 can distribute classification jobs to processors, for example with available processing capability. In other implementations, the prediction module 230 can distribute classification jobs based upon which processors previously derived the hyperplane classifier associated with a granule. In such implementations, for example, a processor used to derive a first hyperplane classifier for a first granule can also be used to plot an unclassified new tuple into the random subspace associated with the first granule and can compare the datapoint associated with the new tuple to the first hyperplane classifier associated with the first granule.

The granule predictions can be communicated to the aggregation module 240. In some implementations, the aggregation module 240 can use a simple voting process to aggregate the granule predictions. For example, each prediction can be tallied as a "vote" for the classification predicted by the granule prediction. The classification that compiles the most votes can be identified as the final classification decision. In another implementation, each granule prediction can include a distance metric identifying the distance of datapoints associated with the unclassified new tuple from the respective hyperplane classifiers. The distance metric can be used to weight the respective granule predictions. For example, if there are three predictions, one for classification A located a distance of 10 units from its hyperplane classifier, and two for classification B located distances of 2 and 5 units from their respective hyperplane classifiers, then classification A is weighted at 10 units and classification B is weighted at 7 units. Thus, in this example, classification A can be selected as the final classification prediction.
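The worked example above can be expressed as a short distance-weighted aggregation sketch; the (classification, distance) tuple format is an assumption made for illustration.

```python
from collections import defaultdict

def aggregate_by_distance(granule_predictions):
    """Weight each granule prediction by its distance from the hyperplane
    classifier and pick the classification with the largest total weight."""
    weights = defaultdict(float)
    for classification, distance in granule_predictions:
        weights[classification] += distance
    return max(weights, key=weights.get)

# The example above: one prediction for A at distance 10, two for B at
# distances 2 and 5; A outweighs B (10 > 7) and is selected.
print(aggregate_by_distance([("A", 10.0), ("B", 2.0), ("B", 5.0)]))  # -> "A"
```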

In other implementations, each of the predictions can be weighted by a Bayesian confidence level associated with the respective hyperplane classifiers. In some such implementations, the Bayesian confidence level can be based upon a validation performed on the hyperplane classifier using the out-of-bag data associated with each respective hyperplane classifier. For example, if a first hyperplane classifier is measured to be 85% effective at classifying the out-of-bag data, the predictions associated with the hyperplane classifier can be weighted by the effectiveness metric. The weighted predictions can be summed and compared to each other to determine the final classification prediction.
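A sketch of weighting each prediction by its classifier's effectiveness metric and summing per classification, as described above; the numeric weights and labels are illustrative assumptions.

```python
from collections import defaultdict

def aggregate_by_effectiveness(granule_predictions):
    """Weight each granule prediction by its classifier's out-of-bag
    effectiveness (e.g., 0.85 for 85% accuracy), sum the weights per
    classification, and return the classification with the largest sum."""
    totals = defaultdict(float)
    for classification, effectiveness_metric in granule_predictions:
        totals[classification] += effectiveness_metric
    return max(totals, key=totals.get)

# Example: classifiers at 85% and 70% predict "spam", one at 90% predicts
# "ham"; "spam" wins with a summed weight of 1.55 against 0.90.
print(aggregate_by_effectiveness([("spam", 0.85), ("spam", 0.70), ("ham", 0.90)]))
```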

FIG. 3 is a block diagram of a messaging filter 300 using a classification system 310 and illustrating example policies 320-360. In various implementations, the policies can include an information security policy 320, a virus policy 330, a spam policy 340, a phishing policy 350, a spyware policy 360, or combinations thereof. The messaging filter 300 can filter communications received from messaging entities 380 destined for other messaging entities 380.

In some implementations, the messaging filter 300 can query a classification system 310 to identify a classification associated with a message. The classification system 310 can use a granular support vector machine process to identify hyperplane classifiers associated with a number of granules derived from a training dataset 390. The training dataset can include, for example, documents that have previously been classified. In some examples, the documents can be a library of spam messages identified by users and/or provided by third parties. In other examples, the documents can be a library of viruses identified by administrators, users, and/or other systems or devices. The hyperplane classifiers can then be compared to the attributes of new messages to determine to which classification the new message belongs.

In those implementations that include an information security policy, incoming and/or outgoing messages can be classified and compared to the information security policy to determine whether to forward the message for delivery. For example, the classification system might determine that the document is a technical specification document. In such an example, the information security policy, for example, might specify that technical specification documents should not be forwarded outside of an enterprise network, or only sent to specific individuals. In other examples, the information security policy could specify that technical documents require encryption of a specified type so as to ensure the security of the technical documents being transmitted. Other information security policies can be used.

In those implementations that include a virus policy, the virus policy can specify a risk level associated with communications that are acceptable. For example, the virus policy can indicate a low tolerance for viruses. Using such a policy, the messaging filter can block communications that are determined to be even a low risk for including viruses. In other examples, the virus policy can indicate a high tolerance for virus activity. In such examples, the messaging filter might only block those messages which are strongly correlated with virus activity. For example, in such implementations, a confidence metric can be associated with the classification. If the confidence metric exceeds a threshold level set by the virus policy, the message can be blocked. Other virus policies can be used.

In those implementations that include a spam policy, the spam policy can specify a risk level associated with communications that is acceptable to the enterprise network. For example, a system administrator can specify a high tolerance for spam messages. In such an example, the messaging filter 300 can filter only messages that are highly correlated with spam activity.

In those implementations that include a phishing policy, the phishing policy can specify a risk level associated with communications that are acceptable to the enterprise network. For example, a system administrator can specify a low tolerance for phishing activity. In such an example, the messaging filter 300 can filter even communications which show a slight correlation to phishing activity.

In those implementations that include a spyware policy, the spyware policy can specify a network tolerance for communications that might include spyware. For example, an administrator can set a low tolerance for spyware activity on the network. In such an example, the messaging filter 300 can filter communications that show even a slight correlation to spyware activity.
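The policy checks described in the preceding paragraphs might be sketched as a simple threshold lookup; the policy names, threshold values, and confidence semantics below are assumptions for illustration, not the disclosed filter.

```python
# Illustrative tolerance thresholds: a low value means low tolerance, so
# even low-confidence classifications are blocked; a high value means the
# filter only blocks strongly correlated messages.
POLICIES = {"virus": 0.10, "spam": 0.90, "phishing": 0.10, "spyware": 0.10}

def filter_message(classification, confidence, policies=POLICIES):
    """Block a message when its classification confidence meets or exceeds
    the tolerance threshold set by the matching policy; otherwise deliver."""
    threshold = policies.get(classification)
    if threshold is not None and confidence >= threshold:
        return "block"
    return "deliver"

print(filter_message("virus", 0.25))  # low virus tolerance -> "block"
print(filter_message("spam", 0.60))   # high spam tolerance -> "deliver"
```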

FIG. 4 is a block diagram of an example classification system 100 using distributed processing modules 400a-e. In some implementations, the classification system 100 can include a granule selection module 410, a distribution module 420, a prediction module 430 and an aggregation module 440. The classification system can operate to receive a training dataset 450, to derive a number of hyperplane classifiers from the training dataset, and then to predict the classification of incoming unclassified messages 460.

In some implementations, the granule selection module can receive the training dataset 450. The training dataset 450 can be provided, for example by a system administrator or a third party device. In some implementations, the training dataset 450 can include a plurality of records (e.g., tuples) which have previously been classified. In other implementations, the training dataset 450 can include a corpus of documents that have not been parsed. The granule selection module 410, in such implementations, can include a parser operable to extract attributes from the document corpus. In some implementations, the granule selection module 410 can randomly select granules by using a bootstrapping process on the tuples, and then projecting the tuples into a random subspace.

The distribution module 420 can operate to distribute the granules to a plurality of processing modules 400a-e for processing. In some implementations, the distribution module 420 can distribute the granules to processing modules 400a-e having the highest available processing capacity. In other implementations, the distribution module 420 can distribute the granules to processing modules 400a-e based upon the type of content being classified. In still further implementations, the distribution module 420 can distribute the granules to processing modules 400a-e based upon other characteristics of the processing modules 400a-e (e.g., availability of special purpose processing power, such as digital signal processing).

In some implementations, the distributed processing modules 400a-e can return a hyperplane classifier to the distribution module 420. The hyperplane classifiers can be provided to the prediction module 430. The prediction module 430 can also receive unclassified messages 460 and can use the hyperplane classifiers to provide granule classification predictions associated with each of the hyperplane classifiers.

The granule classification predictions can be provided to an aggregation module 440. The aggregation module 440 can operate to aggregate the granule classification predictions. In some implementations, the aggregation module 440 can aggregate the granule classification predictions to derive a final classification prediction based upon a simple voting process. In other implementations, the aggregation module 440 can use a distance metric associated with each of the granule classification predictions to weight the respective granule predictions. In still further implementations, the aggregation module 440 can use a Bayesian confidence score to weight each of the granule classification predictions. The Bayesian confidence score can be derived, for example, by validating each respective hyperplane classifier associated with a granule against out-of-bag data not selected for inclusion in the granule. The resulting final classification prediction can be provided as output of the classification system 100.

FIG. 5 is a block diagram of another example classification system 100 having distributed processing and prediction modules 500a-e. In some implementations, the classification system 100 can include a granule selection module 510, a distribution module 520 and an aggregation module 530. The classification system 100 can operate to distribute the processing associated with both the granule processing to derive the hyperplane classifiers associated with the granules and the prediction processing to provide granule predictions based upon the derived hyperplane classifiers.

In some implementations, the granule selection module 510 can receive the training dataset 540. The training dataset 540 can be provided, for example, by a system administrator or a third party device. In some implementations, the training dataset 540 can include a plurality of records (e.g., tuples) which have previously been classified. In other implementations, the training dataset 540 can include a corpus of documents that have not been parsed. The granule selection module 510, in such implementations, can include a parser operable to extract attributes from the document corpus. In some implementations, the granule selection module 510 can randomly select granules by using a bootstrapping process on the tuples, and then projecting the tuples into a random subspace.

In some implementations, the distribution module 520 can receive the granules from the granule selection module 510. The distribution module 520 can distribute the granules to one or more distributed processing and prediction modules 500a-e. The distribution module 520 can also distribute an unclassified message to the distributed processing and prediction modules 500a-e.

Each of the distributed processing and prediction modules 500a-e can operate to execute a support vector machine process on the received granule(s). The support vector machine process can operate to derive a hyperplane classifier(s) associated with the granule(s). Each of the distributed processing and prediction modules 500a-e can then use the derived hyperplane classifier(s) to generate a granule classification prediction (or predictions) associated with an unclassified message 550.

The granule classification predictions can be provided to an aggregation module 530. The aggregation module 530 can operate to aggregate the granule classification predictions. In some implementations, the aggregation module 530 can aggregate the granule classification predictions to derive a final classification prediction based upon a simple voting process. In other implementations, the aggregation module 530 can use a distance metric associated with each of the granule classification predictions to weight the respective granule predictions. In still further implementations, the aggregation module 530 can use a Bayesian confidence score to weight each of the granule classification predictions. The Bayesian confidence score can be derived, for example, by validating each respective hyperplane classifier associated with a granule against out-of-bag data not selected for inclusion in the granule. The resulting final classification prediction can be provided as output of the classification system 100.

FIG. 6 is a flowchart illustrating an example method used to derive granules and classification planes. At stage 610, a training dataset is received. The training dataset can be received, for example, by a granule selection module (e.g., classification system 100 of FIG. 2). The training dataset, in various examples, can include parsed or unparsed data describing attributes of an item for classification. In some examples, the item can include documents, deoxyribonucleic acid (DNA) sequences, chemicals, or any other item that has definite and/or quantifiable attributes that can be compiled and analyzed. In other examples, the training dataset can include a document corpus operable to be parsed to identify attributes of each document in the document corpus.

At stage 620, a plurality of granules are derived. The plurality of granules can be derived, for example, by a granule selection module (e.g., granule selection module 210 of FIG. 2). In various implementations, the granule selection module can use a bootstrapping process to identify a random sampling of a received training dataset. The granule selection module can then project the random sampling into a random subspace, thereby producing a granule. In various implementations, the granule is much smaller than the original dataset, which can facilitate more efficient processing of the granule than can be achieved using the entire training dataset. In some implementations, the granule further supports distributed processing, thereby facilitating the parallel processing of the derived granules.

At stage 630, the granules are processed using a support vector machine process. The granules can be processed, for example, by a processing module (e.g., processing module 220 of FIG. 2). The support vector machine process can operate to derive a hyperplane classifier associated with each granule. The hyperplane classifiers can be used to provide demarcations between given classifications of the data (e.g., spam or non-spam, virus or non-virus, spyware or non-spyware, etc.).

FIG. 7 is a flowchart illustrating an example method used to derive a classification associated with a new set of attributes. At stage 710, a new tuple and associated attributes can be received. The new tuple and associated attributes can be received, for example, by a prediction module (e.g., classification system 100 of FIG. 2).

At stage 720, a prediction can be generated based upon each hyperplane classifier. The prediction can be generated, for example, by a prediction module (e.g., prediction module 230 of FIG. 2). In various implementations, the prediction module can use the derived hyperplane classifiers to generate a granule classification prediction associated with each hyperplane classifier.

At stage 730, the granule classification predictions from each of the hyperplane classifiers can be aggregated. The predictions can be aggregated, for example, by an aggregation module (e.g., aggregation module 240 of FIG. 2). In various implementations, the granule classification predictions can be aggregated using a simple voting process, weighted by the distance between the datapoint and the respective hyperplane classifiers, or weighted by a Bayesian confidence based upon the confidence associated with the respective hyperplane classifiers.

FIG. 8 is a flowchart illustrating an example method used to derive granules and distribute granules to processing modules. The method is initialized at stage 800. At stage 805, a training dataset is received. The training dataset can be received, for example, by a granule selection module (e.g., classification system 100 of FIG. 2). The training dataset, in various examples, can include parsed or unparsed data describing attributes of an item for classification. In some examples, the item can include documents, deoxyribonucleic acid (DNA) sequences, chemicals, or any other item that has definite and/or quantifiable attributes that can be compiled and analyzed. In other examples, the training dataset can include a document corpus operable to be parsed to identify attributes of each document in the document corpus.

At stage 810, a counter can be initialized. The counter can be initialized, for example, by a granule selection module (e.g., granule selection module 210 of FIG. 2). In various implementations, the counter can be used to identify when enough granules have been generated based on the training dataset. For example, the number of granules for a given dataset can be a percentage (e.g., 50%) of the number of tuples in the training dataset.

At stage 815, a bootstrap aggregating process is used to randomly select tuples from among the training dataset. The bootstrap aggregating process can be performed, for example, by a granule selection module (e.g., granule selection module 210 of FIG. 2). In various implementations, the bootstrap aggregating process randomly selects a tuple from the training dataset, and replaces the tuple before selecting another tuple, until a predefined number of selections have been made. In such implementations, duplicates can be selected.

Thus, it is not known, prior to the bootstrap aggregating process, how many unique tuples will be selected, though the number of unique tuples is guaranteed to be no greater than the number of selections made. In some examples, the predefined number of selections can be based upon a percentage (e.g., 10%) of the size of the training dataset.

At stage 820, the random sample of tuples is projected into a random subspace. The projection into a random subspace can be performed, for example, by a granule selection module (e.g., granule selection module 210 of FIG. 2). The random subspace can be selected, in some implementations, by randomly selecting the features to be used within the granule, without replacement. For example, when a first feature is selected, the feature is not replaced into the group, but removed so as not to be selected a second time. Such random selection guarantees that each granule will include a predefined number of features.

At stage 825, the generated granule is labeled as the nth granule, where n is the current counter value. The granule can be labeled, for example, by a granule selection module (e.g., granule selection module 210 of FIG. 2).

At stage 830, the counter is incremented (n = n + 1). The counter can be incremented, for example, by a granule selection module (e.g., granule selection module 210 of FIG. 2).

At stage 835, the counter can be compared to a threshold to determine whether a predefined number of granules have been generated. If a predefined number of granules have not been generated, the process returns to stage 815 and generates additional granules until the specified number of granules have been generated. However, if the counter has reached the threshold at stage 835, the process can continue to stage 840, where the granules can be distributed. The granules can be distributed, for example, by a distribution module (e.g., distribution module 420, 520 of FIGS. 4 and 5, respectively). In various implementations, the granules can be distributed based upon the characteristics of a plurality of processing modules or the characteristics of the granules themselves.
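Pulling the stages of FIG. 8 together, a compact Python sketch of the granule-generation loop follows; the sample fractions, data structures, and function names are assumptions rather than the disclosed implementation.

```python
import random

def generate_granules(dataset, num_granules, sample_fraction=0.1,
                      attribute_fraction=0.5, rng=None):
    """Sketch of the loop through stages 810-835: repeat bootstrap
    sampling and random-subspace projection until the counter reaches
    the requested number of granules."""
    rng = rng or random.Random()
    n_tuples, n_attrs = len(dataset), len(dataset[0])
    granules = []
    counter = 0                                              # stage 810
    while counter < num_granules:                            # stage 835
        picks = [rng.randrange(n_tuples)                     # stage 815
                 for _ in range(max(1, int(n_tuples * sample_fraction)))]
        attrs = rng.sample(range(n_attrs),                   # stage 820
                           max(1, int(n_attrs * attribute_fraction)))
        rows = [[dataset[i][a] for a in attrs] for i in sorted(set(picks))]
        granules.append({"id": counter, "rows": rows, "attrs": attrs})  # 825
        counter += 1                                          # stage 830
    return granules   # ready for distribution at stage 840

dataset = [[random.random() for _ in range(20)] for _ in range(100)]
print(len(generate_granules(dataset, num_granules=50)))  # 50% of 100 tuples
```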

At stage 845, the granules can be processed. The granules can be processed, for example, by distributed processing modules (e.g., distributed processing modules 400a-e, 500a-e of FIGS. 4 and 5, respectively). In some implementations, the distributed processing modules can be executed by multiple processors. In additional implementations, the distributed processing modules can execute a support vector machine process on each generated granule to derive a hyperplane classifier associated with each generated granule.

The hyperplane classifier can be compared to unclassified data to derive a classification prediction associated with the unclassified data.

At optional stage 850, the hyperplane classifiers can be validated. The hyperplane classifiers can be validated, for example, by distributed processing modules (e.g., distributed processing modules 400a-e, 500a-e of FIGS. 4 and 5, respectively). In some implementations, each hyperplane classifier can be validated using respective out-of-bag data associated with the granule used to generate the hyperplane classifier. Thus, each hyperplane classifier can be tested to determine the effectiveness of the derived hyperplane classifier.

At optional stage 855, a determination is made as to which hyperplane classifiers to use in prediction modules based upon the validation. The determination of which hyperplane classifiers to use can be performed, for example, by a distributed processing module (e.g., distributed processing modules 400a-e, 500a-e of FIGS. 4 and 5, respectively). In some implementations, a threshold effectiveness level can be identified whereby, if the validation does not meet the threshold, the hyperplane classifier is not used for predicting classifications for unclassified datasets. For example, if a hyperplane classifier is validated as being correct less than 50% of the time, the classification associated with the hyperplane classifier is incorrect more often than it is correct. In some implementations, such a hyperplane classifier can be discarded as misleading with respect to the final classification prediction.

The method ends at stage 860. The method can be used to efficiently derive a plurality of hyperplane classifiers associated with a training dataset by distributing the granules for parallel and/or independent processing. Moreover, inaccurate hyperplane classifiers can be discarded in some implementations.

In various implementations of the above description, message filters can forward, drop, quarantine, delay delivery, or specify messages for more detailed testing. In some implementations, the messages can be delayed to facilitate collection of additional information related to the message.

The systems and methods disclosed herein may use data signals conveyed using networks (e.g., local area network, wide area network, internet, etc.), fiber optic medium, carrier waves, wireless networks (e.g., wireless local area networks, wireless metropolitan area networks, cellular networks, etc.), etc. for communication with one or more data processing devices (e.g., mobile devices). The data signals can carry any or all of the data disclosed herein that is provided to or from a device.

The methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by one or more processors. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform methods described herein.

The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions for use in execution by a processor to perform the methods' operations and implement the systems described herein.

The computer components, software modules, functions and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that software instructions or a module can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code or firmware. The software components and/or functionality may be located on a single device or distributed across multiple devices depending upon the situation at hand.

This written description sets forth the best mode of the invention and provides examples to describe the invention and to enable a person of ordinary skill in the art to make and use the invention. This written description does not limit the invention to the precise terms set forth. Thus, while the invention has been described in detail with reference to the examples set forth above, those of ordinary skill in the art may effect alterations, modifications and variations to the examples without departing from the scope of the invention.

As used in the description herein and throughout the claims that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise. Finally, as used in the description herein and throughout the claims that follow, the meanings of "and" and "or" include both the conjunctive and disjunctive and may be used interchangeably unless the context clearly dictates otherwise.

Ranges may be expressed herein as from "about" one particular value, and/or to "about" another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent "about," it will be understood that the particular value forms another embodiment. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint.

These and other implementations are within the scope of the following claims.