
Patent Searching and Data


Title:
METHOD FOR DETECTING ANOMALIES IN A DATA SET
Document Type and Number:
WIPO Patent Application WO/2020/060544
Kind Code:
A1
Abstract:
A computer-implemented method and system for detecting anomalies in an unlabeled data set of data records is provided. The method applies a classification method to a training data set obtained from the unlabeled data set to generate a classification model associating a predicted output class to each training data record. The predicted output class is compared with the original output class, in order to detect an anomalous data record in the presence of a discrepancy between the original and the predicted output class. The method may assign a confidence score to the predicted output class by the classification model, wherein the anomalous data record is detected on the basis of a threshold of the confidence score. For example, the classification method may be based on Boolean functions synthesis by a Shadow Clustering algorithm, and the classification model is in the form of a set of conditional rules.

Inventors:
MUSELLI MARCO (IT)
COSTACURTA MASSIMILIANO (IT)
VERDA DAMIANO (IT)
FERRARI ENRICO (IT)
Application Number:
PCT/US2018/051694
Publication Date:
March 26, 2020
Filing Date:
September 19, 2018
Assignee:
RULEX INC (US)
International Classes:
G06N7/02; G06F17/10; G06N5/02
Foreign References:
US20080109388A12008-05-08
US8418249B12013-04-09
US20110145178A12011-06-16
US20100293124A12010-11-18
Attorney, Agent or Firm:
CHANG, Jeffrey (US)
Claims:
CLAIMS

What is claimed is:

1. A computer-implemented method for detecting anomalies in an unlabeled data set, wherein the unlabeled data set comprises data records of input variables and at least one output variable dependent from one or more input variables, said method comprising:

a. from the unlabeled data set, generating, by a computing device, a training data set comprising training data records and original output classes, wherein:

i. each training data record is obtained by excluding from a corresponding data record at least the output variable, and

ii. an original output class corresponding to a value of the output variable of the corresponding data record is associated to the training data record;

b. applying, by the computing device, a classification method to the training data set, to generate a classification model associating a predicted output class to the training data record, wherein the predicted output class is selected from the original output classes associated to the training data records; and

c. comparing, by the computing device, the original output class and the predicted output class, wherein an anomalous data record is detected in a presence of a discrepancy between the original output class and the predicted output class.

2. The method of claim 1, further comprising assigning a confidence score to the predicted output class by the classification model, wherein the anomalous data record is detected based on a threshold of the confidence score.

3. The method of claim 2, wherein the predicted output class is the output class which optimizes a probability score assigned to each output class by the classification model.

4. The method of claim 3, wherein the confidence score of the predicted output class is assigned based on the probability scores associated to the output classes.

5. The method of claim 1, further comprising submitting the anomalous data record to a validation step, to generate a corrected data set.

6. The method of claim 1, wherein the classification model is in a form of a set of conditional rules of input variables.

7. The method of claim 6, wherein the classification method is based on Boolean functions synthesis, said method comprising:

a. converting the training data records into binary strings of a Boolean space by a coding that preserves ordering and distance;

b. generating at least one cluster, the cluster comprising binary strings covered by an implicant, wherein the corresponding training data records are associated to the same output class; and

c. from the implicant, generating a conditional rule.

8. The method of claim 7, wherein the coding is inverse only-one coding.

9. The method of claim 7, wherein the implicant is generated by a Shadow Clustering algorithm.

10. The method of claim 7, further comprising assigning one or more significance parameters to each conditional rule, wherein probability scores are assigned based on the significance parameters of the conditional rules satisfied by the training data record.

11. The method of claim 7, wherein a validation step of the anomalous data record is performed by or under supervision of a human operator based on the conditional rules verified by the anomalous data record.

12. The method of claim 1, wherein the input variables are related to at least one category selected from the group comprising names, codes, time values, address components, control parameters, and numeric values.

13. The method of claim 1, wherein the data records of the unlabeled data set are related to an application selected from the group comprising: auto insurance damage claims, a business process, medical patient records, a purchase order, and supply chain management.

14. The method of claim 1, wherein the method is integrated in a business or operations software application selected from the group comprising: enterprise resource planning (ERP), customer relationship management (CRM), product lifecycle management (PLM), and electronic health records management (EHRM).

15. An apparatus comprising:

a processor; and

memory storing computer-executable instructions that, when executed by the processor, cause the apparatus to:

from an unlabeled data set comprising data records of input variables and at least one output variable dependent from one or more input variables, generate a training data set comprising training data records and original output classes, wherein:

each training data record is obtained by excluding from a corresponding data record at least the output variable, and

an original output class corresponding to a value of the output variable of the corresponding data record is associated to the training data record;

apply a classification method to the training data set, to generate a classification model associating a predicted output class to the training data record, wherein the predicted output class is selected from the original output classes associated to the training data records; and

compare the original output class and the predicted output class, wherein an anomalous data record is detected in a presence of a discrepancy between the original output class and the predicted output class.

16. The apparatus of claim 15, wherein the computer-readable instructions, when executed by the processor, further cause the apparatus to:

assign a confidence score to the predicted output class by the classification model, wherein the anomalous data record is detected based on a threshold of the confidence score.

17. The apparatus of claim 16, wherein the predicted output class is the output class which optimizes a probability score assigned to each output class by the classification model.

18. The apparatus of claim 17, wherein the confidence score of the predicted output class is assigned based on the probability scores associated to the output classes.

19. The apparatus of claim 15, wherein the computer-readable instructions, when executed by the processor, cause the apparatus to:

submit the anomalous data record to a validation step, to generate a corrected data set.

20. The apparatus of claim 15, wherein the classification model is in a form of a set of conditional rules of input variables.

Description:
METHOD FOR DETECTING ANOMALIES IN A DATA SET

BACKGROUND

[0001] In data mining, anomaly detection is the identification of items, events or observations which do not fulfill the most frequent patterns in a dataset. Anomaly detection is applicable in a variety of domains, such as intrusion detection, fraud detection, fault detection, system health monitoring, event detection in sensor networks, and detecting ecosystem disturbances. It is often used in preprocessing to remove anomalous data from a dataset.

[0002] A review on anomaly detection may be found in Chandola, V., Banerjee, A., and Kumar, V. 2009. Anomaly detection: A survey. ACM Comput. Surv. 41, 3, Article 15 (July 2009).

[0003] The main methods for detecting anomalies in a data set are unsupervised or supervised methods, depending on whether they operate on an unlabeled or a labeled data set. In a labeled data set, the labels associated with a data record denote whether that data record is normal or anomalous. In an unlabeled data set, no information on the behavior of a data record is available. It should be noted that obtaining labeled data that is accurate as well as representative of all types of behaviors is often prohibitively expensive. Labeling is often done manually by a human expert, and hence substantial effort is required to obtain the labeled training data set. Typically, getting a labeled set of anomalous data instances that covers all possible types of anomalous behavior is more difficult than getting labels for normal behavior.

[0004] Unsupervised anomaly detection techniques detect anomalies in an unlabeled data set. Unsupervised anomaly detection is based on the assumption that the majority of the instances in the data set are normal, and looks for instances that seem to fit least with the remainder of the data set. Two main approaches are known: the statistical approach and clustering. In the statistical approach, anomalies are identified on the basis of a deviation from common statistical properties of a distribution, including mean, median, mode, and quantiles. For instance, an anomalous data point is one that deviates by a certain number of standard deviations from the mean. In clustering techniques, data records are grouped together in such a way that data records in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). Data records that are close to each other according to a suitable definition of distance (measure) form a cluster and are considered normal. Isolated data records that are far from a cluster are considered anomalies.
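The deviation-based statistical approach described above can be illustrated in a few lines of code. This is a generic sketch, not taken from the patent; the data values and the choice of two standard deviations as the cut-off are invented for illustration:

```python
import numpy as np

def zscore_anomalies(values, k=2.0):
    """Flag points deviating more than k standard deviations from the mean."""
    values = np.asarray(values, dtype=float)
    mean, std = values.mean(), values.std()
    if std == 0:
        return np.zeros(len(values), dtype=bool)
    return np.abs(values - mean) > k * std

data = [10.1, 9.8, 10.0, 10.2, 9.9, 25.0]
print(zscore_anomalies(data))  # only the last point is flagged
```

Note that such a method reports *that* the last point deviates, but not *why* in terms of the other variables of the record, which is the intelligibility gap discussed next.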

[0005] One disadvantage of unsupervised anomaly detection techniques is that, while they are able to identify anomalies, they do not provide results in an intelligible form which may be easily understood and analyzed. In other words, unsupervised anomaly detection techniques detect an anomalous data record, but they do not explain why it is anomalous. Therefore, no support is provided for validating the anomalous data record, or for deciding whether the data record is effectively anomalous.

[0006] Supervised anomaly detection is based on the availability of a labeled data set. A typical approach in such cases is to build a predictive model from the labeled data set, for associating any further data record not comprised in the training data set to a normal or anomaly class. Any further data record is therefore compared against the model to determine which class it belongs to and to detect whether it has a normal or an anomalous behavior. Typically, the predictive model is a classification model generated by applying a classification method to the training data set. Classification is used to learn a model (classifier) from a set of labeled data (training data set) and then to classify a test data record into one of the classes using the learned model (testing).

[0007] Classification-based anomaly detection techniques operate in a similar two-phase fashion. The training phase learns a classifier using the available labeled data set as training data set. The testing phase classifies a test data record as normal or anomalous, using the classifier. Among classification methods, methods based on decision trees, neural networks, and Support Vector Machines may be used. One drawback of supervised anomaly detection is that the labeled data set is assumed to be correct. Any anomalous data record in the labeled data set which is used as training data set may affect the classification model. Therefore, the classification model may be inaccurate. On the other hand, the classification model may be in an intelligible form.

BRIEF DESCRIPTION OF THE INVENTION

[0001] So far, classification methods have not been successfully applied to detect anomalies in an unlabeled data set. There is a need for a method of detecting anomalies in an unlabeled data set and providing results in an intelligible form for a human supervisor, thereby providing the anomalous data records and a model for interpreting the reasons why a data record is considered anomalous. More information for validating the anomalous data records may thus be generated.

[0002] The present specification relates to a computer-implemented method and system based on a classification approach for detecting anomalies in an unlabeled data set.

[0003] It is disclosed a computer-implemented method for detecting anomalies in an unlabeled data set comprising data records of independent variables and at least one variable which is known to depend on one or more of the independent ones. The independent variables will be referred to as inputs and the dependent one will be referred to as output. The method comprises a step of generating from the unlabeled data set, by a computing device, a training data set comprising training data records and original output classes. Each training data record is obtained by excluding from a corresponding data record at least the output variable; an original output class corresponding to the value of the output variable of the corresponding data record is associated to the training data record. Said method further comprises a step of applying, by the computing device, a classification method to the training data set, to generate a classification model associating to each training data record a predicted output class selected from a set of original output classes associated to the training data records. The method further comprises a step of comparing, by the computing device, the original output class and the predicted output class, in order to detect an anomalous data record in the presence of a discrepancy between the original and the predicted output class.

[0004] The computer-implemented method may further comprise a step of assigning a confidence score to the predicted output class by the classification model, wherein the anomalous data record is detected based on a threshold of the confidence score.

[0005] The predicted output class may be the output class which optimizes a probability score assigned to each output class by the classification model.

[0006] The confidence score of the predicted output class may be assigned based on the probability scores associated to the output classes.
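One way to derive such a confidence score from the per-class probability scores is the margin between the two highest scores: a predicted class that barely outscores the runner-up yields a low confidence, pushing the record below the anomaly threshold. This is an illustrative choice, not a formula mandated by the text:

```python
import numpy as np

def confidence_score(prob_scores):
    """Margin between the two highest probability scores assigned to the
    output classes; one simple way to realize a confidence score."""
    s = np.sort(np.asarray(prob_scores, dtype=float))[::-1]
    return float(s[0] - s[1]) if len(s) > 1 else float(s[0])

print(confidence_score([0.40, 0.35, 0.25]))  # ~0.05: weakly predicted class
print(confidence_score([0.90, 0.06, 0.04]))  # ~0.84: confidently predicted
```

A record whose predicted class is only weakly supported (low margin) would then be treated as a candidate anomaly under a suitable threshold.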

[0007] The anomalous data record may be submitted to a validation step to generate a corrected data set.

[0008] The classification model may be in the form of a set of conditional rules of input variables.

[0009] The classification method may be based on Boolean functions synthesis, wherein said method may comprise: a step of converting the training data records into binary strings of a Boolean space by a coding that preserves ordering and distance; a step of generating at least one cluster of binary strings, the cluster comprising binary strings covered by an implicant, wherein the corresponding training data records are associated to the same output class; and a step of generating a conditional rule from the implicant.

[0010] The implicant may be generated by a Shadow Clustering algorithm.

[0011] The coding may be the inverse only-one coding.

[0012] The method may comprise a step of assigning one or more significance parameters to each conditional rule, wherein the probability scores may be assigned based on the significance parameters of the rules satisfied by the training data record.

[0013] The validation step may be performed by or under the supervision of a human operator on the basis of the conditional rules verified by the anomalous data record.

[0014] The input variables may be related to at least one category selected from the group comprising names, codes, time values, address components, control parameters, and numeric values.

[0015] The data records of the unlabeled data set may be related to an application selected from the group comprising: auto insurance damage claims, a business process, medical patient records, a purchase order, and supply chain management.

[0016] The method may be integrated in a business or operations software application selected from the group comprising: enterprise resource planning (ERP), customer relationship management (CRM), product lifecycle management (PLM), and electronic health records management (EHRM).

[0017] It is also disclosed an apparatus comprising a processor and memory storing computer-executable instructions that, when executed by the processor, cause the apparatus to: from an unlabeled data set comprising data records of input variables and at least one output variable dependent from one or more input variables, generate a training data set comprising training data records and original output classes, wherein:

each training data record is obtained by excluding from a corresponding data record at least the output variable, and

an original output class corresponding to a value of the output variable of the corresponding data record is associated to the training data record;

apply a classification method to the training data set, to generate a classification model associating a predicted output class to the training data record, wherein the predicted output class is selected from the original output classes associated to the training data records; and

compare the original output class and the predicted output class, wherein an anomalous data record is detected in a presence of a discrepancy between the original output class and the predicted output class.

[0018] The computer-readable instructions, when executed by the processor, may further cause the apparatus to assign a confidence score to the predicted output class by the classification model, wherein the anomalous data record is detected based on a threshold of the confidence score.

[0019] The predicted output class is the output class which optimizes a probability score assigned to each output class by the classification model.

[0020] The confidence score of the predicted output class is assigned based on the probability scores associated to the output classes.

[0021] The computer-readable instructions, when executed by the apparatus, cause the apparatus to:

[0022] submit the anomalous data record to a validation step, to generate a corrected data set.

[0023] The classification model may be in a form of a set of conditional rules of input variables.

BRIEF DESCRIPTION OF DRAWINGS

[0024] Figure 1 shows a block diagram of an apparatus configured to detect anomalies in an unlabeled data set in accordance with aspects of the subject matter described herein.

[0025] Figure 2 shows a block diagram of an embodiment of the disclosed method.

[0026] Figure 3 shows a block diagram of another embodiment of the disclosed method, wherein a classification method based on Boolean function synthesis is used.

[0027] Figure 4 shows a block diagram of a further embodiment of the disclosed method, wherein a classification method based on a Support Vector Machine is used.

[0028] Figure 5 shows a block diagram of another embodiment of the disclosed method, wherein a classification method based on a neural network is used.

DETAILED DESCRIPTION

[0029] Examples described herein relate to methods and systems for detecting anomalies in a data set.

[0030] The data set is a structured collection of data organized in data records. The data of the data record correspond to values assumed by a set of variables. The variables may be ordered variables and nominal (or categorical) variables. A variable x_j is ordered when x_j varies within an interval B_j of the real axis and an ordering relationship exists between its values. A variable x_j is nominal when the set A_j of values assumed by x_j has no natural ordering, e.g. the employment of a customer or the ZIP code. The data records may have the same number of data, e.g., they have the same length. In the case a data record has a shorter length with respect to the length of the other data records, e.g., a value of a certain variable is missing, the value of that variable may be set to a reference value, for instance 0, or may be given the categorical attribute "Not Available".

[0031] The disclosed method places no restrictions on the type of variables. For example, they may represent names, codes, time values, address components, control parameters, or numeric values.
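The missing-value convention described above (a reference value such as 0 for ordered variables, a "Not Available" category for nominal ones) might look as follows in pandas; the column names here are invented for illustration and do not come from the patent:

```python
import pandas as pd

df = pd.DataFrame({
    "quantity": [10.0, None, 7.5],         # ordered variable
    "zip_code": ["16121", None, "20121"],  # nominal variable
})

# Ordered variables: missing values are set to the reference value 0.
df["quantity"] = df["quantity"].fillna(0)
# Nominal variables: missing values get the category "Not Available".
df["zip_code"] = df["zip_code"].fillna("Not Available")
print(df)
```

After this step every record has the same length, as the method assumes.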

[0032] In an embodiment, the data records in the unlabeled data set contain information related to a business process, such as a work order. The data records may be related to at least one category selected from the group consisting of product names, minimum quantities, lead times, and shipping methods.

[0033] In another embodiment, the data records in the unlabeled data set contain information related to medical patient records. The data records may be related to at least one category selected from the group consisting of standard codes for clinical diagnoses, prescribed treatments and medications, and medical equipment and supplies used.

[0034] In a further embodiment, the data records in the unlabeled data set contain information related to auto insurance damage claims. The data records may be related to at least one category selected from the group consisting of vehicle make and model, accident location and time, damage class, payment limit, and preferred repair provider.

[0035] Examples described herein have the advantage of detecting anomalies in parameters related to a real time or nearly real time production or logistic process with no or minimal supervision of a human operator, therefore reducing the risk that an entire batch of production is rejected.

[0036] The data set used herein may comprise an unlabeled data set; therefore it is not known whether a data record is normal or anomalous. What is known is that the value assumed by at least one variable, known as the output variable, is dependent from the value assumed by one or more other variables, indicated as input variables. Thereby, the value of the output variable in a data record is determined according to the value assumed by one or more other variables. In the unlabeled data set, it is known which variable is a dependent variable. The output variable may not depend from all the input variables, but only from one or some of the input variables. It is not required to know the specific input variable or variables from which the output variable depends. Moreover, an input variable may further depend from one or more other variables.

[0037] Therefore, in the disclosed method it is known that at least one variable is an output variable. In the case that there is more than one output variable, the disclosed method may be applied to each output variable independently.

[0038] According to one aspect described herein, the disclosed method is an unsupervised method for detecting anomalies in an unlabeled data set, by which is meant the identification of anomalous data records of the unlabeled data set, characterized by having a value of an output variable which does not conform to an expected value, as deduced from the whole unlabeled data set. An anomaly in the data set may originate from different causes. For instance, when the value of the output variable is the result of a measurement of a physical entity, the anomaly may be due to a statistical error or to a noise source affecting the measurement. In other cases, the anomaly may be due to a mistake in handling the data. In a further case, the value of the output variable may be correct, but an anomaly may be originated by some variation in the system underlying the data set.

[0039] In an example, the disclosed method may comprise a computer-implemented method, whereby it is carried out by or on a computer or other electronic apparatus capable of performing the prescribed method steps on the unlabeled data set with no or minimal intervention of a human operator. Therefore, also disclosed is a program comprising instructions which, when the program is executed by the computer, cause the computer to carry out the disclosed method.

[0040] The disclosed method or the related computer program may be integrated in business or operations software applications used by management or operational personnel in an enterprise or organization to collect, store, manage, reference, and work process information, such as those for enterprise resource planning (ERP), customer relationship management (CRM), product lifecycle management (PLM), and electronic health records management (EHRM).

[0041] Thereby, according to another aspect described herein, a computer-implemented method is disclosed which is particularly suitable for automatic data checking of a very large amount of data, being capable of processing unlabeled data sets comprising many thousands of data records, or even more, in a short time (e.g., of the order of minutes). This feature may be useful, for example, when the data to be checked relate to parameters of a continuously running process, such as a production process, which may require continuous monitoring and/or real-time intervention. Furthermore, the data records may comprise a large number of data; for example, a data record may comprise many hundreds of data, or even more.

[0042] Figure 1 depicts an apparatus 100, configured for detecting anomalies in an unlabeled data set in accordance with aspects of the subject matter described herein. The apparatus 100 comprises a processor 110 and a memory 120. The memory 120 may comprise random access memory (RAM), read-only memory (ROM), one or more hard drives, and/or any other type of computer-readable medium or memory. All or portions of apparatus 100 may reside on one or more computers or computing devices. Apparatus 100 or portions thereof may be provided as a stand-alone system or as a plug-in or add-in.

[0043] Apparatus 100 or portions thereof may include information obtained from a service (e.g., in the cloud) or may operate in a cloud computing environment. A cloud computing environment can be an environment in which computing services are not owned but are provided on demand. For example, information may reside on multiple devices in a networked cloud and/or data can be stored on multiple devices within the cloud.

[0044] The memory 120 stores computer-executable instructions which are executed by the processor in order to carry out the disclosed method. The memory 120 may further store process data, such as the unlabeled data set, the training data set, and/or data output. Alternatively, a separate memory, not shown in the figure, may be used for storing process data. The memory 120 and the processor 110 may be in communication for data exchange in both directions. The apparatus 100 may further comprise one or more output devices 130 for outputting process output data. In an embodiment, process output data may be in electronic form, such as an electronic file, and output device 130 may interface with another computing device, not shown in the figure, which receives the output data for further processing. For example, the output device 130 may comprise a network interface to communicate with other devices, such as via an external network. The external network may comprise a wired network, a wireless network, and/or a combination thereof. In another embodiment, output device 130 may comprise a display for displaying output data, such as a computer monitor.

[0045] Output data may comprise anomalous data records or a reference thereto, input variables identified as anomalous within an anomalous data record, corrected data records, parameters related to the confidence of the proposed correction, such as the confidence score, as well as an explanation of the proposed correction.

The disclosed method to detect anomalies in an unlabeled data set is based on a new approach of implementing a classification method. In a general classification method, a learning machine is trained on a first data set, called a training data set, comprising training data records, wherein each data record is associated to a corresponding output class. On the basis of the examples of the training data set, the classification method generates a classification model to predict or verify the output class associated to a further data record or to a second data set of data records which are not included in the training data set. In the prior art classification approach, the training data set is assumed to be correct, and the classification method is used to detect anomalies in data records other than those of the training data set, wherein an anomaly corresponds to an output class different from the output class predicted by the classification model. Moreover, an anomaly in the training data set, for instance corresponding to an erroneous output class associated to a training data record, may propagate to the classification model, thereby generating a wrong or inaccurate classification model.

[0046] Contrary to the approach of the prior art, in aspects described herein, a classification method is used to generate a classification model from a training data set, and the classification model is then used to detect anomalies in the data records of the same training data set. When the output class predicted for a training data record disagrees with its original output class, the corresponding original data record is detected as anomalous.
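The self-checking idea above can be sketched with a deliberately trivial classifier (majority class per input pattern), standing in for the rule-based, SVM, or neural-network classifiers the text mentions: train on the whole set, predict back on the same records, and flag each record whose original class disagrees with the prediction. The data values are invented for illustration:

```python
from collections import Counter

def detect_anomalies(records, original_classes):
    """Return indices of records whose original output class disagrees
    with the class predicted by a classifier trained on the same data."""
    # "Training": learn the majority output class for each input pattern.
    by_input = {}
    for rec, cls in zip(records, original_classes):
        by_input.setdefault(tuple(rec), []).append(cls)
    model = {key: Counter(classes).most_common(1)[0][0]
             for key, classes in by_input.items()}
    # "Testing" on the training records themselves: any discrepancy
    # between original and predicted class marks an anomaly.
    predicted = [model[tuple(rec)] for rec in records]
    return [i for i, (orig, pred) in enumerate(zip(original_classes, predicted))
            if orig != pred]

records = [[0], [0], [0], [0], [1], [1]]
classes = ["low", "low", "low", "high", "high", "high"]
print(detect_anomalies(records, classes))  # → [3]
```

Record 3 carries class "high" although every other record with the same input pattern carries "low", so the model trained on the full set predicts "low" for it and the discrepancy flags it as anomalous.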

[0047] As represented in Figure 2, the first step 201 of the disclosed method is to generate a training data set from the unlabeled data set. Thereby, according to another aspect described herein, a method is provided to solve a data mining problem by a classification approach.

[0048] The training data set comprises training data records, wherein each training data record is obtained from a corresponding original data record of the unlabeled data set by excluding from the original data record at least the output variable. In other words, the training data records may comprise a portion of the corresponding original data records, wherein the portion does not include the output variable. In an embodiment, the training data record comprises all the data of the corresponding data record with the exception of the output variable. Therefore, the training data set may comprise all the input variables. Then an original output class which corresponds to the value of the output variable of the data record is associated to each training data record. The output class may be coincident with the value of the output variable of the corresponding unlabeled data record; for instance, an unlabeled data record having an output variable assuming the nominal value "good" may be associated to the corresponding output class "good". In other cases, the values of the output variable may be discretized into output classes, whereby two unlabeled data records with output variables assuming values within a certain discretization interval correspond to the same output class. For instance, in the case of an ordered output variable x_j ∈ R, two unlabeled data records with output variables assuming values greater than 1 and lower than 2 may be associated to the output class 1-2.

[0049] It is understood that the generation of the training data set from the original data set may not give rise to a data set distinct from the unlabeled data set; it is sufficient that the input and output variables of the unlabeled data records are treated as components of the corresponding training data record and output class as provided by the disclosed method.
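The output-class discretization mentioned above (values between 1 and 2 collapsing to a class "1-2") could be implemented as follows; the interval edges are arbitrary example values, not taken from the patent:

```python
def discretize(value, edges):
    """Map an ordered output value to its discretization-interval class."""
    for lo, hi in zip(edges, edges[1:]):
        if lo < value <= hi:
            return f"{lo}-{hi}"
    return "out-of-range"

print(discretize(1.7, [0, 1, 2, 3]))  # → "1-2"
print(discretize(0.4, [0, 1, 2, 3]))  # → "0-1"
```

Two records whose output values fall in the same interval thus receive the same original output class before classification.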

[0050] As represented in Figure 2, the second step 204 of the disclosed method is to generate a classification model from the training data set and the original output classes. In principle, any classification method generating a classification model may be used in the aspects described herein, provided that the classification model associates to the training data record an output class selected from the set of output classes associated to the training data records. For instance, if each training data record is associated to an output class of the set of output classes comprising [1, 2, 4, 5], the classification model might not associate a data record to the output class [3], as this value is not present in the output classes.

[0051] In one embodiment, the classification method generates the classification model in the form of a black box. In other words, the model is constituted by algebraic functions which allow a prediction of the output class to be obtained. This shows what is predicted but does not enable a human expert to understand and interpret why it is predicted. Among black box classification techniques, neural networks or Support Vector Machine (SVM) techniques may be used.

[0052] In another embodiment, the classification method is a rule generation method generating a classification model in the form of one or more conditional rules of the type:

if {conditions} then {output class},

wherein:

{conditions} indicates a condition on one or more input variables or a set of conditions on one or more input variables of a data record linked by a logical operator, and

{output class} indicates an output class associated to the data record.

[0053] The logical operators linking different conditions may be AND and/or OR.

[0054] In one embodiment, {conditions} is the logical product (AND) of m_k conditions c_kl, with l = 1, ..., m_k, on the components x_j, whereas {output class} gives a class assignment y = p for the output. In general, according to the type of the variable x_j, a condition c_kl in the premise of the rule has one of the following forms:

• a threshold condition x_j > λ, x_j ≤ μ, or λ < x_j ≤ μ, where λ and μ are two values in the domain B_j of x_j, if x_j is an ordered variable,

• a membership condition x_j ∈ A, where A is a non-empty subset of the domain B_j, if x_j is a nominal variable.

[0055] One of the advantages offered by a rule generation method is that the logical rules may be understood and interpreted by a human expert. In one embodiment, the rule generation method is based on decision tree techniques. In an embodiment, the rule generation method is based on a shadow clustering method.

[0056] Illustrative classification methods and their implementation are described in the following of the present specification.

[0057] According to aspects described herein, the classification model associates a predicted output class to each training data record of the training data set in step 208 and the predicted output class is compared with the original output class in step 210, as represented in Figure 2; in the case of a discrepancy between the predicted output class and the original output class, an anomaly is detected, and the corresponding data record of the unlabeled data set is considered an anomalous data record. A discrepancy means that the original output class is different from the predicted output class.
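The comparison of steps 208-210 can be sketched as follows; the toy sign-based classifier merely stands in for the model generated at step 204 and is purely illustrative.

```python
# Sketch of steps 208-210: apply a fitted classification model to the
# training data records and flag any discrepancy as an anomaly.

def detect_anomalies(training_records, original_classes, predict):
    anomalies = []
    for i, rec in enumerate(training_records):
        predicted = predict(rec)
        if predicted != original_classes[i]:   # discrepancy -> anomaly
            anomalies.append((i, original_classes[i], predicted))
    return anomalies

# Illustrative stand-in classifier: class follows the sign of x1
records = [{"x1": -1.0}, {"x1": -2.0}, {"x1": 3.0}, {"x1": -1.5}]
classes = ["neg", "neg", "pos", "pos"]         # last label is suspicious

def predict(rec):
    return "neg" if rec["x1"] < 0 else "pos"

print(detect_anomalies(records, classes, predict))  # [(3, 'pos', 'neg')]
```

Each flagged tuple carries the record index, its original class and the predicted class, which is the information the validation step 211 needs.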

[0058] Once an anomaly has been detected, the anomalous data record may be then submitted to a validation step 211. The validation step 211 may confirm the original data record, or it may correct the original data record with the predicted output class, or the original data record may be rejected and excluded from the original data set. In one embodiment, the validation step 211 may be implemented by computer, in which case an automatic data correction is carried out on the basis of a criterion that takes into account the predicted output class and possibly the parameters characterizing the reliability of the prediction. In another embodiment, the validation step 211 is carried out by submitting the anomalous data records to a human expert, or operator. The manual validation step 211 may be used in the case of a classification method generating intelligible rules, specifically in the case of the Shadow Clustering method, wherein the human expert may also take advantage of the interpretation of the conditional rules verified by the anomalous data record. The validation step 211 is performed only on the anomalous data records, which typically are a few cases; therefore, the supervision or intervention of the human operator is needed only for a limited time and it has the advantage of taking into account the knowledge and experience acquired on the system underlying the data set. In the case of a validation step 211 requiring a human action, the threshold parameter of the confidence score may be fixed so that the anomalies detected do not exceed a certain amount, or a certain frequency. In this way, the human expert is required to address only the most critical cases.

[0059] As a result of the validation step 211, a corrected data set may be generated. The corrected data set may then be processed to detect further anomalies and the method may be reiterated until no more corrections are detected. Therefore, in the first iteration, a first classification model is generated by applying the classification method to a first training data set generated from the unlabeled data set, and, in the case that at least a first anomalous data record is detected on the basis of the prediction of the first classification model according to the disclosed method, the first anomalous data record may be validated in a first validation step. In the case that no anomalies are detected, or that an anomaly is detected but the original data record is confirmed in the first validation step, the original unlabeled data set does not comprise anomalies and therefore the whole data set is validated. In the opposite case, wherein an anomaly is detected and corrected in the validation step, a first corrected unlabeled data set is generated, which differs from the original unlabeled data set in at least one data record. The first corrected unlabeled data set may be submitted to a second iteration which detects anomalies according to the disclosed method. In the second iteration, the classification method is applied to the first corrected unlabeled data set to generate a second classification model, and anomalies are detected on the basis of the predictions of the second classification model, in which case a second corrected data set may be generated as in the first iteration.

[0060] In an embodiment described herein, the disclosed method associates a confidence score to the predicted output in step 209. This step is optional. The confidence score is a parameter, or a set of parameters, representative of the reliability or robustness associated to each prediction. Not all the predictions may have the same level of reliability.
For instance, a predicted output value that finds a basis on a multiplicity of unambiguous training data records, that is, all univocally assigning the same output class, may have a reliability which is greater than that of a predicted output value based on a single training data record, or based on a multiplicity of ambiguous data records assigning different output classes to identical or similar training data records. The confidence score may in general depend on the classification method and on the classification model generated therefrom. Moreover, in the case of a specific classification method and classification model, different algorithms may be implemented to generate the confidence score. The confidence score associated to a predicted output class may therefore assume different values, according to the specific classification method, classification model and algorithm implemented. The confidence score may assume a value comprised in a finite range defined by a first value and a second value, and the range again depends on the specific classification method, classification model and algorithm used.

[0061] Considering a specific classification method, classification model and algorithm for generating the confidence score, a greater value of the confidence typically corresponds to a higher reliability of the prediction. Usually, the first (minimum) value corresponds to a prediction which is considered totally unreliable, while the second (maximum) value corresponds to a prediction which is considered fully reliable. It is possible to implement algorithms that define a confidence score assuming a minimum value when the prediction is considered fully reliable, and a maximum value when the prediction is considered totally unreliable. In the following of the present specification, guidelines and examples for generating a confidence score when using different classification methods are given. Taking into account the complexity of the algorithms and the total number of data records typically involved in real applications, which may easily exceed many thousands of data records, the confidence score may be generated by a computing device.

[0062] Once the confidence score and its use are disclosed, and with the support of the following examples, a person skilled in the art may easily define different algorithms to implement the disclosed method according to aspects described herein.

[0063] The confidence score is used to trigger the anomaly detection of the training data record. A threshold of the confidence score is fixed, and a discrepancy between the predicted output class and the original output class is detected as an anomaly on the basis of the confidence score assigned to the predicted output value. If the maximum value of the confidence score corresponds to a prediction which is considered fully reliable, this may be the case when the confidence score is greater than, or greater than or equal to, the threshold value. If the maximum value of the confidence score corresponds to a prediction which is considered totally unreliable, this may be the case when the confidence score is less than, or less than or equal to, the threshold value. In other words, a discrepancy between the predicted output class and the original output class may be a condition to detect an anomaly, but it may not be sufficient, and an anomalous data record is detected only if the predicted output is considered sufficiently reliable.

[0064] By changing the value of the threshold of the confidence score, it is possible to tune the sensitivity of anomaly detection, thereby reaching a tradeoff between false positive cases (too many fake anomalies detected) and false negative cases (true anomalies not detected). Furthermore, it may be possible to adjust the threshold of the confidence parameter in order to detect a total number of anomalies in a certain range.
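A possible reading of the threshold logic of [0063]-[0064], assuming the convention that a higher confidence score means a more reliable prediction; the case tuples and the threshold search are illustrative.

```python
# Sketch of the confidence-gated anomaly test and of threshold tuning:
# an anomaly is flagged only when the classes disagree AND the prediction
# is reliable enough, and the threshold can be raised until the number of
# flagged cases fits the human validator's budget.

def flag(original, predicted, confidence, threshold):
    return predicted != original and confidence >= threshold

def tune_threshold(cases, max_anomalies):
    """Pick the lowest threshold whose anomaly count stays within bounds,
    so a human validator only sees the most critical cases."""
    for threshold in sorted({c for _, _, c in cases}):
        n = sum(flag(o, p, c, threshold) for o, p, c in cases)
        if n <= max_anomalies:
            return threshold
    return None

# (original class, predicted class, confidence) -- illustrative values
cases = [("a", "b", 0.9), ("a", "a", 0.8), ("b", "a", 0.4), ("a", "b", 0.6)]
print(tune_threshold(cases, 2))   # 0.6: keeps only the two most reliable anomalies
```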

[0065] In another embodiment described herein, the classification method or the classification model associates a predicted output class to the training data record on the basis of a probability score in step 208. This step is optional. Therefore, for each training data record, the classification method assigns a probability score to each output class of the set of possible output classes, and the predicted output class is the output class which optimizes the probability score. In the simplest formulation, the probability score corresponds to a probability, that is, it ranges from 0 to 1, and the sum of all the probability scores amounts to 1. A probability of 1 associated to an output class corresponds to the most reliable prediction that the classification model can provide. In the case of a probability as probability score, the predicted output class of a data record is the output class having the maximum probability. If a tie occurs, the class associated to more samples in the training set may be used. If the tie persists also from this point of view, it is broken according to the alphabetical ordering. When alternative algorithms are used, the probability scores may not be normalized to 1, but may nevertheless be transformed into the aforementioned probability.

[0066] The confidence score of the predicted output class may also be assigned on the basis of the probability scores associated to the output classes. Therefore, as an example, an algorithm implementing the confidence score may make it proportional to the probability score of the predicted output class, and/or to a measure of the gap between the probability score of the predicted output class and the second highest probability score. The confidence score may also depend on the number of training data records on which the prediction is based.
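The tie-breaking of [0065] and the gap-based confidence of [0066] might be combined as below; the scores, counts and class names are illustrative.

```python
# Sketch: pick the class with the highest probability score, breaking ties
# first by training-set frequency and then alphabetically; the confidence
# is the gap between the two best scores.

def predict_with_confidence(scores, class_counts):
    # scores: {class: probability score}; class_counts: {class: n records}
    ranked = sorted(scores,
                    key=lambda c: (-scores[c], -class_counts.get(c, 0), c))
    best, runner_up = ranked[0], ranked[1]
    return best, scores[best] - scores[runner_up]

cls, conf = predict_with_confidence({"good": 0.5, "bad": 0.3, "fair": 0.2},
                                    {"good": 40, "bad": 35, "fair": 25})
print(cls)    # good  (confidence is the 0.5 - 0.3 gap)
```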

Implementation of classification methods

1. Standard classification problem

[0067] In standard classification problems points x in a multidimensional space are to be associated with one of q possible classes. In particular, if d is the dimension of the domain D of x, its components x_j, with j = 1, ..., d, can have one of the following two types:

• ordered: when x_j varies within an interval B_j of the real axis and an ordering relationship exists between its values;

• nominal (or categorical): when the set A_j of values assumed by x_j has no natural ordering, e.g., the employment of a customer or the ZIP code.

[0068] Several types of variables that can be encountered in practical applications, such as integer or continuous attributes, dates, times or ordinal attributes, can be mapped into ordered variables, with possible changes in the definition of distance and/or of the operators that can act on them.

[0069] Any function g : D → {0, ..., q − 1}, called classifier, provides a possible output class y = g(x) in correspondence of any input pattern x ∈ D. Generally, the classification of a point x is not deterministic, but a probability P(x, y) can be defined, which provides a measure of the likelihood of assigning the class y ∈ {0, ..., q − 1} to the pattern x. Then, solving a standard classification problem consists in finding the optimal classifier g* that provides the most probable output class y* = g*(x) in correspondence of any input pattern x ∈ D, e.g., the class y* that maximizes the probability P(x, y) when y ∈ {0, ..., q − 1}.

[0070] In most cases it is supposed that no a priori knowledge about the function g* is available, but its behavior must be retrieved only by analyzing a collection of n observations S = {(x_i, y_i), i = 1, ..., n}, called training data set, wherein an observation is a training data record. It follows that any classification method aims at constructing a classifier g, e.g., a classification model, that approximates g* in some way, just by looking at the pairs included in S. A common way of achieving this result consists in minimizing the empirical error of g on the training set S, given by the fraction of training pairs (x_i, y_i) ∈ S for which g(x_i) ≠ y_i.

2. Classification methods based on Boolean functions synthesis

2.1 Description of the classification method

[0071] Classification methods based on Boolean functions synthesis may be used according to aspects described herein. They are also known as Logic Learning Machine (LLM).

[0072] Classification methods based on Boolean functions synthesis map the training data set S into binary strings of a Boolean space, thus obtaining the partial truth table of a Boolean function, which is then reconstructed to derive a set of conditional rules for the classification model g that solves the original classification problem.

[0073] For the sake of simplicity, consider the two-class case q = 2, where the output y can assume the value 0 or 1; more complex situations for q > 2 can be reduced to a sequence of two-class problems. The general procedure adopted by methods based on Boolean function synthesis is represented in Figure 3 and comprises a step 301, wherein the training data set comprising training data records and original output classes is generated from the unlabeled data set.

[0074] Then, in step 302, the training data records are converted into binary strings of a Boolean space, by a coding that preserves ordering and distance. As known in the art, a coding mapping a space A into a space B preserves ordering and distance if, given two elements x, y in A, their ordering relationship (x = y, x < y or x > y) and their distance is preserved after mapping them in B. The conversion into binary strings may comprise the following steps:

[0075] a. Discretization: Ordered variables x_j are discretized by choosing a proper set of cutoffs; as a result, each (ordered or nominal) x_j can be transformed into a positive integer variable u_j and the original domain D of the classification problem can be mapped on a subset K of the d-dimensional space of positive integer numbers.

[0076] For every ordered variable x_j a finite set of b_j − 1 cutoffs β_jl is determined, with l = 1, ..., b_j − 1, such that the function g assumes a constant value on every interval (−∞, β_j1], (β_j1, β_j2], ..., (β_j,bj−1, +∞). Thus, an integer variable u_j can be defined in the following way: u_j = k when x_j belongs to the kth of these intervals.

[0077] If integer variables u_j ∈ {1, 2, ..., b_j} are also used to map the b_j values of nominal variables x_j, the training set S is transformed into a collection U of n pairs (u_i, y_i), with i = 1, ..., n, u_i being a vector of d integers. Although the cutoffs β_jl can be chosen in a smart way according to a proper discretization algorithm in order to reduce the complexity of the classification problem at hand while maintaining the information included in the training set S, it is also possible to consider as possible cutoffs all the distinct values (apart from the last one) assumed by the variable x_j in S.
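The discretization of [0075]-[0077] can be sketched as follows, using as cutoffs all the distinct values of the variable apart from the last one; the sample values are illustrative.

```python
# Sketch: map an ordered variable x_j to a positive integer u_j by counting
# the cutoffs it exceeds, so that u_j = k when x_j falls in the k-th
# interval (beta_{k-1}, beta_k].

import bisect

def cutoffs_from_data(values):
    # All the distinct values assumed by x_j in S, apart from the last one
    return sorted(set(values))[:-1]

def discretize(x, cutoffs):
    # bisect_left counts the cutoffs strictly below x, matching half-open
    # intervals that are closed on the right
    return bisect.bisect_left(cutoffs, x) + 1

values = [1.2, 3.4, 3.4, 5.0]
cuts = cutoffs_from_data(values)              # [1.2, 3.4]
print([discretize(v, cuts) for v in values])  # [1, 2, 2, 3]
```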

[0078] b. Binarization: Through the proper coding preserving ordering and distance, each transformed integer variable u_j is mapped into a binary string z_j; then, by concatenating the d binary strings obtained in this way, the original training set S gives rise to two collections T and F of binary strings z, wherein z ∈ T if it derives from a pair (x_i, y_i) ∈ S with y_i = 1, and z ∈ F if it derives from a pair (x_i, y_i) ∈ S with y_i = 0.

[0079] In the binarization phase the b_j values of integer variables u_j are coded into binary strings z_j of length b_j through a coding that preserves ordering and distance. An illustrative coding is the inverse only-one coding, which associates with u_j = k the binary string having all the bits set to 1 apart from the kth bit, which is 0. Through the inverse only-one coding it is also possible to treat in a natural way missing values possibly included in the training set: in this case, a binary string with no 0 values is adopted as mapping.

[0080] Once all the integer variables u_j have been coded, the whole vector u is associated with the binary string z with length b obtained by concatenating the strings z_j derived by transforming the components u_j. In this way the original training set S gives rise to two collections T and F of binary strings z with length b: in T are included the binary vectors derived (through U) from the pairs (x_i, y_i) ∈ S with y_i = 1, whereas in F strings corresponding to patterns in S with y_i = 0 are inserted.
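A sketch of the inverse only-one coding and of the concatenation described in [0079]-[0080]; the function names are illustrative.

```python
# Sketch: inverse only-one coding maps u_j = k to a length-b_j string of 1s
# with a single 0 in the k-th position; a missing value becomes all 1s.
# Concatenating the per-variable strings yields the final binary string z.

def inverse_only_one(u, b):
    if u is None:                          # missing value: no 0 at all
        return [1] * b
    return [0 if k == u else 1 for k in range(1, b + 1)]

def binarize(u_vector, sizes):
    z = []
    for u, b in zip(u_vector, sizes):
        z.extend(inverse_only_one(u, b))
    return z

print(inverse_only_one(2, 4))    # [1, 0, 1, 1]
print(binarize([2, 1], [4, 3]))  # [1, 0, 1, 1, 0, 1, 1]
```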

[0081] Therefore, the use of the binary code typical of a computing device is intrinsic to the Logic Learning Machine.

[0082] The method further comprises a step 303 of generating one or more clusters of binary strings, wherein the binary strings in a cluster are associated to the same output class. Each cluster is identified or represented by an implicant, which covers all the binary strings of the cluster.

[0083] In step 303, binary strings that belong to the same output class and are close to each other according to a proper definition of distance are grouped together. A basic concept in the procedure is the notion of cluster. A cluster is the collection of all the binary strings having the value 1 in a fixed subset of components; as an example, the eight binary strings '01001', '01011', '01101', '11001', '01111', '11011', '11101', '11111' form a cluster since all of them have the value 1 in the second and in the fifth component.

[0084] The sets T and F are viewed as a portion of the truth table of a positive Boolean function f(z), which is then reconstructed by using a proper reconstruction algorithm that produces a set C of binary strings, called implicants. From C the Disjunctive Normal Form (DNF) of f(z) is readily obtained.

[0085] Suppose that the sets T and F generated by the binarization phase are disjoint, that is, they share no common elements (T ∩ F = ∅); if this is not the case, the elements belonging both to T and F are to be removed from T or from F to ensure the fulfillment of the hypothesis; therefore, in place of T and F the sets T′ = T\F and F′ = F\T are considered. Due to the properties of the inverse only-one coding, the sets T and F can be viewed as a portion of the truth table of a positive Boolean function f(z), which can be simply written in its Disjunctive Normal Form (DNF) as a logical sum (OR) of logical products (AND) among some of the b components of string z. The complement operator (NOT) is not involved in the DNF expression of f(z).

[0086] Thus, any algorithm for synthesizing a partially defined positive Boolean function (pdpBf) can be used to reconstruct a consistent f(z) from T and F, e.g., a function f(z) such that f(z) = 1 for every z ∈ T and f(z) = 0 for every z ∈ F.

[0087] In fact, since the application of the inverse only-one coding produces only binary strings having length b and including d bits with value 0, if T and F contain only a very small subset of the total possible strings of this kind, several consistent positive Boolean functions f(z) exist that verify the partial truth table determined by T and F. It follows that the method employed for positive Boolean function synthesis must adopt a proper induction principle for determining which of the consistent functions f(z) leads to the classifier g that best approximates the optimal g*. An illustrative technique of this kind is Shadow Clustering (SC), which will be described in the following section.

[0088] The notion of covering between binary strings is central in positive Boolean function synthesis: given two strings z, w ∈ {0,1}^b, we say that z covers w (in symbols z ≤ w) if z_k ≤ w_k for every k = 1, ..., b. At the end of its construction, SC (as any other method for synthesizing a pdpBf) generates a collection C of m binary strings in {0,1}^b, which are called implicants. The resulting positive Boolean function f(z) assumes value 1 for all and only the strings z covered by one of the implicants in C.

[0089] These strings can be easily retrieved by observing that the value 0 inside implicants plays the role of a don't care symbol: by substituting the value 1 in every place where a 0 is present, all the binary strings z covered by each implicant are found. From the set C the DNF of the resulting positive Boolean function f(z) can be readily derived, by writing the logical sum of m logical products, each one obtained from a different implicant in C, using as operands the components of z associated with the bits having value 1 in the implicant. The logical product derived from an implicant in this way assumes value 1 only in correspondence of binary strings z covered by that implicant. As a consequence, the DNF assumes value 1 for all the strings z covered by some implicant in C.
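The covering relation and the don't-care expansion of [0088]-[0089] can be sketched as follows; the implicant shown corresponds to the eight-string cluster of [0083].

```python
# Sketch: an implicant w covers a string z when w_k <= z_k for every
# component, i.e. the 0s of w act as don't-care positions.

from itertools import product

def covers(w, z):
    return all(wk <= zk for wk, zk in zip(w, z))

def covered_strings(w):
    """Enumerate all binary strings covered by an implicant by letting each
    0 position take both values (the don't-care expansion of [0089])."""
    free = [i for i, bit in enumerate(w) if bit == 0]
    for bits in product([0, 1], repeat=len(free)):
        z = list(w)
        for i, b in zip(free, bits):
            z[i] = b
        yield tuple(z)

w = (0, 1, 0, 0, 1)   # 1s in the second and fifth components
print(len(list(covered_strings(w))))   # 8: the cluster of [0083]
```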

[0090] Therefore, in step 303 at least one cluster of binary strings is generated, the cluster comprising binary strings covered by an implicant, wherein the corresponding training data records are associated to the same output class.

[0091] The disclosed method further comprises a step 304, wherein each implicant in C may be directly transformed into a conditional rule concerning the classification problem at hand, thus producing the desired model g that approximates the optimal g*. Because the Boolean function is positive, the logical operators linking different conditions of the conditional rule may be AND and/or OR.

[0092] After the application of the inverse only-one coding in the binarization phase, every bit included in the binary string z_j associated with an ordered variable x_j is associated with an interval in the domain of x_j. Similarly, every bit in the string z_j derived from a nominal variable x_j corresponds to a possible value assumed by x_j. This property makes it possible to directly transform any implicant in C into an intelligible rule concerning the classification problem at hand. It is sufficient to consider the substrings w_j of the implicants w ∈ C associated with the variables x_j and derive from them a set of conditions (linked through the AND operator) to be used in the premise of the corresponding rule.

[0093] Three different situations can be encountered:

1. The substring w_j includes only values 0; in this case all the possible binary substrings z_j are covered by w_j and therefore no condition on the associated variable x_j has to be included in the premise of the resulting rule.

2. The associated variable x_j is nominal; then, each bit of the substring w_j corresponds to a particular value assumed by x_j. Since only coding strings z_j having a value 0 in one of the positions where w_j also has a 0 are covered by w_j, the associated membership condition is x_j ∈ B_j, where B_j is the subset of nominal values for which a value 0 in w_j is present.

3. The associated variable x_j is ordered; in this case, the execution of SC always produces implicants w with a substring w_j including a single run of 0s, e.g., a single sequence of consecutive bits with value 0. It can be easily seen that only coding strings z_j having a value 0 in one of the positions of this run are covered by w_j. Since each bit in w_j is associated with an interval in the domain of x_j, it is straightforward to derive the whole interval (λ, μ] corresponding to the run of 0s. The resulting threshold condition on x_j to be included in the premise part of the resulting rule is then λ < x_j ≤ μ, which can become x_j ≤ μ (resp. x_j > λ) if the run of 0s includes the first (resp. last) bit of the substring w_j.
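The three cases above can be sketched as a single translation routine; the convention that `domain` holds the b_j − 1 cutoffs for an ordered variable and the b_j values for a nominal one is an illustrative assumption.

```python
# Sketch of step 304: translate one per-variable substring of an implicant
# into a rule condition, following the three cases of [0093].

def substring_to_condition(name, w_sub, domain, ordered):
    zeros = [i for i, bit in enumerate(w_sub) if bit == 0]
    if len(zeros) == len(w_sub):
        return None                    # case 1: no condition at all
    if not zeros:
        return None                    # all-1 substring: not produced by SC
    if not ordered:                    # case 2: membership condition
        allowed = [domain[i] for i in zeros]
        return f"{name} in {allowed}"
    # case 3: single run of 0s -> threshold condition on the interval
    lo, hi = zeros[0], zeros[-1]
    if lo == 0:
        return f"{name} <= {domain[hi]}"
    if hi == len(w_sub) - 1:
        return f"{name} > {domain[lo - 1]}"
    return f"{domain[lo - 1]} < {name} <= {domain[hi]}"

# Ordered variable with cutoffs [2.0, 3.5, 7.0]; run of 0s in the middle
print(substring_to_condition("x1", [1, 0, 0, 1], [2.0, 3.5, 7.0], True))
```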

[0094] In methods based on Boolean functions synthesis, the classification model comprises the conditional rules defined in step 304. The conditional rules are intelligible rules which may be interpreted by a human operator. Therefore, in the case that the validation step is performed by or under the supervision of a human operator, the validation step may occur on the basis of the conditional rules verified by the anomalous data record.

[0095] A further advantage of the use of classification methods based on Boolean functions synthesis, in particular of using a Shadow Clustering technique, is that the set of conditional rules may comprise overlapping rules, that is, a data record may satisfy more than one rule. It may also satisfy two conflicting rules, wherein a training data record satisfies two conditional rules associating the training data record to two different output classes. This makes it possible to take into account multiple patterns of functional dependency at the same time for the same record, while the use of decision trees forces to rely, progressively, on the most significant discriminating threshold for each input attribute, in turn.

[0096] Significance parameters may be assigned to each conditional rule in step 305, wherein the significance parameters express the importance or strength of the rule. For instance, a rule which originates from a large number of training records may be more significant than a rule originating from a few training records. Two illustrative significance parameters, covering C and error E, may be defined as follows.

[0097] Consider a classification model comprising a set R of n rules r_1, ..., r_n, wherein the rule r_k has one among N possible outputs ŷ_l. The number of patterns belonging to output class l is denoted as N_l.

[0098] The rule r_k has m_k conditions.

[0099] Any condition c_kj defines a domain D_kj in the input space, which can be either a set of values for nominal attributes or an interval for ordered attributes.

[0100] The rule r_k has covering C(r_k) and error E(r_k). C(r_k) is defined as the ratio of training data records with the output predicted by r_k which fulfill the hypotheses (or conditions) of r_k; E(r_k) is defined as the ratio of training data records with any output except the one predicted by r_k which fulfill the hypotheses of r_k.
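A possible computation of covering and error, assuming the ratios are taken over the records of the predicted class and over the records of the other classes respectively (the denominators are not made explicit in the text); rules are modelled as (condition, class) pairs for illustration.

```python
# Sketch of [0100]: covering C(r) and error E(r) of a rule on a training set.

def covering_and_error(rule, records, classes):
    condition, predicted = rule
    same = [r for r, c in zip(records, classes) if c == predicted]
    other = [r for r, c in zip(records, classes) if c != predicted]
    covering = sum(condition(r) for r in same) / len(same) if same else 0.0
    error = sum(condition(r) for r in other) / len(other) if other else 0.0
    return covering, error

records = [{"x": 1}, {"x": 5}, {"x": 6}, {"x": 2}]
classes = ["low", "high", "high", "high"]
rule = (lambda r: r["x"] > 4, "high")     # "if x > 4 then high"
print(covering_and_error(rule, records, classes))  # covering 2/3, error 0.0
```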

[0101] A probability score may be assigned in step 306 to each output class on the basis of the significance parameters of the rules satisfied by the training data record. As an example, the following illustrative procedure may be followed:

[0102] Let x be a training data record and let R(x) be the set of rules satisfied by x. A probability score w_l may be associated to each output class according to the following formula:

[0103] By observing the formula, we can notice that the probability score of the l class is maximum (and equal to N_l) for a pattern x if the product is equal to 0. The product itself, in turn, depends on the covering and error of the verified rules. More specifically, if a perfect rule were verified, the score would be equal to N_l; the score increases with covering and decreases with error.

[0104] A probability for each output class l is given by normalizing the probability scores w_l over all the output classes.

[0105] In step 308, a predicted output class is associated to each training data record by the set of conditional rules.

[0106] For instance, the predicted output class may correspond to the output class with the highest probability score.

[0107] A confidence score is then associated to the predicted output class in step 309. An exemplary confidence score may be defined for instance as the difference w_l1 − w_l2, where w_l1 is the highest score and w_l2 is the second highest score.

[0108] The original output class and the predicted output class are therefore compared in step 310 and an anomalous data record is detected in the presence of a discrepancy between the original output class and the predicted output class.

[0109] Once an anomaly has been detected, the anomalous data record may be then submitted to a validation step 311. The validation step 311 may confirm the original data record, or it may correct the original data record with the predicted output class, or the original data record may be rejected and excluded from the original data set. As a result of the validation step 311, a corrected data set may be generated.

Shadow Clustering algorithm

[0110] Starting from two disjoint sets T and F of binary strings with length b, the purpose of a method for reconstructing positive Boolean functions is to generate a collection C of implicants such that for each z ∈ T there is at least one w ∈ C that covers z (e.g., w ≤ z) and for each z ∈ F there is no w ∈ C that covers z. This target can be reached if T does not include a string v that covers some z ∈ F. If this is not the case, conflicting elements are to be removed from T or from F to ensure the fulfillment of the hypothesis. As described in the previous section, it is then immediate to write down from C the DNF expression of the resulting positive Boolean function f(z), which assumes value 1 in correspondence of all the binary strings z ∈ T and value 0 for all z ∈ F.

[0111] Since in general the sets T and F contain only a small portion of the truth table of f(z), several consistent DNF expressions for f(z) can be found and an induction principle has to be defined in order to achieve the optimal positive Boolean function f(z) that corresponds to the classifier g(x) that best approximates the optimal g*(x). A possible choice amounts to considering the Occam's Razor principle, according to which the simplest DNF expression (in terms of number of logical products and of operands inside them) has to be searched for. Other choices involve a proper measure that evaluates the quality of the resulting implicants.

[0112] In fact, if n_T = |T| (resp. n_F = |F|) is the number of elements in the set T (resp. F), for any binary string w we can define four quantities:

• the number TP(w) ( true positive ) of elements in T that are covered by w;

• the number FP(W) (false positive ) of elements in F that are covered by w;

• the number FN(w) = m- TP(W) (false negative ) of elements in T that are not covered by w;

• the number TN(H>) = n F - FP(W) ( true negative ) of elements in F that are not covered by w.
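These four quantities may be computed, for illustration, by the following hypothetical sketch, reusing the covering relation w ≤ z defined above:

```python
def covers(w, z):
    # w covers z iff every 1-bit of w is also 1 in z (w <= z bitwise)
    return all(not (wb == '1' and zb == '0') for wb, zb in zip(w, z))

def confusion(w, T, F):
    """Return (TP, FP, FN, TN) for a candidate implicant w against sets T, F."""
    tp = sum(covers(w, z) for z in T)   # elements of T covered by w
    fp = sum(covers(w, z) for z in F)   # elements of F covered by w
    fn = len(T) - tp                    # elements of T not covered by w
    tn = len(F) - fp                    # elements of F not covered by w
    return tp, fp, fn, tn
```

For example, on the sets F′ (positives) and T (negatives) of the example below, the all-zero implicant 000000 covers every string, yielding TP = 4, FP = 3, FN = TN = 0.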

[0113] Starting from these four quantities, several quality measures can be defined; a good choice is the novelty N(w) having the following form:

[0114] Once the quality measure Q(w) is selected, finding the collection C of implicants that achieves the maximum of Σ_{w ∈ C} Q(w) while covering all the elements in T is an NP-complete problem and consequently cannot be efficiently solved by any algorithm. A near-optimal solution can be found through the Shadow Clustering (SC) algorithm that adopts the following iterative approach:

1. Set C = ∅ and copy T into the set V.

2. Set w = 0.

3. For each index l such that w_l = 0:

a. tentatively set w_l = 1;

b. if the substring z_j including w_l does not contain 0 values, or contains two runs of 0s while x_j is an ordered variable, reset w_l = 0 and take another index into account;

c. determine the index l_max leading to the maximum value Q_max of Q(w).

4. If such an index l_max has been found, set w_{l_max} = 1 and go to Step 3.

5. Add the implicant w to the set C. Remove from V all the strings z that are covered by w (e.g., for which w ≤ z). If V is not empty go to Step 2.

6. Simplify the set C by merging elements and by removing from it redundant implicants w whenever possible.
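For illustration only, the iterative approach of Steps 1-6 may be sketched as follows; the quality measure Q and the validity check of Step 3b are passed in as functions (names are illustrative), and the simplification of Step 6 is omitted:

```python
def covers(w, z):
    # w covers z iff every 1-bit of w is also 1 in z (w <= z bitwise)
    return all(not (wb == '1' and zb == '0') for wb, zb in zip(w, z))

def shadow_clustering(T, F, Q, valid=lambda w: True):
    """Greedy sketch of the Shadow Clustering loop (Steps 1-5)."""
    C = []
    V = list(T)                                # Step 1: strings not yet covered
    while V:
        w = ['0'] * len(V[0])                  # Step 2: start from the zero string
        while True:                            # Steps 3-4: greedy bit switching
            best_i, best_q = None, Q(''.join(w), T, F)
            for i, bit in enumerate(w):
                if bit == '1':
                    continue
                w[i] = '1'                     # Step 3a: tentatively switch bit i
                cand = ''.join(w)
                if valid(cand) and Q(cand, T, F) > best_q:
                    best_i, best_q = i, Q(cand, T, F)
                w[i] = '0'                     # Step 3b: reset, try next index
            if best_i is None:                 # Step 3c: no improving index found
                break
            w[best_i] = '1'                    # Step 4: keep the best bit
        w = ''.join(w)
        C.append(w)                            # Step 5: add implicant, prune V
        V = [z for z in V if not covers(w, z)]
    return C                                   # Step 6 (simplification) omitted
```

With Q(w) = TP(w) − FP(w) as a simple stand-in quality measure, running the sketch on the example's sets T′ and F′ reproduces the implicant 000010 derived below.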

[0115] Initially the collection C is empty and T is copied into the set V that may be iteratively reduced to include the strings z not yet covered by elements in C (Step 1). Then, implicants w to be inserted into C are iteratively constructed by changing the bits with value 0 that maximize a desired quality measure Q(w) (Steps 2-5). In particular, once the l-th bit is selected at Step 3 and its value w_l is changed to 1, at Step 3b the variable x_j associated with the substring z_j including w_l is considered. If it does not contain 0 values, e.g., it does not cover any feasible value for x_j, or two runs of 0s are present in z_j, x_j being an ordered variable, then the bit w_l is reset to zero and another index is taken into account.

[0116] At Step 3c the index l_max leading to the maximum value Q_max for Q(w) is determined. If such an index cannot be found, the current implicant w is added to C, since changing further bits with value 0 does not increase the quality measure Q(w). Then, the strings z ∈ V that are covered by w are removed from V (Step 5) and Steps 2-5 are repeated for generating new near-optimal implicants to be added to C, if V is not empty.

[0117] Finally, at Step 6 elements in C that can be merged are treated and possibly redundant implicants w (covering only strings in T already covered by other elements of C) are removed from C, thus simplifying the resulting positive Boolean function f(z).

2.3 Example

[0118] Consider an illustrative unlabeled data set of two input variables, x_1 and x_2, and one output variable, y, as represented in Table 1. We assume that the variable x_1 denotes the set size for the items to be produced and that the variable x_2 denotes a production mode. According to these two features, it is assumed that the supply chain determines how to procure the raw materials for production: internally or externally. A variable y stores this information and it can assume the value "internal" or "external". As described, the output variable y depends on x_1 and x_2. Variable x_1 is numerical and variables x_2 and y are categorical.

Table 1: illustrative unlabeled data set

[0119] The training data records comprise variables x_1 and x_2, and they are associated to output classes corresponding to variable y.

[0120] The first step is the discretization of the variables. Discretization comprises transforming quantitative attributes, which can assume an infinite number of values, into discrete attributes, subdividing the set of possible values assumed by them into a number of intervals. Each of the intervals is associated with an integer value, so a quantitative attribute u_j is defined as follows:

[0121] Note that it may be necessary to choose the number of intervals b_j and the thresholds β_k, with k = 1, ..., b_j − 1, of each interval. The output value of each observation is considered, by trying to create intervals in such a way that, for two observations with different output classes, there is at least one attribute for which the two observations fall into different ranges. In other words, the aim is not to create ambiguities. It follows that:

[0122] Also nominal attributes are transformed in this phase; for instance, the attribute u_2 associated with x_2 is defined as:

[0123] Finally, output classes are redefined as 0 (internal) and 1 (external), so that the training data set with associated output classes after discretization is shown in Table 2.

Table 2: training data set after discretization
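For illustration, the discretization of a quantitative attribute into interval indices may be sketched as follows; the cut points 17 and 35 correspond to the range mentioned later in the example, but the helper names and boundary conventions are illustrative assumptions:

```python
import bisect

def discretize(value, thresholds):
    """Map a quantitative value to an integer interval index in 1..len(thresholds)+1.
    A value equal to a threshold falls into the upper interval (illustrative choice)."""
    return bisect.bisect_right(thresholds, value) + 1

# Hypothetical cut points for x_1: values < 17 -> 1, 17 <= v < 35 -> 2, v >= 35 -> 3
cuts = [17, 35]
```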

[0124] In the binarization phase, the discretized training data set is turned into a Boolean domain: each integer value assumed by the generic u_j attribute is substituted with a binary string. More specifically, each attribute u_j is encoded by the inverse only-one coding, by creating a string h_j, which consists of b_j bits with value 1, except for the bit corresponding to the value assumed by u_j.

[0125] For instance, u_1 = 3 is associated with the string h_1 = 1101. Each input vector u is then represented with a string of bits z, derived from the chaining of strings h_j. The resulting string z is composed of substrings z_j; denoting by b its total length, the new training set has input values in {0, 1}^b and output values in {0, 1}.
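The inverse only-one coding and the chaining of the strings h_j may be sketched, for illustration, as follows (function names are illustrative):

```python
def inverse_only_one(u: int, b: int) -> str:
    """Encode a value u in {1, ..., b} as b bits with value 1,
    except for the bit at position u, which is 0 (e.g. u=3, b=4 -> '1101')."""
    return ''.join('0' if k == u else '1' for k in range(1, b + 1))

def binarize(record, sizes):
    """Chain the per-attribute codings h_j into one binary string z."""
    return ''.join(inverse_only_one(u, b) for u, b in zip(record, sizes))
```

For example, a record with u_1 = 3 (b_1 = 4) and u_2 = 2 (b_2 = 2) yields the string 110110, which indeed appears in the set T below.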

[0126] This training set can be divided into two subsets T and F containing the strings z associated with the output class y = 1 and y = 0, respectively:

• T= {101110, 110110, 110110}

• F = {011101, 011101, 101110, 111001, 111001}

[0127] The training data set and corresponding output classes obtained after binarization are reported in Table 3:

Table 3: training data set after binarization

[0128] The next step is to reconstruct the positive Boolean function by the Shadow Clustering algorithm from the binarized training data set, to define a set of implicants each covering clusters of binary strings.

[0129] As the sets T and F generated by the binarization phase are not disjoint (in fact, T ∩ F ≠ ∅), we define the sets T′ = T \ F and F′ = F \ T and then consider the two couples of sets:
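For illustration, the set differences T′ = T \ F and F′ = F \ T may be computed while preserving duplicated records as follows (the helper name is illustrative):

```python
def multiset_difference(A, B):
    """Keep the strings of A that do not appear anywhere in B,
    preserving duplicates within A (multiset-style difference)."""
    excluded = set(B)
    return [z for z in A if z not in excluded]

T = ["101110", "110110", "110110"]
F = ["011101", "011101", "101110", "111001", "111001"]
T_prime = multiset_difference(T, F)   # removes 101110, shared with F
F_prime = multiset_difference(F, T)
```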

[0130] The application of the Shadow Clustering algorithm produces for T′ the implicant:

v_1 = 000010

and, for F, the implicants:

v_2 = 000001

v_3 = 001110

[0131] Consider the problem defined by Table 3 and suppose that implicants for subset F have to be constructed. To this aim, the sets F′ and T are considered. Let us employ only the novelty as a quality measure and let us consider the case α = 1, corresponding to the novelty defined in equation (2). According to the Shadow Clustering procedure (see the corresponding paragraph), the implicant is initialized to 000000. The 000000 implicant covers all the Boolean strings, both in F′ and in T, therefore TP = 4, FP = 3, FN = TN = 0 and, according to equation (2), N(v) = 0.

[0132] Then the iteration described in step 3 of the Shadow Clustering procedure is started. Each bit is switched to 1 and the quality measure of the obtained implicant is computed. The results are reported in Table 4:

Table 4: Shadow Clustering, first iteration

[0133] Notice that, since the substring of an implicant associated with an input variable should include only a run of 0s, not all the potential implicants are valid. In particular, implicants 010000 and 001000 are not valid since each of them includes two runs of 0s in the part of the implicant associated with x_1 and therefore is not considered at Step 3b of the Shadow Clustering procedure. According to Table 4, the implicant with the highest quality measure is 000001 and is therefore selected at Step 4 of the Shadow Clustering procedure. After this first iteration the built implicant does not generate false positives and therefore no other bits should be switched to 1. As a matter of fact, as Table 5 shows, the quality measures associated with the new potential implicants at a new iteration are lower than the quality measure of the starting implicant (000001) and therefore, according to Step 3c of Shadow Clustering, no bit is switched to 1 and the iteration at Step 3 stops.

Table 5: Shadow Clustering, second iteration

[0134] Notice that the implicant 000011 is not valid since it contains only 1s in the part associated with x_2 (see point 3b of the Shadow Clustering procedure). Through this procedure, we derived the v_2 implicant. Similarly, also v_1 and v_3 are derived by Shadow Clustering.

[0135] From the implicants, taking into account the ranges of the discretized variables corresponding to bits having value 1, it is possible to generate the following conditional rules:

[0136] Referring to the three rules extracted in the example, their covering and error are the following:

[0137] Scoring all the training data records in the training data set except the 3rd and the 4th is trivial: each of them only verifies one rule, predicting the "internal" (1st, 2nd, 7th and 8th training data records) or "external" (5th and 6th training data records) output. Concerning the 3rd and the 4th training data records (x_1 = 20, x_2 = b), which instead verify conflicting rules, the output scores w_0 and w_1 are computed as follows:

[0138] Then, class 1, that is "external", is predicted for the two training data records. This corresponds to suggesting a correction for the 4th training data record, even if there are two identical instances with two different output classes and, consequently, the ambiguity cannot be trivially solved by choosing the most frequent output class. The correction is proposed according to the behavior of the other similar training data records in the training data set with respect to the functional dependency. In other words, the algorithm suggests "external" for the 4th training data record because, even if the "internal" class is more frequent in the whole training data set, all the other training data records in the training data set with x_1 being in the same range (that is, 17-35) and with the same value of x_2 (that is, b) are associated with that output class. The following Table 6 summarizes the example:

Table 6. Summary of the results on the illustrative training data set

3. Classification methods based on decision tree

[0139] A decision tree is a flowchart-like structure in which each internal node represents a "test" on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (decision taken after computing all attributes). The paths from root to leaf represent classification rules.

[0140] Classification methods based on decision trees are described in, for example, Quinlan, J. Ross. C4.5: Programs for Machine Learning. Elsevier, 2014. The classification model is in the form of disjoint conditional rules, that is, each data record satisfies only one rule.

[0141] Exemplary parameters used in the disclosed method may be defined as follows. Alternative definitions of the parameters falling within the scope of the invention may be defined as well.

[0142] Significance parameters, for instance covering and error, may be defined as in the case of classification methods based on Boolean function synthesis.

[0143] Both a confidence and a probability score can be associated to each output class by the same procedure described for Shadow Clustering, applying it on the conditional rules constituting the decision tree. The only difference is that, since the rules are disjoint, each record cannot have more than one output score greater than 0.

4. Classification methods based on SVM

[0144] SVM is a classifier based on the identification of an optimal hyperplane of separation between different classes.

[0145] Figure 4 represents the general procedure adopted by methods based on SVM. A Support Vector Machine constructs a hyperplane or set of hyperplanes in a high- or infinite- dimensional space, which can be used for classification, regression, or other tasks. Intuitively, a good separation can be achieved by the hyperplane (or, more generally, by the geometrical kernel) that has the largest distance to the nearest training-data point of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier.

[0146] Classification methods based on SVM are described in, for example, Byun, Hyeran, and Seong-Whan Lee. "Applications of support vector machines for pattern recognition: A survey." Pattern recognition with support vector machines. Springer, Berlin, Heidelberg, 2002.

[0147] Therefore, the classification model is in the form of a classification function, whose shape makes it possible to distinguish output classes with the largest possible margin.


[0149] From the unlabeled data set, the training set comprising the training data records (support vectors) and the corresponding original output classes is generated in step 401.

[0150] The support vectors are then represented in a projected space defined by the input variables in step 402.

[0151] A hyperplane in the projected space is then determined in step 403, wherein the hyperplane divides the projected space into at least two regions, each region corresponding to one output class.

[0152] The class y for any input vector x is then given by the following formula, where K(·,·) denotes a kernel function used to perform a non-linear classification by constructing an optimal hyperplane in the projected space:

[0153] Different well-known training algorithms can be applied to compute the coefficients α_i and the offset b.
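For illustration, assuming the usual dual form y(x) = sign(Σ_i α_i y_i K(x_i, x) + b) with an RBF kernel as one possible choice of K(·,·) (names and kernel choice are illustrative assumptions, not fixed by the disclosure), the classification function may be sketched as:

```python
import math

def rbf_kernel(u, v, gamma=1.0):
    """One possible kernel K(u, v) = exp(-gamma * ||u - v||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def svm_predict(x, support_vectors, labels, alphas, offset, kernel=rbf_kernel):
    """Evaluate sign( sum_i alpha_i * y_i * K(x_i, x) + b ) for classes {-1, +1}."""
    s = sum(a * y * kernel(sv, x)
            for sv, y, a in zip(support_vectors, labels, alphas))
    return 1 if s + offset >= 0 else -1
```

The coefficients α_i and the offset b would be produced by any of the standard training algorithms mentioned above; here they are supplied directly.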

[0154] Therefore, the classification model is generated in the form of a classification function in step 404, associating an output class to an input data record.

[0155] The classification model is then applied to the training data records, thereby associating the corresponding predicted output class to each training data record in step 408.

[0156] The original output class and the predicted output class are therefore compared in step 410 and an anomalous data record is detected in the presence of a discrepancy between the original output class and the predicted output class.

[0157] A confidence score may be assigned to each predicted output to trigger the anomaly detection in step 409. The confidence score may be defined as follows. Alternative definitions of the parameters falling within the scope of the invention may be defined as well.

[0158] When the Support Vector Machine model is applied to a record, it returns (together with the predicted value for that record) a continuous value, which can be considered a significance parameter for the prediction.
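For illustration, steps 408-410 may be sketched in a classifier-agnostic form, where `predict` and `confidence` stand for the trained model's prediction and significance parameter; the function and threshold names are illustrative assumptions:

```python
def detect_anomalies(records, original_classes, predict, confidence, threshold=0.5):
    """Flag a record as anomalous when the predicted class differs from the
    original one and the model's confidence reaches the given threshold."""
    anomalies = []
    for i, (x, y) in enumerate(zip(records, original_classes)):
        y_pred = predict(x)
        if y_pred != y and confidence(x) >= threshold:
            anomalies.append((i, y, y_pred))   # (record index, original, predicted)
    return anomalies
```

Raising the threshold makes the detection more conservative: low-confidence discrepancies are not reported as anomalies.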

[0159] The confidence score represents an approximation of the maximum a posteriori (MAP) probability, so that a score may be associated with each output class based on this value, as shown in Sollich, Peter. "Bayesian methods for support vector machines: Evidence and predictive class probabilities." Machine Learning 46.1-3 (2002).

[0160] Once an anomaly has been detected, the anomalous data record may be then submitted to a validation step 411. The validation step 411 may confirm the original data record, or it may correct the original data record with the predicted output class, or the original data record may be rejected and excluded from the original data set. As a result of the validation step 411, a corrected data set may be generated.

5. Classification methods based on neural networks

[0161] Neural networks can be represented as an interconnection of learning layers, each of which is constituted by a linear approximator.

[0162] Classification methods based on neural networks are described in, for example, Gurney, Kevin. An introduction to neural networks. CRC press, 2014.

[0163] Figure 5 represents the general procedure adopted by methods based on a neural network.

[0164] Neural networks are comprised of a sequence of learning layers, each layer comprising one or more nodes (neurons), wherein each neuron is interconnected with the nodes (e.g., all the nodes) of the previous layer in the sequence of layers (neuron input) and to the nodes (e.g., all the nodes) of the next layer in the sequence of layers (neuron output). Each neuron is characterized by an activation function which is determined by the training procedure of the neural network on the training data set. The neural network comprises at least two layers, such as a first layer which receives the data records at the input and a last layer whose output is associated to an output class.
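For illustration, the forward computation of such a sequence of layers may be sketched as follows, with a sigmoid activation function as one possible choice and arbitrary placeholder weights (all names and values are illustrative, not trained parameters):

```python
import math

def sigmoid(t):
    """One possible activation function."""
    return 1.0 / (1.0 + math.exp(-t))

def layer(inputs, weights, biases):
    """One learning layer: each neuron applies the activation function to a
    weighted sum of the previous layer's outputs plus a bias."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Feed a data record through the sequence of layers; the last layer's
    output can be read as a class score (e.g. score >= 0.5 -> class 1)."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x
```

The last-layer output can serve both as the predicted class and, as discussed below, as a confidence score approximating the a posteriori probability.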

[0165] From the unlabeled data set, the training data set comprising the training data records and the corresponding original output classes is generated in step 501.

[0166] Then, the training data set is introduced to a first layer of the neural network in step 502; the activation functions of the nodes of the neural network are then iteratively determined in step 504, wherein the output of the last layer of the neural network is associated to an output class.

[0167] The classification model is in the form of the group of the activation functions characterizing the neural network and the training data set.

[0168] The classification model is then applied to the training data records in step 508, by introducing each training data record to the neural network with the activation functions previously defined and determining the corresponding predicted output class.

[0169] The original output class and the predicted output class are therefore compared in step 510 and an anomalous data record is detected in the presence of a discrepancy between the original output class and the predicted output class.

[0170] A confidence score may be assigned to each predicted output to trigger the anomaly detection in step 509. The confidence score may be defined as follows. Alternative definitions of the parameters falling within the scope of the invention may be defined as well.

[0171] When the Neural Network model is applied to a record, it returns (together with the predicted value for that record) a continuous value, which can be considered a significance parameter for the prediction.

[0172] The confidence score represents an approximation of the maximum a posteriori (MAP) probability, so that a score may be associated with each output class based on this value, as shown in Richard, Michael D., and Richard P. Lippmann. "Neural network classifiers estimate Bayesian a posteriori probabilities." Neural computation 3.4 (1991).

[0173] Once an anomaly has been detected, the anomalous data record may be then submitted to a validation step 511. The validation step 511 may confirm the original data record, or it may correct the original data record with the predicted output class, or the original data record may be rejected and excluded from the original data set. As a result of the validation step 511, a corrected data set may be generated.