

Title:
MALWARE MODELLING
Document Type and Number:
WIPO Patent Application WO/2017/020928
Kind Code:
A1
Abstract:
Provided are machine-readable instructions which represent malware model implementations and cause malware operation symptoms in a computing environment.

Inventors:
REINECKE PHILIPP (US)
CRANE STEPHEN J (US)
Application Number:
PCT/EP2015/067660
Publication Date:
February 09, 2017
Filing Date:
July 31, 2015
Assignee:
HEWLETT PACKARD DEVELOPMENT CO LP (US)
REINECKE PHILIPP (US)
CRANE STEPHEN J (US)
International Classes:
G06F21/56
Other References:
ANDREAS MARX: "A Guideline to Anti-Malware-Software testing Trend Micro Student Runner Up Award", 7 March 2000 (2000-03-07), XP055254943, Retrieved from the Internet [retrieved on 20160303]
SHYAMASUNDAR R K ET AL: "Malware: From Modelling to Practical Detection", 15 February 2010, DISTRIBUTED COMPUTING AND INTERNET TECHNOLOGY, SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 21 - 39, ISBN: 978-3-642-11658-2, XP019138285
Attorney, Agent or Firm:
BOEHMERT & BOEHMERT ANWALTSPARTNERSCHAFT MBB (DE)
Claims:
CLAIMS

1. A computer-implemented method of testing malware detector implementation effectiveness, comprising: providing a malware detector implementation to detect malware operations in a computing environment; configuring a malware model with different parameter sets to generate different implementations of the malware model, wherein different implementations of the malware model cause different malware operation symptoms in the computing environment; and assigning an effectiveness indicator to the malware detector implementation based on an evaluation of malware model implementations for which the malware detector implementation indicates malware operations in the computing environment.

2. The computer-implemented method of Claim 1, wherein the symptoms caused by the malware model implementations are benign to the computing environment.

3. The computer-implemented method of Claim 2, wherein operations of the malware model implementations are free from impeding operation of processes in the computing environment, intruding entities of the computing environment, allowing unauthorized access to entities of the computing environment, or gathering sensitive information from entities of the computing environment.

4. The computer-implemented method of Claim 1, wherein the symptoms caused by a malware model implementation are at least in part based on a stochastic process incorporated by the malware model implementation.

5. The computer-implemented method of Claim 4, wherein the stochastic process is defined at least in part by a probability parameter, wherein a first parameter set of the parameter sets assigns a probability value to the probability parameter.

6. The computer-implemented method of Claim 1, wherein the symptoms caused by a malware model implementation exhibit at least one of a temporal pattern, a spatial pattern, and a structural pattern.

7. The computer-implemented method of Claim 6, wherein the at least one exhibited pattern relates to at least one of data traffic, resource accesses, resource consumption, and stored data.

8. A system, comprising: a malware detector implementation to detect malware operations directed at the system; and a malware model, wherein the malware model has an interface which allows the malware model to be configured with different parameter sets to generate different implementations of the malware model, wherein the different implementations of the malware model cause varying symptoms of malware operation directed at the system.

9. The system of Claim 8, further comprising: a classifier which assigns an effectiveness indicator to the malware detector implementation based on an evaluation of malware model implementations for which the malware detector implementation indicates malware operations directed at the system.

10. The system of Claim 8, wherein each malware model implementation has an interface which allows the system to start and to stop operation of the malware model implementation.

11. The system of Claim 8, wherein a malware model implementation incorporates a stochastic process.

12. The system of Claim 11, wherein the stochastic process is defined at least in part by a probability parameter, wherein a first parameter set of the parameter sets assigns a probability value to the probability parameter.

13. The system of Claim 8, wherein a malware model implementation causes symptoms which exhibit at least one of a temporal pattern, a spatial pattern and a structural pattern.

14. The system of Claim 13, wherein the at least one exhibited pattern relates to at least one of data traffic, resource accesses, resource consumption, and stored data.

15. A computer-readable memory storing: non-transient machine-readable instructions which represent a malware model; and machine-readable instructions which when executed assign an effectiveness indicator to a malware detector implementation based on an evaluation of malware model implementations for which the malware detector implementation indicates a malware attack or a presence of malware in a computing environment.

Description:
MALWARE MODELLING

[0001] Malware may pose a serious threat to computer security. To avoid successful malware attacks, a computing environment may employ a malware detector. Once malware is detected, appropriate action to protect the computing environment such as removal of the malware and closing of a backdoor may be taken. However, as malware continuously evolves, the need for more and more sophisticated implementations of malware detectors is never-ceasing.

BRIEF DESCRIPTION OF DRAWINGS

[0002] The following detailed description refers to the appended drawings in which:

FIG. 1 schematically illustrates a computing environment comprising a system emulating symptoms of malware operations by a malware model;

FIG. 2 is a flow diagram illustrating a process of testing malware detector implementation effectiveness in the computing environment; and

FIG. 3 schematically illustrates the system in connection with a network entity.

DETAILED DESCRIPTION

[0003] FIG. 1 schematically illustrates entities of a computing environment 10. The computing environment 10 comprises a system 12 having a memory 14 and a processor 16, wherein the processor 16 may be configured to execute machine-readable instructions stored in the memory 14. The memory 14 may comprise a storage device, e.g., a magnetic or optical storage device. The processor 16 may be a single-core or multi-core central processing unit (CPU). The memory 14 may store a first machine-readable instruction set representing a malware detector implementation 18 or a multitude of malware detector implementations 18 to detect malware operations directed at the system 12, e.g., malware operations directed at intruding and/or exploiting the system 12 (such as malware attacks). In this regard, it is noted that the term "malware" as used throughout the description and claims is to be understood broadly and may refer to hardware entities or software (or, more generally, machine-readable instructions) or combinations thereof which cause operations in the computing environment 10 that are neither under direct nor indirect control of the owner or operator of the computing environment 10. Rather, the operations caused by the malware may be under direct or indirect control of a third (malign) party. In particular, the operations may intentionally impede the operation of processes in the system 12, extract sensitive information from the system 12, intrude the system 12, or allow unauthorized access to the system 12. Moreover, it is noted that the term "malware detector implementation" as used throughout the description and claims is to be interpreted broadly and may refer to machine-readable instructions or a combination of machine-readable instructions and execution hardware, e.g., processor 16, which when executed by the execution hardware implement a malware detector.

[0004] The system 12 may comprise computing devices such as mobile phones, Personal Digital Assistants (PDAs), Personal Computers (PCs), laptops, tablet computers, servers, etc. The computing devices may be interconnected using wireless and/or wired transmission technologies such as, for example, Ethernet or Wireless Local Area Network (WLAN), thereby forming a local network such as a Local Area Network (LAN). The computing devices may be connected to an interface 20 (indicated as optional in FIG. 1) such as a router or a switch relaying data traffic between the computing devices and network entities of another local network or the internet. A malware detector implementation 18 or instances of the malware detector implementation 18 may reside on one, many, or all of the computing devices and may keep track of operations performed by the entities of the system 12. Each malware detector implementation 18 may detect a behavior of entities of the system 12 which indicates the presence of malware operations directed at the system 12. For example, a malware detector implementation 18 may detect patterns or irregularities in the operations performed by (or occurring in) the computing devices or the interface 20 and may interpret the detected patterns or irregularities, e.g., irregularity patterns, as symptoms of malware operations directed at the system 12. Moreover, the malware detector implementation 18 may detect patterns or irregularities in the data structures stored in the computing devices or in received or transmitted data packets and may interpret these patterns or irregularities as symptoms of malware operations directed at the system 12.

[0005] In this regard, the term "pattern" as used throughout the description and claims is to be understood broadly and may refer to a regularity in the behavior or structure of entities exhibiting the pattern. Moreover, as used throughout the description and claims, the term "pattern" is intended to encompass regularities in the occurrence or structure of irregularities in the behavior or structure of entities exhibiting the pattern. For example, an entity may transmit data packets of a certain size at a certain time interval, wherein the transmission occurs regularly (at least for some time) and thus represents a transmission pattern. When the entity starts to append to every fifth data packet a certain amount of additional payload, this may be regarded as a different transmission pattern or as a reoccurring irregularity in the original transmission pattern, i.e., a pattern of irregularities compared to a continuation of the original transmission behavior. However, although this example deals with transmissions reoccurring at a certain time interval, the patterns are not limited thereto. For example, a reoccurring transmission may represent a regularity, owing to the fact that it is reoccurring. Patterns as considered here may not be restricted to deterministic patterns, but may also comprise statistical patterns, e.g., average time intervals or probabilities of transmissions. Furthermore, even a single transmission, e.g., to a particular receiver, may represent an irregularity in a pattern of absence of such transmissions. Thus, in a more general sense, the term "pattern" may be interpreted as a behavior or structure that deviates from an intended behavior or structure in a discernible way.

[0006] The memory 14 may further store a second machine-readable instruction set which, when executed by execution hardware such as processor 16, represents a malware model implementation 22 or a plurality of malware model implementations 22 causing symptoms of malware operations directed at the system 12. In this regard, it is noted that the term "malware model implementation" as used throughout the description and claims is to be interpreted broadly and may refer to machine-readable instructions, or a combination of machine-readable instructions and execution hardware, such as processor 16, which when executed by the execution hardware implement a malware model. Different malware model implementations 22 may, for example, be generated by configuring a malware model with different parameter sets of a plurality of parameter sets. Moreover, the symptoms caused by different malware model implementations 22 may vary. That is, different malware model implementations 22 may cause different symptoms of malware operations. In this regard, it is noted that the term "malware model" as used throughout the description and claims is to be understood broadly and may refer to an abstraction of malware behavior in terms of a model, e.g., a set of rules defining malware behavior which may be implemented, for example, by a set of machine-readable instructions executed by execution hardware. In particular, the malign effects on the computing environment 10 caused by malware operations (exhibiting true malware behavior) may be partially or fully muted and the malware operations may only or substantially be modeled by their benign symptoms. More particularly, the benign symptoms caused by each malware model implementation 22 may be free from intentionally impeding operation of processes in the system 12, extracting sensitive information from the system 12, intruding the system 12, or allowing unauthorized access to the system 12.

[0007] The benign symptoms may be symptoms whose impact on the system 12 is known beforehand and which can be reversed at any time during or after their occurrence. For example, the benign symptoms may be symptoms which make use of resources of the system 12 without being directed at a particular purpose other than mimicking malware operations. E.g., a malign operation such as an exfiltration of sensitive data, i.e., an unauthorized transfer of sensitive data from the system 12, may be modeled by a transmission of non-sensitive data or even random or randomly generated data so that symptoms of data traffic comparable to real malware operations occur, but without being directed at the malign goal of exfiltrating sensitive data. Thus, in a more general sense, any operation which is under direct or indirect control of the owner or operator of the system 12 and which performs as intended by the owner or operator of the system 12 may be regarded as exposing benign symptoms. In this regard, it is noted that the term "symptoms" as used throughout the description and claims may refer to any deviation in the behavior or structure of entities from an intended behavior or structure. Accordingly, the behavior of malware may be modeled in the computing environment 10 without risking the potentially dangerous behavior of real malware. In this regard, the computing environment 10 may be an "everyday" operational computing environment 10, such as a local network of a private or public institution or a corporate unit as opposed to a computing environment dedicated to malware detector implementation 18 evaluation such as, for example, a computing environment of a malware detector implementation evaluation laboratory.

[0008] Each parameter set may relate to the symptoms caused by a respective malware model implementation 22. For example, a parameter may define a (spatial) occurrence of a symptom, a degree or rate of the occurrence of the symptom, a temporal or spatial symptom occurrence covariance, a symptom pattern, etc. In this regard, it is noted that there may be a direct or an indirect relationship between a parameter and a symptom caused by the respective malware model implementation 22. In particular, a parameter may be directly assigned to a symptom such as a data exfiltration rate. For example, the data exfiltration rate may be defined by a numeric value of the parameter. However, another parameter may define a design of the malware model implementation 22. More particularly, a malware model implementation 22 may be comprised of different engines which may be activated or deactivated. In an example, a first parameter of a first parameter set may activate a first engine while a second parameter of the first parameter set may deactivate a second engine in the malware model implementation 22. E.g., the first engine may be a data exfiltration engine while the second engine may be an unauthorized access allowing engine. In another example, a first parameter of a second parameter set may deactivate the first engine while a second parameter of the second parameter set may activate the second engine in the malware model implementation 22. In a further example, the malware model may be a set of machine-readable instructions, wherein different parameter sets define different subsets of the machine-readable instructions, each subset defining a malware model implementation 22. Different malware model implementations 22 may thus be comprised of different engines, e.g., the first or the second engine, wherein the engines of a malware model implementation 22 may be defined by the respective subset of the machine-readable instructions forming the malware model, and thus cause different symptoms.
This may allow choosing or selecting parameter sets which enable a systematic emulation of malware behavior including a potential or anticipated behavior of yet unknown malware.
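As a minimal sketch of the engine-toggling scheme described above, a parameter set might map engine names to activation flags; configuring the model with different sets then yields implementations with different active engines. All names (`MalwareModel`, `exfiltration`, `unauthorized_access`) are illustrative assumptions, not terms fixed by this disclosure.

```python
# Hypothetical sketch: a parameter set activates or deactivates "engines" of
# a malware model, yielding different malware model implementations.

class MalwareModel:
    """A malware model configurable with different parameter sets."""

    def __init__(self):
        # Each engine would produce a distinct class of benign symptoms.
        self.engines = {
            "exfiltration": False,
            "unauthorized_access": False,
        }

    def configure(self, parameter_set):
        """Generate a model implementation by toggling engines per the set."""
        for engine, active in parameter_set.items():
            if engine in self.engines:
                self.engines[engine] = active
        return self

    def active_engines(self):
        return [name for name, on in self.engines.items() if on]


# First parameter set: exfiltration engine on, access engine off.
impl_1 = MalwareModel().configure({"exfiltration": True})
# Second parameter set: the opposite configuration.
impl_2 = MalwareModel().configure({"unauthorized_access": True})
```

In a real realization each engine would be a subset of machine-readable instructions rather than a boolean flag, but the configuration interface would behave analogously.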

[0009] The symptom pattern may be a pattern in data traffic, resource accesses, resource consumption, data stored in the memory 14, or any combination thereof. Moreover, the symptom pattern may be a temporal pattern, a spatial pattern, a structural pattern, etc. For example, the symptoms may include temporal or spatial patterns in inter-arrival times of successive network packets, disk or registry accesses, load levels on resources such as the memory 14 or the processor 16, and structural patterns in received network packets or data stored in the memory 14. Furthermore, data traffic symptoms may relate to the addresses to which data traffic within or from the system 12 is sent or from which it is received, and the size, type, and content of transmissions. Thus, by varying the parameters, e.g., by configuring the malware model with different parameter sets, the behavior of known and even yet unknown malware may be modeled in a benign way, i.e., without posing a threat to the security of the computing environment 10. In particular, the malware model implementations 22 may be fitted to traces obtained from observing existing malware. Moreover, the malware model may be parameterized freely to describe hypothetical malware behavior. In particular, parameter sets may be designed by taking into account goals of the malware writer/user, e.g., a fast exfiltration of data (high data rate) while remaining undetected by a malware detector implementation 18 (e.g., due to stealthy transmissions), so that the parameter sets reflect relevant malware behavior, i.e., malware behavior that is apt for a specific purpose.

[0010] FIG. 2 is a flow diagram illustrating a process 24 of testing the effectiveness of a malware detector implementation 18 in the computing environment 10. At 26, the process starts with activating a malware detector implementation 18. After activating the malware detector implementation 18, the malware detector implementation 18 may detect malware operations in the computing environment 10, e.g., by detecting symptoms which may be caused by malware operations. To inject malware operation symptoms into the computing environment 10, the process may continue at 28 with (successively) generating different malware model implementations 22. For example, a malware model implementation 22 may be generated by configuring the malware model with a specific parameter set of a multitude of possible or available parameter sets for the malware model implementation 22 to cause specific malware operation symptoms in the computing environment 10. In particular, the malware model implementation 22 may be comprised of machine-readable instructions which are executed by entities in the computing environment 10. For example, the malware model implementation 22 may be split into several components, e.g., master and slave components distributed over entities in the computing environment 10. In another example, instances of a generated malware model implementation 22 may be distributed over entities of the computing environment 10 to act cooperatively. After configuring the malware model with the specific parameter set at 28 to generate the malware model implementation 22, the malware detector implementation 18 may detect the symptoms injected into the computing environment 10 by the malware model implementation 22 and thus indicate (assumed or alleged) malware operations in the computing environment 10.
The malware model implementation 22 (and following malware model implementations 22) may be used for a given amount of time or until a predefined goal is achieved, e.g., exfiltration of a certain amount of data, or until statistical significance of the results is achieved. After stopping the malware model implementation 22, e.g. by setting a start/stop bit via an interface of the malware model implementation 22, and storing a detection result of the malware detector implementation 18, the malware model may be configured with a different parameter set to generate a different malware model implementation 22 which causes different malware operation symptoms in the computing environment 10. The process 24 may terminate at 30 with evaluating the employed malware model implementations 22 for which the malware detector implementation 18 indicates malware operations in the computing environment 10. Based thereon, an effectiveness indicator may be assigned to the malware detector implementation 18.
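The overall loop of process 24 can be sketched as follows. The detector here is a stand-in rate threshold chosen purely for illustration; the parameter sets, symptom dictionary, and threshold value are assumptions, not part of the disclosed method.

```python
# Illustrative sketch of the testing process of FIG. 2: configure the malware
# model with successive parameter sets, record whether a (stand-in) detector
# flags each resulting implementation, and derive an effectiveness indicator.

def run_effectiveness_test(detector, parameter_sets):
    """Return the fraction of model implementations the detector flags."""
    detections = 0
    for params in parameter_sets:
        # Generate a model implementation: here, just its caused symptoms.
        symptoms = {"exfiltration_rate": params["rate"]}
        if detector(symptoms):
            detections += 1
    return detections / len(parameter_sets)


# Stand-in detector: flags any exfiltration rate above a fixed threshold.
threshold_detector = lambda s: s["exfiltration_rate"] > 100.0

parameter_sets = [{"rate": 10.0}, {"rate": 500.0}, {"rate": 1000.0}]
indicator = run_effectiveness_test(threshold_detector, parameter_sets)
# Two of three implementations are flagged, so the indicator is 2/3 here.
```

A real evaluation would run each implementation for a given time or until statistical significance is reached, as described above, rather than checking a single symptom value.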

[0011] FIG. 3 schematically illustrates the system 12 in connection with a network 32 comprising a plurality of network entities 34 and 36. In an example, a network entity 34 may implement a command and control server of the malware model implementation 22. In operation, the malware model implementation 22 may exchange data with the network entity 34, causing network traffic symptoms mimicking data exfiltration. The network traffic symptoms may be described as abstract data transmissions. The abstract data transmissions may involve a sender and a recipient (implying a direction), an amount of data with certain properties, time intervals between transmissions, etc. The caused symptoms may be explicit or implicit symptoms. Explicit symptoms may be symptoms which are generated by a malware model implementation 22 while implicit symptoms may emerge as a consequence of the injection of explicit symptoms. The explicit symptoms may be described by a stochastic process incorporated by a malware model implementation 22. The stochastic process may be defined at least in part by a probability parameter as will be described in greater detail below.
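An "abstract data transmission" as described above could be represented as a small record with a sender, a recipient (implying a direction), an amount of data, and the interval to the next transmission. The field names are illustrative assumptions.

```python
from dataclasses import dataclass

# Sketch of an abstract data transmission: the unit in which network traffic
# symptoms could be described, per the paragraph above.

@dataclass(frozen=True)
class AbstractTransmission:
    sender: str
    recipient: str
    size_bytes: int
    interval_s: float  # time until the next transmission


# Example: the malware model implementation on system 12 sending data to the
# command-and-control network entity 34 (names are placeholders).
tx = AbstractTransmission("system-12", "network-entity-34", 4096, 1.5)
```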

[0012] In network traffic, implicit symptoms may occur at lower layers in the transmission control protocol/internet protocol (TCP/IP) stack as a result of explicit symptoms at a higher layer. For example, a malware model implementation 22 may comprise an explicit description of a Hypertext Transfer Protocol (HTTP) connection to the network entity 34. Injecting the HTTP connection may, however, imply an underlying TCP/IP connection whose characteristics are determined by a specific implementation of the computing environment 10, rather than by the malware model implementation 22. Moreover, explicit symptoms may be described by a stochastic model comprising n > 1 states. Each state may define the behavior of malware in terms of symptom generation in a particular state of its lifecycle, such as discovery or data exfiltration. The behavior in each state may be defined by a set of stochastic models that define the network traffic in this state. For example, an inter-arrival stochastic model may describe times between successive data transmissions, a size stochastic model may describe the amount of data sent/received in each transmission, an address stochastic model may describe the remote network entity 34 involved in the transmission, a type stochastic model may describe the type of the transmission, etc. The resulting explicit symptoms may be described at different layers of a transmission stack, such as different layers of a TCP/IP stack. For instance, a malware model implementation 22 may specify characteristics of TCP connections or it may specify characteristics of Hypertext Transfer Protocol Secure (HTTPS) connections. The implicit symptoms may then be determined by the layer for which explicit symptoms are described.
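The n-state structure above, with per-state sub-models for inter-arrival time, size, and address, might be sketched as below. The state names, distributions, and addresses are illustrative assumptions; a real model could use any stochastic sub-models.

```python
import random

# Hedged sketch: each lifecycle state (e.g. "discovery", "exfiltration")
# carries its own stochastic sub-models for inter-arrival time, transmission
# size, and remote address, from which explicit symptoms are drawn.

class StateTrafficModel:
    def __init__(self, mean_interarrival, mean_size, addresses, rng):
        self.mean_interarrival = mean_interarrival
        self.mean_size = mean_size
        self.addresses = addresses
        self.rng = rng

    def next_transmission(self):
        """Draw one explicit symptom: (delay, size, address)."""
        delay = self.rng.expovariate(1.0 / self.mean_interarrival)
        size = max(1, int(self.rng.gauss(self.mean_size, self.mean_size / 10)))
        address = self.rng.choice(self.addresses)
        return delay, size, address


rng = random.Random(0)
states = {
    # Discovery: sparse, small transmissions to an internal address.
    "discovery": StateTrafficModel(60.0, 200, ["10.0.0.1"], rng),
    # Exfiltration: frequent, large transmissions to a remote entity.
    "exfiltration": StateTrafficModel(1.0, 64_000, ["203.0.113.5"], rng),
}
delay, size, address = states["exfiltration"].next_transmission()
```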

[0013] The network entity 34 may be Internet-based and the malware detector implementation 18 may be configured to detect malware operations within the system 12 including the interface 20 via which data traffic is exchanged with the malware model implementation 22. The malware model implementation 22 may comprise a data traffic generator which may request values for the size, address, and type of data transmissions from the stochastic models for the current state, respectively, and may generate data traffic based thereon. Moreover, the data traffic generator may request a time value from the inter-arrival time stochastic model and may wait for the time value before initiating a next transmission. Transmission sizes may be picked randomly from a uniform distribution. Addresses may be chosen randomly from a given set (specified explicitly or by means of a netmask) or may be generated by a domain-name generator. Furthermore, the traffic generator may generate data traffic using different protocols, such as User Datagram Protocol (UDP), TCP, HTTP, or HTTPS. Lower-level protocols (such as UDP and TCP) may allow for fine-grained control over the data traffic characteristics, such as individual data packet sizes. Higher-level protocols (such as HTTP or HTTPS) may support more realistic behavior, e.g., of malware utilizing encrypted HTTPS connections to evade detection.

[0014] Stochastic processes of a malware model implementation 22 may be governed by numerical parameters forming a respective parameter set. The numerical parameters may be varied to express properties of malware that is being modelled and they may give insight into the properties of the malware. For example, a malware model implementation 22 may calculate the inter-arrival times of data transmissions transmitted or received via the interface 20 by a Poisson process, i.e., the inter-arrival times may be drawn from an exponential distribution with a given rate λ. However, the stochastic processes are not limited to Poisson processes but may also include a Markov-modulated process, etc., and combinations thereof. By varying λ across a range of values, it may be evaluated when the malware detector implementation 18 becomes unable to reliably detect the symptoms of malware operations. As a malware model implementation 22 may be comprised of stochastic processes, it may be analyzed with respect to its properties. In particular, metrics such as the amount of data that may be exfiltrated within a given time frame may be computed based on the given parameters.
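The λ sweep described above can be sketched as follows: inter-arrival times are drawn from an exponential distribution for a range of rates, and a stand-in threshold detector shows at which rates the traffic would be flagged. The detector logic and the threshold are assumptions introduced for illustration only.

```python
import random

# Sketch of the lambda sweep: for each rate, draw exponentially distributed
# inter-arrival times (a Poisson process) and check whether a stand-in
# detector flags the resulting traffic density.

def mean_interarrival(lam, n, rng):
    """Empirical mean of n exponentially distributed inter-arrival times."""
    return sum(rng.expovariate(lam) for _ in range(n)) / n


def detector_flags(lam, rng, threshold=0.5, n=10_000):
    """Stand-in detector: flags traffic whose mean inter-arrival < threshold."""
    return mean_interarrival(lam, n, rng) < threshold


rng = random.Random(42)
# Sweep lambda across a range; a low rate means sparse, stealthy traffic.
results = {lam: detector_flags(lam, rng) for lam in (0.5, 1.0, 4.0, 16.0)}
# Sparse traffic (lambda = 0.5, mean interval about 2) evades this detector,
# while dense traffic (lambda = 16, mean interval about 0.06) is flagged.
```

Sweeping λ in this way locates the rate at which the detector implementation stops reliably indicating the modeled symptoms, which is the evaluation goal stated above.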

[0015] A malware model implementation 22 may alternate between a setup phase and an operation phase. In the setup phase, a first malware model implementation 22 may be generated by configuring the malware model with a first parameter set. In an example, the first parameter set may specify a list of network entities 34, 36 to connect to. During the setup phase, a Domain-Generation Algorithm (DGA) may be used for randomly generating host names. The host names may, for example, be transmitted using HTTP GET invocations. In the operation phase, the first malware model implementation 22 may connect to one of the network entities 34, 36, e.g., by using HTTPS, and may alternate between downloading command scripts (typically small amounts of data) from the network entity 34 and transmitting exfiltrated data to the network entity 34. In an example, two malware detector implementations 18 may be evaluated. In particular, the first malware model implementation 22 may be configured to exfiltrate data at a low data exfiltration rate. Due to the low data exfiltration rate, a first malware detector implementation 18 may not indicate malware operations for operation of the first malware model implementation 22. Upon configuring the malware model with a second parameter set for generating a second malware model implementation 22 with a substantially increased data exfiltration rate, the first malware detector implementation 18 may indicate malware operations for operation of the second malware model implementation 22.

[0016] As a result, the first malware detector implementation 18 may be assigned an effectiveness indicator reflecting a measure of correct indications of malware operations, i.e., one correct detection and one missed detection. For example, the effectiveness indicator of the first malware detector implementation 18 may comprise a detection probability value of 50%. Moreover, said process may be repeated with a second malware detector implementation 18. In this regard, it is to be noted that the difference between both malware detector implementations 18 may be a difference in design (i.e., different malware detectors) or a difference in configuration of a malware detector, for example a shift in patterns to observe or a different configuration in view of sensitivity. The second malware detector implementation 18 may detect malware operations for both malware model implementations 22 and may hence be awarded a high detection probability value such as 100% or close to 100%, indicating a higher effectiveness of the second malware detector implementation 18. Moreover, in a further example, the malware model may be configured with a third parameter set to generate a third malware model implementation 22 which causes no symptoms. While the first malware detector implementation 18 may not indicate malware operations for the third parameter set, the second malware detector implementation 18 may wrongly indicate malware operations, and the detection probability values assigned to the malware detector implementations 18 may be adapted accordingly. Thus, in an example, the effectiveness indicator may be a percentage of correctly indicated malware operations, although the concept is not limited thereto and other rules for calculating an effectiveness indicator may be contemplated as will be explained in greater detail below. In particular, standard indicators such as the true-positive rate, the false-positive rate, the true-negative rate, and/or the false-negative rate may be used.
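The standard indicators just mentioned can be computed from detector verdicts paired with ground truth (whether a malware model implementation was actually causing symptoms). The function below is a minimal sketch; the verdict encoding is an assumption.

```python
# Minimal sketch of the standard indicators: true-positive rate (TPR) and
# false-positive rate (FPR) from (detector_flagged, symptoms_present) pairs.

def detection_rates(verdicts):
    """verdicts: list of (detector_flagged, symptoms_present) boolean pairs."""
    tp = sum(1 for flagged, present in verdicts if flagged and present)
    fp = sum(1 for flagged, present in verdicts if flagged and not present)
    tn = sum(1 for flagged, present in verdicts if not flagged and not present)
    fn = sum(1 for flagged, present in verdicts if not flagged and present)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr


# The first detector from the example above: it misses the low-rate
# implementation (false negative), catches the high-rate one (true positive),
# and stays silent on the symptom-free third set (true negative).
tpr, fpr = detection_rates([(False, True), (True, True), (False, False)])
# This yields a TPR of 0.5 and an FPR of 0.0.
```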

[0017] Thus, malware model implementations 22 may be used to evaluate different malware detector implementations 18. Comparing different malware detector implementations 18 may allow selecting an optimum malware detector implementation 18 with regard to design and configuration in view of the computing environment 10. To this end, a malware model implementation 22 may be used to establish a ranking between different malware detector implementations 18 with respect to their effectiveness. This ranking may then be used to select the malware detector implementation 18 that is best suited to detect malware behavior or operations in a particular scenario, e.g., as defined by the computing environment 10. In particular, the ranking may allow singling out those malware detector implementations 18 that do not fulfill the required detection quality. Moreover, the evaluation may allow selecting malware detector implementations 18 which may be used as a starting point for further malware detector implementation 18 developments, e.g., in guiding the development of a malware detector implementation 18 in view of a preferred implementation or in finding malware detector implementations 18 that may be evaluated in a further evaluation process. Moreover, if two malware detector implementations 18 differ in view of configuration parameters, further (not yet evaluated) malware detector implementations 18 having configuration parameters which are at least in part within a range defined by the configuration parameters of the two malware detector implementations 18 may be evaluated. For example, if a first malware detector implementation 18 has a sensitivity level indicated by a numeric value "a" and a second malware detector implementation 18 has a sensitivity level indicated by a numeric value "b", a third malware detector implementation 18 having a sensitivity level indicated by a numeric value "c" which is between a and b may be evaluated.
In particular, further malware detector implementations 18 may be evaluated that are near a breaking point of a malware detector implementation 18 (in terms of a distance metric such as a distance of numeric values of configuration parameters), beyond which effectiveness substantially changes.
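One possible way to locate such a breaking point between two evaluated configurations, sketched here under the assumption that effectiveness changes at a single threshold along one numeric sensitivity parameter, is a bisection over the parameter range; the `evaluate` callable is a hypothetical stand-in for running a full evaluation at a given sensitivity.

```python
def find_breaking_point(evaluate, a, b, tolerance=1e-3):
    """Bisect the sensitivity range [a, b], assuming evaluate(sensitivity)
    returns one boolean outcome on one side of the breaking point and the
    other outcome beyond it. Returns a sensitivity value within `tolerance`
    of the point where the detector's effectiveness substantially changes."""
    low_result = evaluate(a)
    while b - a > tolerance:
        mid = (a + b) / 2
        if evaluate(mid) == low_result:
            a = mid  # breaking point lies above mid
        else:
            b = mid  # breaking point lies at or below mid
    return (a + b) / 2
```

Each probe of `evaluate` corresponds to configuring and evaluating one further malware detector implementation 18 between the two already-evaluated ones.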

[0019] In an example, a malware model implementation 22 may be used in a process of establishing a ranking of malware detector implementations 18 that reflects their quality in detecting malware operations according to a given set of metrics such as, for instance, a true-positive rate (TPR) or a false-positive rate (FPR). The metrics may be combined into a single numerical value by a function, e.g., a weighted sum, with the weights reflecting a relative importance of each metric. The process may then iterate over the set of all malware detector implementations 18 and emulate malware behavior using each of a set of malware model implementations 22 in turn. The process may run each malware model implementation 22 and may measure the effectiveness of the malware detector implementation 18 in detecting the symptoms produced by the malware model implementation 22, according to the given metrics.
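The combination of the metrics into a single numerical value by a weighted sum, as described above, might look as follows; the metric names and the sign convention for lower-is-better metrics such as the FPR are assumptions of this sketch.

```python
def combine_metrics(metric_values, weights):
    """Combine per-metric results (e.g., TPR, FPR) into one score via a
    weighted sum; the weights express the relative importance of each
    metric. A negative weight may be used for metrics where lower values
    are better, such as the false-positive rate."""
    return sum(weights[name] * value for name, value in metric_values.items())
```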

[0019] The metrics may be combined using the function and the result may be stored in an effectiveness indicator assigned to the malware detector implementation 18 by a classifier of the system, wherein the classifier may be implemented by machine-readable instructions executed by the processor 16. After iterating over all malware detector implementations 18, the classifier may sort the list of malware detector implementations 18 by the numerical values produced by the function. The sorting may combine metrics for different malware model implementations 22 in various ways. For instance, a weighted sum of the metrics for the malware model implementations 22 may be used to express a relative importance of each malware model implementation 22. In particular, malware model implementations 22 having a high likelihood of being relevant may be assigned higher weights than malware model implementations 22 having a low likelihood of being relevant. For example, malware model implementations 22 having a low data exfiltration rate may have a low likelihood of being relevant, as the low data exfiltration rate may not be sufficient to achieve the goal of sufficiently fast data exfiltration. The sorted list may then be provided for selecting a particular malware detector implementation 18 or for further evaluation of the malware detector implementations 18 as discussed above. The process thus provides an automated way of evaluating the effectiveness of malware detector implementations 18. Moreover, the resulting ranking may allow identifying malware detector implementations 18 that are appropriate for deployment, further development, or more extensive evaluation.
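A minimal sketch of the classifier's sorting step, assuming per-detector, per-model scores have already been measured and that the relevance of each malware model implementation is expressed as a weight; the names and data layout here are illustrative, not part of the disclosure.

```python
def rank_detectors(scores, model_weights):
    """scores: mapping detector name -> (mapping model name -> combined
    metric score for that malware model implementation).
    model_weights: relative relevance of each malware model implementation.
    Returns detector names sorted from most to least effective."""
    def weighted_total(detector):
        return sum(model_weights[model] * value
                   for model, value in scores[detector].items())
    return sorted(scores, key=weighted_total, reverse=True)
```

Detectors that perform well on highly weighted (i.e., likely relevant) model implementations rise to the top of the sorted list, which may then be used for selection or further evaluation as described above.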

LIST OF REFERENCE SIGNS

computing environment
system
memory
processor
malware detector implementation
interface
malware model implementation
process
process element
process element
process element
network
network entity
network entity




 