Title:
IOT DEVICE IDENTIFICATION WITH PACKET FLOW BEHAVIOR MACHINE LEARNING MODEL
Document Type and Number:
WIPO Patent Application WO/2023/076127
Kind Code:
A1
Abstract:
Identifying Internet of Things (IoT) devices with packet flow behavior, including by using machine learning models, is disclosed. Information associated with a network communication of an IoT device is received. A determination of whether the IoT device has previously been classified is made. In response to determining that the IoT device has not previously been classified, a determination is made that a probability match for the IoT device against a behavior signature exceeds a threshold. Based at least in part on the probability match, a classification of the IoT device is provided to a security appliance configured to apply a policy to the IoT device.

Inventors:
ZHANG JIALIANG (US)
TIAN KE (US)
ZHANG FAN (US)
Application Number:
PCT/US2022/047493
Publication Date:
May 04, 2023
Filing Date:
October 21, 2022
Assignee:
PALO ALTO NETWORKS INC (US)
International Classes:
H04L9/40
Foreign References:
US20200177485A12020-06-04
Attorney, Agent or Firm:
WAGNER, Robyn (US)
Claims:
CLAIMS

1. A system, comprising: a processor configured to: receive information associated with a network communication of an Internet of Things (IoT) device; determine whether the IoT device has previously been classified; in response to determining that the IoT device has not previously been classified, determine that a probability match for the IoT device against a behavior signature exceeds a threshold; and based at least in part on the probability match, provide a classification of the IoT device to a security appliance configured to apply a policy to the IoT device; and a memory coupled to the processor and configured to provide the processor with instructions.

2. The system of claim 1, wherein the received information includes sequence of packet length information.

3. The system of claim 1, wherein the received information includes sequence of packet inter-arrival time information.

4. The system of claim 1, wherein the received information includes transport layer security (TLS) information.

5. The system of claim 1, wherein an organizationally unique identifier (OUI) for the IoT device is not available.

6. The system of claim 1, wherein an OUI for the IoT device corresponds to a network card and wherein the IoT device is not a network card.

7. The system of claim 1, wherein an OUI for the IoT device corresponds to a network appliance and wherein the IoT device is not a network appliance.

8. The system of claim 1, wherein at least a portion of the network communication is encrypted.

9. The system of claim 1, wherein a hostname for the IoT device is not available.

10. The system of claim 1, wherein the behavior signature comprises a set of coefficients.

11. The system of claim 1, wherein the behavior signature is generated at least in part by using a machine learning model trained on features extracted from exemplary IoT devices of a particular type.

12. The system of claim 1, wherein determining that the probability match exceeds the threshold includes determining that a plurality of signatures are matched above a threshold, and selecting a highest ranking match as a result.

13. A method, comprising: receiving information associated with a network communication of an Internet of Things (IoT) device; determining whether the IoT device has previously been classified; in response to determining that the IoT device has not previously been classified, determining that a probability match for the IoT device against a behavior signature exceeds a threshold; and based at least in part on the probability match, providing a classification of the IoT device to a security appliance configured to apply a policy to the IoT device.

14. A computer program product embodied in a non-transitory computer readable medium and comprising computer instructions for: receiving information associated with a network communication of an Internet of Things (IoT) device; determining whether the IoT device has previously been classified; in response to determining that the IoT device has not previously been classified, determining that a probability match for the IoT device against a behavior signature exceeds a threshold; and based at least in part on the probability match, providing a classification of the IoT device to a security appliance configured to apply a policy to the IoT device.


Description:
IOT DEVICE IDENTIFICATION WITH PACKET FLOW BEHAVIOR MACHINE LEARNING MODEL

BACKGROUND OF THE INVENTION

[0001] Nefarious individuals attempt to compromise computer systems in a variety of ways. As one example, such individuals may embed or otherwise include malicious software (“malware”) in email attachments and transmit or cause the malware to be transmitted to unsuspecting users. When executed, the malware compromises the victim’s computer and can perform additional nefarious tasks (e.g., exfiltrating sensitive data, propagating to other systems, etc.). A variety of approaches can be used to harden computers against such and other compromises. Unfortunately, existing approaches to protecting computers are not necessarily suitable in all computing environments. Further, malware authors continually adapt their techniques to evade detection, and an ongoing need exists for improved techniques to detect malware and prevent its harm in a variety of situations.

BRIEF DESCRIPTION OF THE DRAWINGS

[0002] Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.

[0003] Figure 1 illustrates an example of an environment in which malicious activity is detected and its harm reduced.

[0004] Figure 2A illustrates an embodiment of a data appliance.

[0005] Figure 2B is a functional diagram of logical components of an embodiment of a data appliance.

[0006] Figure 2C illustrates an example event path between an IoT server and an IoT module.

[0007] Figure 2D illustrates an example of a device discovery event.

[0008] Figure 2E illustrates an example of a session event.

[0009] Figure 2F illustrates an embodiment of an IoT module.

[0010] Figure 2G illustrates an example way of implementing IoT device analytics.

[0011] Figure 3 illustrates an embodiment of a process for passively providing AAA support for an IoT device in a network.

[0012] Figures 4A-4C illustrate examples of RADIUS messages sent by an IoT server to a AAA server on behalf of an IoT device in various embodiments.

[0013] Figure 5 illustrates an embodiment of an IoT module.

[0014] Figure 6 illustrates an example of a process for classifying an IoT device.

[0015] Figures 7A and 7B illustrate example firewall rules.

[0016] Figures 8-10 illustrate portions of example interfaces.

[0017] Figure 11 illustrates an example of a process for generating a policy to apply to a communication involving an IoT device.

[0018] Figure 12 illustrates a sequence of packet inter-arrival times.

[0019] Figure 13A illustrates training data.

[0020] Figure 13B illustrates a result of training a model using the dataset shown in Figure 13A.

[0021] Figure 14A illustrates two examples of devices matching a device behavior signature.

[0022] Figure 14B illustrates two examples of devices not matching a device behavior signature.

[0023] Figure 15 illustrates an example of a process for classifying an IoT device.

DETAILED DESCRIPTION

[0024] The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.

[0025] A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.

[0026] I. OVERVIEW

[0027] A firewall generally protects networks from unauthorized access while permitting authorized communications to pass through the firewall. A firewall is typically a device, a set of devices, or software executed on a device that provides a firewall function for network access. For example, a firewall can be integrated into operating systems of devices (e.g., computers, smart phones, or other types of network communication capable devices). A firewall can also be integrated into or executed as one or more software applications on various types of devices, such as computer servers, gateways, network/routing devices (e.g., network routers), and data appliances (e.g., security appliances or other types of special purpose devices), and in various implementations, certain operations can be implemented in special purpose hardware, such as an ASIC or FPGA.

[0028] Firewalls typically deny or permit network transmission based on a set of rules. These sets of rules are often referred to as policies (e.g., network policies or network security policies). For example, a firewall can filter inbound traffic by applying a set of rules or policies to prevent unwanted outside traffic from reaching protected devices. A firewall can also filter outbound traffic by applying a set of rules or policies (e.g., allow, block, monitor, notify or log, and/or other actions can be specified in firewall rules or firewall policies, which can be triggered based on various criteria, such as are described herein). A firewall can also filter local network (e.g., intranet) traffic by similarly applying a set of rules or policies.

[0029] Security devices (e.g., security appliances, security gateways, security services, and/or other security devices) can include various security functions (e.g., firewall, anti-malware, intrusion prevention/detection, Data Loss Prevention (DLP), and/or other security functions), networking functions (e.g., routing, Quality of Service (QoS), workload balancing of network related resources, and/or other networking functions), and/or other functions. For example, routing functions can be based on source information (e.g., IP address and port), destination information (e.g., IP address and port), and protocol information.

[0030] A basic packet filtering firewall filters network communication traffic by inspecting individual packets transmitted over a network (e.g., packet filtering firewalls or first generation firewalls, which are stateless packet filtering firewalls). Stateless packet filtering firewalls typically inspect the individual packets themselves and apply rules based on the inspected packets (e.g., using a combination of a packet’s source and destination address information, protocol information, and a port number).
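
By way of illustration only, the following is a minimal sketch of stateless packet filtering of the kind described above. The rule fields, the wildcard convention, and the first-match-wins semantics are common conventions assumed for the example, not details taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    protocol: str   # e.g., "tcp" or "udp"
    dst_port: int

@dataclass
class Rule:
    action: str             # "allow" or "deny"
    protocol: str = "*"     # "*" matches any value
    src_ip: str = "*"
    dst_ip: str = "*"
    dst_port: int | str = "*"

def matches(rule: Rule, pkt: Packet) -> bool:
    # A rule matches when every non-wildcard field equals the packet's field.
    return all(
        field == "*" or field == value
        for field, value in (
            (rule.protocol, pkt.protocol),
            (rule.src_ip, pkt.src_ip),
            (rule.dst_ip, pkt.dst_ip),
            (rule.dst_port, pkt.dst_port),
        )
    )

def filter_packet(rules: list[Rule], pkt: Packet, default: str = "deny") -> str:
    # First matching rule wins; otherwise fall back to the default action.
    for rule in rules:
        if matches(rule, pkt):
            return rule.action
    return default

rules = [
    Rule(action="allow", protocol="tcp", dst_port=443),  # permit HTTPS
    Rule(action="deny"),                                  # explicit deny-all
]
print(filter_packet(rules, Packet("10.0.0.5", "93.184.216.34", "tcp", 443)))  # allow
```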

[0031] Application firewalls can also perform application layer filtering (e.g., application layer filtering firewalls or second generation firewalls, which work on the application level of the TCP/IP stack). Application layer filtering firewalls or application firewalls can generally identify certain applications and protocols (e.g., web browsing using HyperText Transfer Protocol (HTTP), a Domain Name System (DNS) request, a file transfer using File Transfer Protocol (FTP), and various other types of applications and other protocols, such as Telnet, DHCP, TCP, UDP, and TFTP (GSS)). For example, application firewalls can block unauthorized protocols that attempt to communicate over a standard port (e.g., an unauthorized/out of policy protocol attempting to sneak through by using a nonstandard port for that protocol can generally be identified using application firewalls).

[0032] Stateful firewalls can also perform state-based packet inspection in which each packet is examined within the context of a series of packets associated with that network transmission’s flow of packets. This firewall technique is generally referred to as a stateful packet inspection as it maintains records of all connections passing through the firewall and is able to determine whether a packet is the start of a new connection, a part of an existing connection, or is an invalid packet. For example, the state of a connection can itself be one of the criteria that triggers a rule within a policy.
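
A minimal sketch of the connection-tracking idea behind stateful inspection follows, assuming a connection table keyed by the flow 5-tuple; the packet field names are illustrative placeholders.

```python
# Connection table keyed by the flow 5-tuple so later packets can be evaluated
# in the context of the connection they belong to.
conn_table: dict[tuple, str] = {}

def five_tuple(pkt: dict) -> tuple:
    return (pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"], pkt["proto"])

def inspect(pkt: dict) -> str:
    key = five_tuple(pkt)
    if key in conn_table:
        return "allow"  # part of an existing, already-permitted connection
    if pkt["proto"] == "tcp" and pkt.get("flags") == "SYN":
        conn_table[key] = "new"      # start of a new connection; policy rules apply here
        return "evaluate-policy"
    return "drop"  # neither a known connection nor a valid connection start
```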

[0033] Advanced or next generation firewalls can perform stateless and stateful packet filtering and application layer filtering as discussed above. Next generation firewalls can also perform additional firewall techniques. For example, certain newer firewalls sometimes referred to as advanced or next generation firewalls can also identify users and content (e.g., next generation firewalls). In particular, certain next generation firewalls are expanding the list of applications that these firewalls can automatically identify to thousands of applications. Examples of such next generation firewalls are commercially available from Palo Alto Networks, Inc. (e.g., Palo Alto Networks’ PA Series firewalls). For example, Palo Alto Networks’ next generation firewalls enable enterprises to identify and control applications, users, and content — not just ports, IP addresses, and packets — using various identification technologies, such as the following: APP-ID for accurate application identification, User-ID for user identification (e.g., by user or user group), Content-ID for real-time content scanning (e.g., controlling web surfing and limiting data and file transfers), and Device-ID (e.g., for loT device type identification). These identification technologies allow enterprises to securely enable application usage using business-relevant concepts, instead of following the traditional approach offered by traditional port-blocking firewalls. Also, special purpose hardware for next generation firewalls (implemented, for example, as dedicated appliances) generally provides higher performance levels for application inspection than software executed on general purpose hardware (e.g., such as security appliances provided by Palo Alto Networks, Inc., which use dedicated, function specific processing that is tightly integrated with a single-pass software engine to maximize network throughput while minimizing latency).

[0034] Advanced or next generation firewalls can also be implemented using virtualized firewalls. Examples of such next generation firewalls are commercially available from Palo Alto Networks, Inc. (e.g., Palo Alto Networks’ VM Series firewalls, which support various commercial virtualized environments, including, for example, VMware® ESXi™ and NSX™, Citrix® Netscaler SDX™, KVM/OpenStack (Centos/RHEL, Ubuntu®), and Amazon Web Services (AWS)). For example, virtualized firewalls can support similar or the exact same next-generation firewall and advanced threat prevention features available in physical form factor appliances, allowing enterprises to safely enable applications flowing into, and across their private, public, and hybrid cloud computing environments. Automation features such as VM monitoring, dynamic address groups, and a REST-based API allow enterprises to proactively monitor VM changes dynamically feeding that context into security policies, thereby eliminating the policy lag that may occur when VMs change.

[0035] II. EXAMPLE ENVIRONMENT

[0036] Figure 1 illustrates an example of an environment in which malicious activity is detected and its harm reduced. In the example shown in Figure 1, client devices 104-108 are a laptop computer, a desktop computer, and a tablet (respectively) present in an enterprise network 110 of a hospital (also referred to as “Acme Hospital”). Data appliance 102 is configured to enforce policies regarding communications between client devices, such as client devices 104 and 106, and nodes outside of enterprise network 110 (e.g., reachable via external network 118).

[0037] Examples of such policies include ones governing traffic shaping, quality of service, and routing of traffic. Other examples of policies include security policies such as ones requiring the scanning for threats in incoming (and/or outgoing) email attachments, website content, files exchanged through instant messaging programs, and/or other file transfers. In some embodiments, data appliance 102 is also configured to enforce policies with respect to traffic that stays within enterprise network 110.

[0038] Network 110 also includes a directory service 154 and an Authentication, Authorization, and Accounting (AAA) server 156. In the example shown in Figure 1, directory service 154 (also referred to as an identity provider or domain controller) makes use of the Lightweight Directory Access Protocol (LDAP) or other appropriate protocols. Directory service 154 is configured to manage user identity and credential information. One example of directory service 154 is a Microsoft Active Directory server. Other types of systems can also be used instead of an Active Directory server, such as a Kerberos-based system, and the techniques described herein adapted accordingly. In the example shown in Figure 1, AAA server 156 is a network admission control (NAC) server. AAA server 156 is configured to authenticate wired, wireless, and VPN users and devices to a network, evaluate and remediate a device for policy compliance before permitting access to the network, differentiate access based on roles, and then audit and report on who is on the network. One example of AAA server 156 is a Cisco Identity Services Engine (ISE) server that makes use of the Remote Authentication Dial-In User Service (RADIUS). Other types of AAA servers can be used in conjunction with the techniques described herein, including ones that use protocols other than RADIUS.

[0039] In various embodiments, data appliance 102 is configured to listen to communications (e.g., passively monitor messages) to/from directory service 154 and/or AAA server 156. In various embodiments, data appliance 102 is configured to communicate with (i.e., actively communicate messages with) directory service 154 and/or AAA server 156. In various embodiments, data appliance 102 is configured to communicate with an orchestrator (not pictured) that communicates with (e.g., actively communicates messages with) various network elements such as directory service 154 and/or AAA server 156. Other types of servers can also be included in network 110 and can communicate with data appliance 102 as applicable, and directory service 154 and/or AAA server 156 can also be omitted from network 110 in various embodiments.

[0040] While depicted in Figure 1 as having a single data appliance 102, a given network environment (e.g., network 110) can include multiple embodiments of data appliances, whether operating individually or in concert. Similarly, while the term “network” is generally referred to herein for simplicity in the singular (e.g., as “network 110”), the techniques described herein can be deployed in a variety of network environments of various sizes and topologies, comprising various mixes of networking technologies (e.g., virtual and physical), using various networking protocols (e.g., TCP and UDP) and infrastructure (e.g., switches and routers) across various network layers, as applicable.

[0041] Data appliance 102 can be configured to work in cooperation with a remote security platform 140. Security platform 140 can provide a variety of services, including performing static and dynamic analysis on malware samples (e.g., via sample analysis module 124), and providing a list of signatures of known-malicious files, domains, etc., to data appliances, such as data appliance 102 as part of a subscription. As will be described in more detail below, security platform 140 can also provide information (e.g., via IoT module 138) associated with the discovery, classification, management, etc., of IoT devices present within a network such as network 110. In various embodiments, signatures, results of analysis, and/or additional information (e.g., pertaining to samples, applications, domains, etc.) is stored in database 160. In various embodiments, security platform 140 comprises one or more dedicated commercially available hardware servers (e.g., having multi-core processor(s), 32G+ of RAM, gigabit network interface adaptor(s), and hard drive(s)) running typical server-class operating systems (e.g., Linux). Security platform 140 can be implemented across a scalable infrastructure comprising multiple such servers, solid state drives or other storage 158, and/or other applicable high-performance hardware. Security platform 140 can comprise several distributed components, including components provided by one or more third parties. For example, portions or all of security platform 140 can be implemented using the Amazon Elastic Compute Cloud (EC2) and/or Amazon Simple Storage Service (S3). Further, as with data appliance 102, whenever security platform 140 is referred to as performing a task, such as storing data or processing data, it is to be understood that a sub-component or multiple sub-components of security platform 140 (whether individually or in cooperation with third party components) may cooperate to perform that task. As examples, security platform 140 can perform static/dynamic analysis (e.g., via sample analysis module 124) and/or IoT device functionality (e.g., via IoT module 138) in cooperation with one or more virtual machine (VM) servers. An example of a virtual machine server is a physical machine comprising commercially available server-class hardware (e.g., a multi-core processor, 32+ Gigabytes of RAM, and one or more Gigabit network interface adapters) that runs commercially available virtualization software, such as VMware ESXi, Citrix XenServer, or Microsoft Hyper-V. In some embodiments, the virtual machine server is omitted. Further, a virtual machine server may be under the control of the same entity that administers security platform 140, but may also be provided by a third party. As one example, the virtual machine server can rely on EC2, with the remainder portions of security platform 140 provided by dedicated hardware owned by and under the control of the operator of security platform 140.

[0042] An embodiment of a data appliance is shown in Figure 2A. The example shown is a representation of physical components that are included in data appliance 102, in various embodiments. Specifically, data appliance 102 includes a high performance multicore Central Processing Unit (CPU) 202 and Random Access Memory (RAM) 204. Data appliance 102 also includes a storage 210 (such as one or more hard disks or solid state storage units). In various embodiments, data appliance 102 stores (whether in RAM 204, storage 210, and/or other appropriate locations) information used in monitoring enterprise network 110 and implementing disclosed techniques. Examples of such information include application identifiers, content identifiers, user identifiers, requested URLs, IP address mappings, policy and other configuration information, signatures, hostname/URL categorization information, malware profiles, machine learning models, IoT device classification information, etc. Data appliance 102 can also include one or more optional hardware accelerators. For example, data appliance 102 can include a cryptographic engine 206 configured to perform encryption and decryption operations, and one or more Field Programmable Gate Arrays (FPGAs) 208 configured to perform matching, act as network processors, and/or perform other tasks.

[0043] Functionality described herein as being performed by data appliance 102 can be provided/implemented in a variety of ways. For example, data appliance 102 can be a dedicated device or set of devices. A given network environment may include multiple data appliances, each of which may be configured to provide services to a particular portion or portions of a network, may cooperate to provide services to a particular portion or portions of a network, etc. The functionality provided by data appliance 102 can also be integrated into or executed as software on a general purpose computer, a computer server, a gateway, and/or a network/routing device. In some embodiments, at least some functionality described as being provided by data appliance 102 is instead (or in addition) provided to a client device (e.g., client device 104 or client device 106) by software executing on the client device. Functionality described herein as being performed by data appliance 102 can also be performed at least partially by or in cooperation with security platform 140, and/or functionality described herein as being performed by security platform 140 can also be performed at least partially by or in cooperation with data appliance 102, as applicable. As one example, various functionality described as being performed by IoT module 138 can be performed by embodiments of IoT server 134.

[0044] Whenever data appliance 102 is described as performing a task, a single component, a subset of components, or all components of data appliance 102 may cooperate to perform the task. Similarly, whenever a component of data appliance 102 is described as performing a task, a subcomponent may perform the task and/or the component may perform the task in conjunction with other components. In various embodiments, portions of data appliance 102 are provided by one or more third parties. Depending on factors such as the amount of computing resources available to data appliance 102, various logical components and/or features of data appliance 102 may be omitted and the techniques described herein adapted accordingly. Similarly, additional logical components/features can be included in embodiments of data appliance 102 as applicable. One example of a component included in data appliance 102 in various embodiments is an application identification engine which is configured to identify an application (e.g., using various application signatures for identifying applications based on packet flow analysis). For example, the application identification engine can determine what type of traffic a session involves, such as Web Browsing - Social Networking; Web Browsing - News; SSH; and so on. Another example of a component included in data appliance 102 in various embodiments is an IoT server 134, described in more detail below. IoT server 134 can take a variety of forms, including as a standalone server (or set of servers), whether physical or virtualized, and can also be collocated with/incorporated into data appliance 102 as applicable (e.g., as shown in Figure 1).

[0045] Figure 2B is a functional diagram of logical components of an embodiment of a data appliance. The example shown is a representation of logical components that can be included in data appliance 102 in various embodiments. Unless otherwise specified, various logical components of data appliance 102 are generally implementable in a variety of ways, including as a set of one or more scripts (e.g., written in Java, Python, etc., as applicable).

[0046] As shown, data appliance 102 comprises a firewall, and includes a management plane 212 and a data plane 214. The management plane is responsible for managing user interactions, such as by providing a user interface for configuring policies and viewing log data. The data plane is responsible for managing data, such as by performing packet processing and session handling.

[0047] Network processor 216 is configured to receive packets from client devices, such as client device 108, and provide them to data plane 214 for processing. Whenever flow module 218 identifies packets as being part of a new session, it creates a new session flow. Subsequent packets will be identified as belonging to the session based on a flow lookup. If applicable, SSL decryption is applied by SSL decryption engine 220. Otherwise, processing by SSL decryption engine 220 is omitted. Decryption engine 220 can help data appliance 102 inspect and control SSL/TLS and SSH encrypted traffic, and thus help to stop threats that might otherwise remain hidden in encrypted traffic. Decryption engine 220 can also help prevent sensitive content from leaving enterprise network 110. Decryption can be controlled (e.g., enabled or disabled) selectively based on parameters such as: URL category, traffic source, traffic destination, user, user group, and port. In addition to decryption policies (e.g., that specify which sessions to decrypt), decryption profiles can be assigned to control various options for sessions controlled by the policy. For example, the use of specific cipher suites and encryption protocol versions can be required.
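
The selective-decryption decision described above might be sketched as follows; the URL categories, profile fields, and cipher names used here are hypothetical placeholders chosen for illustration, not values from the specification.

```python
from dataclasses import dataclass

@dataclass
class Session:
    src_user: str
    url_category: str
    dst_port: int

# Illustrative decryption policy: which sessions to decrypt, plus a profile that
# constrains protocol versions and ciphers for the sessions that are decrypted.
DECRYPT_CATEGORIES = {"unknown", "web-based-email", "social-networking"}
NO_DECRYPT_CATEGORIES = {"financial-services", "health-and-medicine"}  # privacy carve-outs

def decryption_decision(session: Session) -> dict:
    if session.url_category in NO_DECRYPT_CATEGORIES:
        return {"decrypt": False}
    if session.url_category in DECRYPT_CATEGORIES or session.dst_port == 443:
        # Decryption profile assigned by the matching policy rule.
        return {"decrypt": True, "min_tls_version": "1.2",
                "allowed_ciphers": ["AES128-GCM", "AES256-GCM"]}
    return {"decrypt": False}

print(decryption_decision(Session("alice", "social-networking", 443)))
```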

[0048] Application identification (APP-ID) engine 222 is configured to determine what type of traffic a session involves. As one example, application identification engine 222 can recognize a GET request in received data and conclude that the session requires an HTTP decoder. In some cases, e.g., a web browsing session, the identified application can change, and such changes will be noted by data appliance 102. For example, a user may initially browse to a corporate Wiki (classified based on the URL visited as “Web Browsing - Productivity”) and then subsequently browse to a social networking site (classified based on the URL visited as “Web Browsing - Social Networking”). Different types of protocols have corresponding decoders.

[0049] Based on the determination made by application identification engine 222, the packets are sent, by threat engine 224, to an appropriate decoder configured to assemble packets (which may be received out of order) into the correct order, perform tokenization, and extract out information. Threat engine 224 also performs signature matching to determine what should happen to the packet. As needed, SSL encryption engine 226 can reencrypt decrypted data. Packets are forwarded using a forward module 228 for transmission (e.g., to a destination).

[0050] As also shown in Figure 2B, policies 232 are received and stored in management plane 212. Policies can include one or more rules, which can be specified using domain and/or host/server names, and rules can apply one or more signatures or other matching criteria or heuristics, such as for security policy enforcement for subscriber/IP flows based on various extracted parameters/information from monitored session traffic flows. An interface (I/F) communicator 230 is provided for management communications (e.g., via (REST) APIs, messages, or network protocol communications or other communication mechanisms). Policies 232 can also include policies for managing communications involving IoT devices.

[0051] III. IOT DEVICE DISCOVERY AND IDENTIFICATION

[0052] Returning to Figure 1, suppose that a malicious individual (e.g., using system 120) has created malware 130. The malicious individual hopes that vulnerable client devices will execute a copy of malware 130, compromising the client device, and causing the client device to become a bot in a botnet. The compromised client device can then be instructed to perform tasks (e.g., cryptocurrency mining, participating in denial of service attacks, and propagating to other vulnerable client devices) and to report information or otherwise exfiltrate data to an external entity (e.g., command and control (C&C) server 150), as well as to receive instructions from C&C server 150, as applicable.

[0053] Some client devices depicted in Figure 1 are commodity computing devices typically used within an enterprise organization. For example, client devices 104, 106, and 108 each execute typical operating systems (e.g., macOS, Windows, Linux, Android, etc.). Such commodity computing devices are often provisioned and maintained by administrators (e.g., as company-issued laptops, desktops, and tablets, respectively) and often operated in conjunction with user accounts (e.g., managed by a directory service provider (also referred to as a domain controller) configured with user identity and credential information). As one example, an employee Alice might be issued laptop 104 which she uses to access her ACME-related email and perform various ACME-related tasks. Other types of client devices (referred to herein generally as Internet of Things or IoT devices) are increasingly also present in networks and are often “unmanaged” by the IT department. Some such devices (e.g., teleconferencing devices) may be found across a variety of different types of enterprises (e.g., as IoT whiteboards 144 and 146). Such devices may also be vertical specific. For example, infusion pumps and computerized tomography scanners (e.g., CT scanner 112) are examples of IoT devices that may be found within a healthcare enterprise network (e.g., network 110), and robotic arms are an example of devices that may be found in a manufacturing enterprise network. Further, consumer-oriented IoT devices (e.g., cameras) may also be present in an enterprise network. As with commodity computing devices, IoT devices present within a network may communicate with resources that are internal or external to such networks (or both, as applicable).

[0054] As with commodity computing devices, IoT devices are a target of nefarious individuals. Unfortunately, the presence of IoT devices in a network can present several unique security/administrative challenges. IoT devices are often low-power devices or special purpose devices and are often deployed without the knowledge of network administrators. Even where known to such administrators, it may not be possible to install endpoint protection software or agents on IoT devices. IoT devices may be managed by and communicate solely/directly with third party cloud infrastructure (e.g., with industrial thermometer 152 communicating directly with cloud infrastructure 126) using proprietary (or otherwise non-standard) protocols. This can confound attempts to monitor network traffic in and out of such devices to make decisions about when a threat or attack is happening against the device. Further, some IoT devices (e.g., in a healthcare environment) are mission critical (e.g., a network connected surgical system). Unfortunately, compromise of an IoT device (e.g., by malware 130) or the misapplication of security policies against traffic associated with an IoT device can have potentially catastrophic implications. Using techniques described herein, the security of heterogeneous networks that include IoT devices can be improved and the harms posed to such networks can be reduced.

[0055] In various embodiments, data appliance 102 includes an IoT server 134. IoT server 134 is configured to identify IoT devices within a network (e.g., network 110), in some embodiments, in cooperation with IoT module 138 of security platform 140. Such identification can be used, e.g., by data appliance 102, to help make and enforce policies regarding traffic associated with IoT devices, and to enhance the functionality of other elements of network 110 (e.g., providing contextual information to AAA 156). In various embodiments, IoT server 134 incorporates one or more network sensors configured to passively sniff/monitor traffic. One example way to provide such network sensor functionality is as a tap interface or switch mirror port. Other approaches to monitoring traffic can also be used (in addition or instead) as applicable.

[0056] In various embodiments, IoT server 134 is configured to provide log or other data (e.g., collected from passively monitoring network 110) to IoT module 138 (e.g., via frontend 142). Figure 2C illustrates an example event path between an IoT server and an IoT module. IoT server 134 sends device discovery events and session events to IoT module 138. An example discovery event and a session event are illustrated in Figures 2D and 2E, respectively. In various embodiments, discovery events are sent by IoT server 134 whenever it observes a packet that can uniquely identify or confirm the identity of a device (e.g., whenever a DHCP, UPNP, or SMB packet is observed). Each session that a device has (with other nodes, whether inside or outside the device’s network) is described within a session event that summarizes information about the session (e.g., source/destination information, number of packets received/sent, etc.). As applicable, multiple session events can be batched together by IoT server 134 prior to sending to IoT module 138. In the example shown in Figure 2E, two sessions are included. IoT module 138 provides IoT server 134 with device classification information via device verdict events (234).

[0057] One example way of implementing IoT module 138 is using a microservices-based architecture. IoT module 138 can also be implemented using different programming languages, databases, hardware, and software environments, as applicable, and/or as services that are messaging enabled, bounded by contexts, autonomously developed, independently deployable, decentralized, and built and released with automated processes. One task performed by IoT module 138 is to identify IoT devices in the data provided by IoT server 134 (and provided by other embodiments of data appliances such as data appliances 136 and 148) and to provide additional contextual information about those devices (e.g., back to the respective data appliances).

[0058] Figure 2F illustrates an embodiment of an IoT module. Region 295 depicts a set of Spark Applications that run on intervals (e.g., every five minutes, every hour, and every day) across the data of all tenants. Region 297 depicts a Kafka message bus. Session event messages received by IoT module 138 (e.g., from IoT server 134) bundle together multiple events as observed at IoT server 134 (e.g., in order to conserve bandwidth). Transformation module 236 is configured to flatten the received session events into individual events and publish them at 250. The flattened events are aggregated by aggregation module 238 using a variety of different aggregation rules. An example rule is “for the time interval (e.g., 5 minutes), aggregate all event data for a specific device and each (APP-ID) application it used.” Another example rule is “for the time interval (e.g., 1 hour), aggregate all event data for a particular device communicating with a particular destination IP address.” For each rule, aggregation engine 238 tracks a list of attributes that need to be aggregated (e.g., a list of applications used by a device or a list of destination IP addresses). Feature extraction module 240 extracts features (252) from the attributes. Analytics module 242 uses the extracted features to perform device classification (e.g., using supervised and unsupervised learning), the results of which (254) are used to power other types of analytics (e.g., via operational intelligence module 244, threat analytics module 246, and anomaly detection module 248). Operational intelligence module 244 provides analytics related to the OT framework and operational or business intelligence (e.g., how a device is being used). Alerts (256) can be generated based on results of the analytics. In various embodiments, MongoDB 258 is used to store aggregated data and feature values. Background services 262 receive data aggregated by Spark applications and write data to MongoDB 258. API Server 260 pulls and merges data from MongoDB 258 to serve requests received from Front End 142.
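
As a rough sketch of the first aggregation rule quoted above (per 5-minute window, per device, per application), the example below uses illustrative event field names (device_id, app_id, packet_count, etc.) that stand in for the actual session-event schema, which is not reproduced here.

```python
from collections import defaultdict

WINDOW_SECONDS = 5 * 60  # the 5-minute interval from the example rule

def window_of(ts: float) -> int:
    return int(ts // WINDOW_SECONDS)

def aggregate(events: list[dict]) -> dict:
    # Bucket flattened session events by (time window, device, application).
    buckets = defaultdict(lambda: {"packets": 0, "bytes": 0, "sessions": 0})
    for ev in events:
        key = (window_of(ev["timestamp"]), ev["device_id"], ev["app_id"])
        buckets[key]["packets"] += ev["packet_count"]
        buckets[key]["bytes"] += ev["byte_count"]
        buckets[key]["sessions"] += 1
    return dict(buckets)

events = [
    {"timestamp": 0, "device_id": "dev-1", "app_id": "dns", "packet_count": 4, "byte_count": 512},
    {"timestamp": 90, "device_id": "dev-1", "app_id": "dns", "packet_count": 2, "byte_count": 256},
]
print(aggregate(events))  # both events fall into the same 5-minute bucket
```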

[0059] Figure 2G illustrates an example way of implementing IoT device identification analytics (e.g., within IoT module 138 as an embodiment of analytics module 242 and related elements). Discovery events and session events (e.g., as shown in Figures 2D and 2E, respectively) are received as raw data 264 on a message bus as a Kafka topic (and are also stored in storage 158). Features are extracted by feature engine 276 (which can, for example, be implemented using Spark/MapReducer). The raw data is enriched (266) with additional contextual information by security platform 140, such as geolocation information (e.g., of the source/destination addresses). During metadata feature extraction (268), features such as the number of packets sent within a time interval from an IP address, the number of applications used by a particular device during the time interval, and the number of IP addresses contacted by the device during the time interval are constructed. The features are both passed (e.g., on a message bus) in realtime to inline analytics engine 272 (e.g., in JSON format) and stored (e.g., in feature database 270 in an appropriate format such as Apache Parquet/DataFrame) for subsequent querying (e.g., during offline modeling 299).
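
A minimal sketch of computing the metadata features named above (packet count, distinct applications, and distinct destination IP addresses per interval) for one device is shown below; the event field names are assumptions, not the actual schema.

```python
def extract_metadata_features(events: list[dict], device_id: str) -> dict:
    # Per-interval metadata features for a single device, given pre-windowed events.
    device_events = [ev for ev in events if ev["device_id"] == device_id]
    return {
        "packet_count": sum(ev["packet_count"] for ev in device_events),
        "distinct_apps": len({ev["app_id"] for ev in device_events}),
        "distinct_dst_ips": len({ev["dst_ip"] for ev in device_events}),
    }
```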

[0060] In addition to features built from metadata, a second type of features can be built by IoT module 138 (274), referred to herein as analytics features. An example analytics feature is one built over time based on time-series data, using aggregate data. Analytics features are similarly passed in realtime to analytics engine 272 and stored in feature database 270.

[0061] Inline analytics engine 272 receives features on a message bus via a message handler. One task performed is activity classification (278), which attempts to identify activities (such as file download, login/authentication process, or disk backup activity) associated with the session based on the received feature values/session information and attaches any applicable tags. One way of implementing activity classification 278 is via a neural network-based multi-layer perceptron combined with a convolutional neural network.
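
The specification describes activity classification as a multi-layer perceptron combined with a convolutional neural network; the following is a simplified, MLP-only stand-in using scikit-learn and placeholder data, intended only to show the shape of the interface (features in, activity tag out).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Feature vectors per session (e.g., the metadata/analytics features above);
# labels are activity tags. Both are random placeholders here.
X_train = np.random.rand(200, 8)
y_train = np.random.choice(["file-download", "authentication", "disk-backup"], size=200)

activity_clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
activity_clf.fit(X_train, y_train)

session_features = np.random.rand(1, 8)        # placeholder for a live session
print(activity_clf.predict(session_features))  # e.g., ['disk-backup']
```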

[0062] Suppose, as a result of activity classification, it is determined that a particular device is engaging in printing activities (i.e., using printing protocols) and is also periodically contacting resources owned by HP (e.g., to check for updates by calling an HP URL and using it to report status information). In various embodiments, the classification information is passed to both a clustering process (unsupervised) and a prediction process (supervised). If either process results in a successful classification of the device, the classification is stored in device database 286.

[0063] A device can be clustered, by stage one clustering engine 280, into multiple clusters (e.g., acts like a printer, acts like an HP device, etc.) based on its attributes and other behavior patterns. One way of implementing clustering engine 280 is using an extreme gradient boosting framework (e.g., XGB). The stage one classifier can be useful for classifying devices that have not previously been seen but are similar to existing known devices (e.g., a new vendor of thermostats begins selling thermostat devices that behave similarly to known thermostats).
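
One hedged way to picture a stage one engine built on an extreme gradient boosting framework is as one gradient-boosted model per behavior profile ("acts like a printer," "acts like an HP device," etc.), with a device allowed to fall into several clusters at once. The profiles, features, and labels below are placeholders, not the actual models.

```python
import numpy as np
from xgboost import XGBClassifier

# One gradient-boosted binary model per behavior profile; a device may match several.
profiles = ["printer-like", "hp-device-like", "thermostat-like"]
models = {}
for name in profiles:
    X = np.random.rand(500, 12)            # placeholder feature vectors
    y = np.random.randint(0, 2, size=500)  # placeholder profile-membership labels
    models[name] = XGBClassifier(n_estimators=50, max_depth=4).fit(X, y)

def cluster_device(features: np.ndarray, threshold: float = 0.5) -> list[str]:
    # Return every profile whose model scores the device above the threshold.
    return [name for name, m in models.items()
            if m.predict_proba(features.reshape(1, -1))[0, 1] >= threshold]

print(cluster_device(np.random.rand(12)))
```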

[0064] As shown in Figure 2G, activity classification information is also provided to a set of classifiers 282 and a prediction is performed based on the provided features for the device. Two possibilities can occur. In a first scenario, it is determined that there is a high probability that the device matches a known device profile (i.e., a high confidence score). If so, information about the device is provided to a stage two classifier (284) that makes a final verdict for the device’s identification (e.g., using the information it was provided and any additional applicable contextual information) and updates device database 286 accordingly. One way of implementing a stage two classifier is using a gradient boosting framework. In a second scenario, suppose the confidence score is low (e.g., the device matches both an HP printer and an HP laptop with 50% confidence). In this scenario, the information determined by classifiers 282 can be provided to clustering engine 280 as additional information usable in clustering.
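
The routing between the two scenarios above might be sketched as follows; the 0.8 cutoff is an assumed value chosen for illustration, not one given in the specification.

```python
def route_prediction(profile_probs: dict[str, float],
                     high_confidence: float = 0.8) -> dict:
    # profile_probs: per-device-profile match probabilities from the classifier set.
    best_profile, best_prob = max(profile_probs.items(), key=lambda kv: kv[1])
    if best_prob >= high_confidence:
        # Scenario one: hand off to the stage two classifier for a final verdict.
        return {"route": "stage-two", "candidate": best_profile, "confidence": best_prob}
    # Scenario two (e.g., 50% printer vs. 50% laptop): feed back into clustering.
    return {"route": "clustering", "candidates": profile_probs}

print(route_prediction({"hp-printer": 0.5, "hp-laptop": 0.5}))
```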

[0065] Also shown in Figure 2G is an offline modeling module 299. Offline modeling module 299 is contrasted with inline analytics engine 272 as it is not time constrained (whereas inline analytics engine 272 attempts to provide device classification information in realtime (e.g., as message 234)). Periodically (e.g., once per day or once per week), offline modeling module 299 (implemented, e.g., using Python) rebuilds models used by inline analytics module 272. Activity modeling engine 288 builds models for activity classifier 278, which are also used for device type models (296) which are used by classifiers for device identification during inline analytics. Baseline modeling engine 290 builds models of baseline behaviors of device models, which are also used when modeling specific types of device anomalies (292) and specific types of threats (294), such as a kill chain. The generated models are stored, in various embodiments, in model database 298.

[0066] IV. NETWORK ENTITY ID AAA

[0067] Suppose, as was previously mentioned, Alice was issued a laptop 104 by ACME. Various components of network 110 will cooperate to authenticate Alice’s laptop as she uses it to access various resources. As one example, when Alice connects laptop 104 to a wireless access point located within network 110 (not pictured), the wireless access point may communicate (whether directly or indirectly) with AAA server 156 while provisioning network access. As another example, when Alice uses laptop 104 to access her ACME email, laptop 104 may communicate (whether directly or indirectly) with directory service 154 while fetching her inbox, etc. As a commodity laptop running a commodity operating system, laptop 104 is able to generate appropriate AAA messages (e.g., RADIUS client messages) which will help laptop 104 gain access to the appropriate resources it needs.

[0068] As previously mentioned, one problem posed by IoT devices (e.g., device 146) in a network such as 110 is that such devices are often “unmanaged” (e.g., not configured, provisioned, managed by network administrators, etc.), do not support protocols such as RADIUS, and thus cannot be integrated with AAA services in the way that other devices, such as laptop 104, can be. A variety of approaches can be adopted to provide IoT devices with network access within network 110, each of which has drawbacks. One option is for ACME to limit IoT devices to use of a guest network (e.g., via a pre-shared key). Unfortunately, this can limit the utility of the IoT device if it is unable to communicate with other nodes within network 110 to which it should legitimately have access. Another option is to allow IoT devices unlimited access to network 110, undermining the security benefits of having a segmented network. Yet another option is for ACME to manually specify rules that govern how a given IoT device should be able to access resources in network 110. This approach is generally untenable/unworkable for a variety of reasons. As one example, administrators may often not be involved in the deployment of IoT devices and thus will not know that policies for such devices should be included (e.g., in data appliance 102). Even where administrators might, e.g., manually configure policies for specific IoT devices in appliance 102 (e.g., for devices such as device 112), keeping such policies up to date is error prone and is generally untenable given the sheer number of IoT devices that might be present in network 110. Further, such policies will likely be simplistic (e.g., assigning CT scanner 112 by IP address and/or MAC address to a particular network) and not allow for finer grained control over connections/policies involving CT scanner 112 (e.g., dynamically including with policies applicable to surgical devices vs. point of sales terminals). Further, even where CT scanner 112 is manually included in data appliance 102, as previously mentioned, IoT devices will generally not support technologies such as RADIUS, and the benefits in having such AAA servers manage CT scanner 112’s networking access will be limited as compared to other types of devices (e.g., laptop 104) which more fully support such technologies. As will be described in more detail below, in various embodiments, data appliance 102 (e.g., via IoT server 134) is configured to provide support for AAA functionality to IoT devices present in network 110 in a passive manner.

[0069] In the following discussion, suppose that Alice’s department in ACME has recently purchased an interactive whiteboard 146 so that Alice can collaborate with other ACME employees as well as individuals outside of ACME (e.g., Bob, a researcher at Beta University having its own network 114, data appliance 136, and whiteboard 144). As part of the initial setup of whiteboard 146, Alice connects it to a power source and provides it with a wired connection (e.g., to an outlet in the conference room) or wireless credentials (e.g., the credentials for use by visitors of the conference room). When whiteboard 146 provisions a network connection, IoT server 134 (e.g., via a mechanism such as a network sensor as described above) will recognize whiteboard 146 as a new device within network 110. One action taken in response to this detection is to communicate with security platform 140 (e.g., creating a new record for whiteboard 146 in database 160 and retrieving any currently available contextual information associated with whiteboard 146 (e.g., obtaining the manufacturer of whiteboard 146, model of whiteboard 146, etc.)). Any contextual information provided by security platform 140 can be provided to (and stored at) data appliance 102 which can in turn provide it to directory service 154 and/or AAA server 156 as applicable. As applicable, IoT module 138 can provide updated contextual information about whiteboard 146 to data appliance 102 as it becomes available. And, data appliance 102 (e.g., via IoT server 134) can similarly provide security platform 140 with ongoing information about whiteboard 146. Examples of such information include observations about whiteboard 146’s behaviors on network 110 (e.g., statistical information about the connections it makes) which can be used by security platform 140 to build behavioral profiles for devices such as whiteboard 146. Similar behavior profiles can be built by security platform 140 for other devices (e.g., whiteboard 144). Such profiles can be used for a variety of purposes, including detecting anomalous behaviors. As one example, data appliance 148 can use information provided by security platform 140 to detect whether thermometer 152 is operating anomalously as compared to historic observations of thermometer 152, and/or as compared to other thermometers (not pictured) of similar model, manufacturer, or more generally, including thermometers present in other networks. If anomalous behavior is detected (e.g., by data appliance 148), appropriate remedial action can be automatically taken, such as restricting thermometer 152’s access to other nodes on network 116, generating an alert, etc.

[0070] Figure 3 illustrates an embodiment of a process for passively providing AAA support for an IoT device in a network. In various embodiments, process 300 is performed by IoT server 134. The process begins at 302 when a set of packets transmitted by an IoT device is obtained. As one example, when whiteboard 146 is first provisioned on network 110, such packets can be passively received by IoT server 134 at 302. Packets can also be received at 302 during subsequent use of whiteboard 146 (e.g., as Alice has whiteboarding sessions with Bob via whiteboard 144). At 304, at least one packet included in the set of data packets is analyzed. As one example of the processing performed at 304, IoT server 134 determines that the packets received at 302 are being transmitted by whiteboard 146. One action that IoT server 134 can take is to identify whiteboard 146 as a new IoT device on network 110 and obtain contextual information from IoT module 138 if available. At 306, IoT server 134 transmits, on behalf of the IoT device, a AAA message that includes information associated with the IoT device. An example of such a message is shown in Figure 4A. As previously mentioned, whiteboard 146 does not support the RADIUS protocol. However, IoT server 134 can generate a message such as is depicted in Figure 4A (e.g., using information received at 302 and also from security platform 140 as applicable) on behalf of whiteboard 146. As previously mentioned, when IoT server 134 provides information about whiteboard 146 to IoT module 138, IoT module 138 can take a variety of actions such as creating a record for whiteboard 146 in database 160 and populating that record with contextual information about whiteboard 146 (e.g., determining its manufacturer, model number, etc.). As additional contextual information about whiteboard 146 is gathered by security platform 140, its profile can be updated and propagated to data appliance 102. When whiteboard 146 is initially provisioned within network 110, no additional contextual information may be available (e.g., security platform 140 may not have such additional information or providing such information by security platform 140 to IoT server 134 may not be instant). Accordingly, and as is depicted in Figure 4A, the RADIUS message generated by IoT server 134 on behalf of whiteboard 146 may include limited information. As additional contextual information is received (e.g., by IoT server 134 from IoT module 138), subsequent RADIUS messages sent by IoT server 134 on behalf of whiteboard 146 can be enriched with such additional information. Examples of such subsequent messages are illustrated in Figures 4B and 4C. Figure 4B illustrates an example of a RADIUS message that IoT server 134 can send on behalf of whiteboard 146 once contextual information about whiteboard 146 has been provided by IoT module 138 (e.g., which contains a database of contextual information about a wide variety of IoT devices). In the example shown in Figure 4B, contextual information such as the manufacturer of the whiteboard (Panasonic) and the nature of the device (e.g., it is an interactive whiteboard) is included. Such contextual information can be used by AAA servers such as AAA server 156 to provide AAA services to whiteboard 146 (without having to modify whiteboard 146), such as by automatically provisioning it on a subnetwork dedicated to teleconferencing equipment. Other types of IoT devices can also be automatically grouped based on attributes such as device type, purpose, etc. (e.g., with critical surgical equipment automatically provisioned on a subnetwork dedicated to such equipment and thus isolated from other devices on the network). Such contextual information can be used to enforce policies such as traffic shaping policies, such as a policy giving preferential treatment to whiteboard 146 packets over social networking packets (e.g., as determined using APP-ID). Fine-grained policies could similarly be applied to communications with critical surgical equipment (e.g., preventing any device in communication with such equipment from having an out of date operating system, etc.). In the example shown in Figure 4C, yet more additional contextual information is included by IoT server 134 in RADIUS messages on behalf of whiteboard 146. Such additional contextual information includes additional attribute information such as the device model, operating system, and operating system version. When whiteboard 146 is initially provisioned in network 110, all of the contextual information depicted in Figure 4C will likely not be available. As whiteboard 146 is used within network 110 over time, additional contextual information can be collected (e.g., as IoT server 134 continues to passively observe packets from whiteboard 146 and provide information to security platform 140). This additional information can be leveraged (e.g., by data appliance 102) to enforce fine-grained policies. As one example, as shown in Figure 4C, whiteboard 146 runs a particular operating system that is Linux-based and has a version of 3.16. Frequently, IoT devices will run versions of operating systems that are not upgradable / not patchable. Such devices can pose security risks as exploits are developed for those operating systems. Data appliance 102 can implement security policies based on contextual information such as by isolating IoT devices having out of date operating systems from other nodes in network 110 (or otherwise limiting their access) while permitting less restrictive network access to those with current operating systems, etc.

[0071] Figures 4A-4C depict examples of RADIUS access request messages. As applicable, loT server 134 can generate a variety of types of RADIUS messages on behalf of whiteboard 146. As one example, RADIUS accounting start messages can be triggered when traffic from whiteboard 146 is first observed. Periodic RADIUS accounting interim update messages can be sent while the whiteboard is in use, and RADIUS accounting stop messages can be sent when whiteboard 146 goes offline.

[0072] V. IOT DEVICE DISCOVERY AND IDENTIFICATION

[0073] As discussed above, one task performed by security platform 140 (e.g., via loT module 138) is loT device classification. As an example, when loT server 134 transmits a device discovery message to loT module 138, loT module 138 attempts to determine a classification for the device and respond (e.g., with verdict 234 shown in Figure 2C). The device is associated by loT module 138 with a unique identifier so that, as applicable, subsequent classification of the device need not be performed (or, as applicable, performed less frequently than would otherwise be performed). Also as discussed above, the determined classification can be used (e.g., by data appliance 102) to enforce policies against traffic to/from the device.

[0074] A variety of approaches can be used to classify a device. A first approach is to perform classification based on a set of rules/heuristics that leverage the device’s static attributes, such as an organizationally unique identifier (OUI) appearing in the first three bytes of a Media Access Control (MAC) address, the types of applications it executes, etc. A second approach is to perform classification using machine learning techniques that leverage the device’s dynamic, but predefined attributes extracted from its network traffic (e.g., number of packets sent per day). Unfortunately, both of these approaches have weaknesses.

[0075] A rule-based approach generally requires that a separate rule be manually created for each type of loT device (describing which attributes/values should be used as a signature for that type of device). One challenge presented by this approach is in determining which signatures are both relevant to identifying a device and unique among other device signatures. Further, with a rule-based approach, only a limited number of static attributes that can easily be acquired from traffic are available (e.g., user agent, OUI, URL destination, etc.). The attributes generally need to be simple enough that they show up in a pattern that a regular expression can match on. Another challenge is in identifying new static attributes that may be present/identifiable as new devices enter a market (e.g., a new brand or model of CT scanner is offered). Another challenge is that all matching attributes in the network traffic need to be collected in order for a rule to be triggered. A verdict cannot be reached with fewer attributes. As an example, a signature may require a particular device with a particular OUI to connect to a particular URL. Having the OUI itself may already be a sufficient indicator of a device’s identity, but the signature will not trigger until the URL is also observed. This would cause further delay in determining the device’s identity. Another challenge is maintaining and updating signatures as static attributes for a device change over time (e.g., because of updates made to the device or services used by the device). As an example, a particular device may have initially been manufactured using a first type of network card, but over time the manufacturer may have switched to a different network card (which will exhibit a different OUI). If a rule-based system is unaware of the change, false positives may result. Yet another challenge is in scaling signature generation/verification when the number of new loT devices brought online each day approaches millions of new device instances. As a result, newly created rules may conflict with existing rules, causing false positives in classification.

[0076] A machine learning-based approach generally involves creating training models based on static and/or dynamic features extracted from network traffic. The result of prediction on network data from a new loT device is based on pre-trained models that provide an identity of a device with an associated accuracy. Examples of problems with a machine learning approach are as follows. The computation time required to reach a desired accuracy may be unacceptable where prediction must be performed on every new device, or on a device without a constant/unique ID (e.g., a MAC address). There may be thousands or tens of thousands of features that need to be generated, and those features may need to be transformed over a predefined time window, taking significant time (which may defeat the purpose of policy enforcement) before a sufficient number of features are available to make an effective prediction. Further, the cost could be high to build and maintain a large data pipeline for streaming network data if a goal is to minimize delay in prediction. Yet another problem is that noise brought in by irrelevant features specific to a given deployment environment could decrease the accuracy of a prediction. And, there is a challenge in maintaining and updating models when the number of device types reaches the tens of thousands or higher.

[0077] In various embodiments, security platform 140 addresses the problems of each of the two aforementioned approaches by using a hybrid approach to classification. In an example hybrid approach, a network behavior pattern identifier (also referred to herein as a pattern ID) is generated for each type of device. In various embodiments, a pattern ID is a list of attributes or sequence features combined, with their respective probabilities (as importance scores for a feature or behavior category), that forms a distinct network behavior description and can be used to identify the type of an loT device. The pattern IDs can be stored (e.g., in a database) and used to identify/verify the identity of devices.

[0078] When training on a set of attributes, certain approaches, such as an extreme gradient boosting framework (e.g., XGB), can provide a top list of important features (whether static attributes, dynamic attributes, and/or aggregated/transformed values). The pattern ID can be used to uniquely identify a device type once established. If particular features are dominant for a device (e.g., a particular static feature (such as contacting a highly-specific URL at boot time) identifies a device with 98% confidence), they can be used to automatically generate a rule. Even where no dominant features are present, a representation of the top features can nonetheless be used as the pattern ID (e.g., where a set of multiple features are concatenated into a pattern). By training on a data set that includes all known models (and all known loT devices), potential conflicts between models/uniquely identifying features can be avoided. Further, the pattern ID need not be human-readable (but can be stored, shared, and/or reused for identification purposes). Significant time savings can also be realized by this approach, such that it can be used in near-realtime classification. As soon as a dominant feature is observed, classification of a particular device can occur (instead of having to wait until a large number of features occur).
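
By way of illustration, the following minimal Python sketch shows one possible way a top list of important features from a gradient boosting model could be turned into a candidate pattern ID. The feature matrix, labels, and the choice of keeping the top five features are hypothetical placeholders and are not drawn from the figures described herein.

```python
# Minimal sketch (assumed data): derive a candidate pattern ID from the top-ranked
# features of a trained gradient boosting model. "X" (attribute columns extracted
# from network traffic) and "y" (device-type labels) are hypothetical placeholders.
import pandas as pd
import xgboost as xgb

def build_pattern_id(X: pd.DataFrame, y, top_n: int = 5) -> dict:
    model = xgb.XGBClassifier(n_estimators=100, max_depth=4)
    model.fit(X, y)
    # Rank features by the importance scores learned during training.
    ranked = sorted(zip(X.columns, model.feature_importances_),
                    key=lambda kv: kv[1], reverse=True)
    # Keep the top features, with their importance scores, as the pattern ID.
    return {name: float(score) for name, score in ranked[:top_n]}
```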

[0079] An example of data that could be used to create a pattern ID for a “Teem Room Display iPad” device could include the following (with a full list automatically generated through training of a multivariate model or training multiple binary models):

[0080] * Apple device (100%)

[0081] * Special iPad (>98.5%)

[0082] * Teem Room App (>95%)

[0083] * Meeting volume pattern VPM-17 (>95%)

[0084] * Server-in-the-cloud (>80%)

[0085] An example way of implementing a hybrid approach is as follows. A neural network-based machine learning system can be used for automated pattern ID training and generation. Examples of features that can be used to train the neural network model include both static features extracted from network traffic (e.g., OUI, hostname, TLS fingerprint, matched L7 payload signatures, etc.) and sequence features extracted from network traffic but not specific to the environment (e.g., applications, L7 attributes of an application, volume range converted to categorical features, etc.). A lightweight data pipeline can be used to stream selected network data for feature generation in realtime. A prediction engine can be used that imports models and provides caching to minimize delays in prediction. In prediction, a short (e.g., minute-based) aggregation can be used to stabilize the selected sequence features. Customized data normalization, enrichment, aggregation, and transformation techniques can be used to engineer the sequence features. A longer aggregation window can be used in training for better accuracy. Accuracy can be improved for prediction with features being merged and aggregated over time. A backend feedback engine can be used to route the results of a “slow path” prediction system (e.g., a machine learning-based approach that includes a device type modeling subsystem and a device group modeling subsystem) that helps expand attributes used for pattern ID prediction. A device group model can be trained to compensate for issues with a device type model when not enough samples or features are available, to improve the accuracy over an acceptable threshold (e.g., assigning prediction results based on a set of predefined types, some of which come with another subsystem to cluster similar types of devices that are unlabeled). Finally, a verdict module can be used to publish results from the realtime prediction engine.

[0086] Example advantages of a hybrid approach to classification such as is described herein are as follows. First, fast convergence can occur, allowing for a given device to be potentially identified within minutes or seconds. Second, it addresses individual problems of rule-based and machine learning-based systems. Third, it provides stability and consistency in prediction results. Fourth, it has scalability to support tens of thousands (or more) of different types of loT devices. Prediction is generally only required on new devices (even if a given device lacks a unique ID assignment, such as an L3 network traffic-based identification).

[0087] An embodiment of module 138 is shown in Figure 5. One example way of implementing loT module 138 is using a microservices-based architecture in which services are fine-grained and protocols are lightweight. Services can also be implemented using different programming languages, databases, hardware, and software environments, as applicable, and/or relatively small services that are messaging enabled, bounded by contexts, autonomously developed, independently deployable, decentralized, and built and released with automated processes.

[0088] As previously mentioned, in various embodiments, security platform 140 periodically receives information (e.g., from data appliance 102) about loT devices on a network (e.g., network 110). In some cases, the loT devices will have previously been classified by security platform 140 (e.g., a CT scanner that was installed on network 110 last year). In other cases, the loT devices will be newly seen by security platform 140 (e.g., the first time whiteboard 146 is installed). Suppose a given device has not previously been classified by security platform 140 (e.g., no entry for the device is present in database 286 which stores a set of unique device identifiers and associated device information). As illustrated in Figure 5, information about the new device can be provided to two different processing pipelines for classification. Pipeline 504 represents a “fast path” classification pipeline (corresponding to a pattern ID-based scheme) and pipeline 502 represents a “slow path” classification pipeline (corresponding to a machine learning-based scheme).

[0089] In pipeline 504, fast path feature engineering is performed (508) to identify applicable static and sequence features of the device. A fast path prediction is performed (510) using pattern IDs or previously built models (e.g., models based on top important features and built using offline processing pipeline 506). A confidence score for the device matching a particular pattern is determined (512). If the confidence score for a device meets a pre-trained threshold (e.g., based on the overall prediction accuracy of module 138 or components thereof, such as 0.9), a classification can be assigned to the device (in device database 516) or updated as applicable. Initially, the confidence score will be based on the near-realtime fast path processing. An advantage of this approach is that data appliance 102 can begin applying policies to the device’s traffic very quickly (e.g., within a few minutes of module 138 identifying the device as new/unclassified). Appliance 102 can be configured to fail-safe (e.g., reduce/restrict the device’s ability to access various network resources) or fail-danger (e.g., allow the device broad access) pending a classification verdict from system 140. As additional information becomes available (e.g., via the slow path processing), the confidence score can be based on that additional information, as applicable (e.g., increasing the confidence score or revising/correcting mistakes made during fast path classification).
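
A minimal Python sketch of the fast path verdict step described above follows. The threshold value, the device database, and the dictionary of per-pattern scores are hypothetical placeholders, not part of the embodiments shown in Figure 5.

```python
# Minimal sketch of the fast path verdict step: if the best pattern match meets the
# pre-trained threshold, record (or update) the classification in the device database.
# "device_db" and "pattern_scores" are hypothetical placeholders.
CONFIDENCE_THRESHOLD = 0.9  # e.g., derived from overall prediction accuracy

def assign_fast_path_verdict(device_id: str, pattern_scores: dict, device_db: dict):
    best_pattern, confidence = max(pattern_scores.items(), key=lambda kv: kv[1])
    if confidence >= CONFIDENCE_THRESHOLD:
        device_db[device_id] = {"profile": best_pattern,
                                "confidence": confidence,
                                "source": "fast_path"}
        return device_db[device_id]
    return None  # defer to slow path processing / clustering
```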

[0090] Examples of features (e.g., static attributes and sequence features) that can be used include the following. Pattern IDs can be any combination of these attributes with logical conditions included:

[0091] * OUI in mac address

[0092] * Hostname string from decoded protocols

[0093] * User agent string from HTTP, and other clear text protocols

[0094] * System name string from decoded SNMP responses

[0095] * OS, hostname, domain, and username from decoded LDAP protocols

[0096] * URLs from decoded DNS protocols

[0097] * SMB versions, commands, errors from decoded SMB protocols

[0098] * TCP flags

[0099] * Option strings from decoded DHCP protocols

[00100] * Strings from decoded loT protocols such as Digital Imaging and Communications in Medicine (DICOM)

[00101] * List of inbound applications from local network

[00102] * List of inbound applications from Internet

[00103] * List of outbound applications to local network

[00104] * List of outbound applications to Internet

[00105] * List of inbound server ports from local network

[00106] * List of inbound server ports from Internet

[00107] * List of outbound server ports to local network

[00108] * List of outbound server ports to Internet

[00109] * List of inbound IPs from local network

[00110] * List of inbound URLs from Internet

[00111] * List of outbound IPs to local network

[00112] * List of outbound URLs to Internet
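
Consistent with the statement above that pattern IDs can be any combination of these attributes with logical conditions included, the following minimal Python sketch illustrates evaluating such a combination. The attribute names, the example OUI, and the example URL are illustrative only and are not drawn from the disclosure.

```python
# Minimal sketch: evaluate a pattern ID expressed as conditions over the kinds of
# attributes listed above. The example pattern and its values are illustrative.
def matches_pattern(observed: dict, pattern: dict) -> bool:
    for attr, condition in pattern.items():
        value = observed.get(attr)
        if callable(condition):
            if not condition(value):          # logical condition, e.g., "list contains X"
                return False
        elif value != condition:              # exact match, e.g., OUI in MAC address
            return False
    return True

example_pattern = {
    "oui": "00:1A:2B",                                                  # hypothetical OUI
    "outbound_urls": lambda urls: bool(urls) and "updates.example.com" in urls,
    "inbound_server_ports_local": lambda ports: bool(ports) and 80 in ports,
}
```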

[00113] In some cases, the confidence score determined at 512 may be very low. One reason this can occur is because the device is a new type (e.g., a new type of loT toy or other type of product not previously analyzed by security platform 140) and there is no corresponding pattern ID available for the device on security platform 140. In such a scenario, information about the device and classification results can be provided to offline processing pipeline 506 which, e.g., can perform clustering (514) on the behaviors exhibited by the device and other applicable information (e.g., to determine that the device is a wireless device, acts like a printer, uses the DICOM protocol, etc.). Clustering information can be applied as labels and flagged for additional research (518) as applicable, with any subsequently seen similar devices automatically grouped together. If, as a result of research, additional information about a given device is determined (e.g., it is identified as corresponding to a new type of consumer-oriented loT meat thermometer), the device (and all other devices having similar properties) can be relabeled accordingly (e.g., as a brand XYZ meat thermometer) and an associated pattern ID generated and made usable by pipelines 502/504 as applicable (e.g., after models are rebuilt). In various embodiments, offline modeling 520 is a process that runs daily to train and update various models 522 used for loT device identification. In various embodiments, models are refreshed daily to cover new labeled devices, and are rebuilt weekly to reflect behavior changes (for slow path pipeline 502) and accommodate new features and data insights added during the week. Note that when adding new types of devices to security platform 140 (i.e., creating new device patterns), it is possible that multiple existing device patterns will be impacted, requiring that either the list of features or their importance scores be updated. This process can be performed automatically (a major advantage compared to a rule-based solution).

[00114] For fast path modeling, neural network-based models (e.g., FNN) and general machine learning models (e.g., XGB) are used extensively as multivariate classification models. Binary models are also built for selected profiles to help improve results and provide input to clustering. A binary model gives a yes/no answer to the identity of a device, or to certain behaviors of a device. For example, a binary model can be used to determine whether a device is a type of IP phone or unlikely to be an IP phone. A multivariate model will have many outputs normalized to a probability of 1. Each output corresponds to a type of device. Even though binary models are generally faster, a device would need to be run through many of them during prediction to find the right “yes” answer. A multivariate model can achieve that in one step.
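
The contrast between the two model types can be sketched minimally in Python as follows. The raw model outputs (“logits”), the device type names, and the decision threshold are hypothetical; no particular neural network library is assumed.

```python
# Minimal sketch: a multivariate model has many outputs normalized to a probability
# of 1 (softmax), while a binary model gives a single yes/no answer for one identity.
import numpy as np

def multivariate_verdict(logits: np.ndarray, device_types: list):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # outputs normalized to sum to 1
    return device_types[int(np.argmax(probs))], float(probs.max())

def binary_verdict(prob_is_ip_phone: float, threshold: float = 0.5) -> bool:
    return prob_is_ip_phone >= threshold       # yes/no answer for a single identity
```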

[00115] Slow path pipeline 502 is similar to pipeline 504 in that features are extracted (524). However, the features used by pipeline 502 will typically take a period of time to build. As one example, a feature of “number of bytes sent per day” will require a day to collect. As another example, certain usage patterns may take a period of time to occur/be observed (e.g., where a CT scanner is used hourly to perform scans (a first behavior), backs up data daily (a second behavior), and checks a manufacturer’s website for updates weekly (a third behavior)). Slow path pipeline 502 invokes a multivariate classifier (526) in an attempt to classify a new device instance on a full set of features. The features used are not limited to static or sequence features, but include volume and time series-based features as well. This is generally referred to as a stage one prediction. For certain profiles, when the stage one prediction result is not optimal (with a lower confidence), a stage two prediction is used in an attempt to improve the result. Slow path pipeline 502 invokes a set of decision tree classifiers (528) supported by additional imported device context to classify a new device instance. The additional device context is imported from an external source. As an example, a URL the device has connected to may have been given a category and a risk-based reputation which can be included as a feature. As another example, an application used by the device may have been given a category and a risk-based score which can be included as a feature. By combining the results from stage one prediction 526 and stage two prediction 528, a final verdict of the slow path classification can be reached with a derived confidence score.

[00116] There are generally two stages included in slow path pipeline 502. In the slow path pipeline, in some embodiments, stage one models are built with multivariate classifiers based on neural network techniques. Stage two of the slow path pipeline is generally a set of decision-based models with additional logic to handle probability-related exceptions of stage one. In prediction, stage two will consolidate input from stage one, applying rules and context to verify stage one output and generate a final output of the slow path. The final output will include an identity of the device, an overall confidence score, the pattern ID that can be used by fast path pipeline 504 in the future, and an explanation list. The confidence score is based on the reliability and accuracy of the model (models also have confidence scores), and the probability determined as part of the classification. The explanation list will include a list of features that contribute to the result. As mentioned above, investigation can be triggered if the result deviates from known pattern IDs.
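
A minimal Python sketch of consolidating the two slow path stages into a final output follows. The field names, the stage dictionaries, and the way the confidence is combined are hypothetical placeholders chosen only to mirror the description above.

```python
# Minimal sketch (assumed fields): combine stage one and stage two results into the
# final slow path output: identity, overall confidence, pattern ID, and explanation list.
def slow_path_verdict(stage1: dict, stage2: dict, model_confidence: float) -> dict:
    identity = stage2.get("identity") or stage1["identity"]        # stage two may override
    probability = stage2.get("probability", stage1["probability"])
    return {
        "identity": identity,
        "confidence": model_confidence * probability,   # model reliability x class probability
        "pattern_id": stage1.get("pattern_id"),         # reusable by the fast path later
        "explanation": stage1.get("top_features", []) + stage2.get("applied_rules", []),
    }
```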

[00117] In some embodiments, for slow path modeling, two types of models are built, one for individual identity and one for a group identity. It is often harder to tell the difference between, for example, two printers from different vendors or with different models than it is to differentiate a printer from a thermometer (e.g., because printers tend to exhibit similar network behavior, speak similar protocols, etc.). In various embodiments, various printers from various vendors are included into a group, and a “printer” model is trained for group classification. This group classification result may provide better accuracy than a specific model for a specific printer and can be used to update the confidence score of a device, or provide reference and verification to the individual profile identity-based classification, as applicable.

[00118] Figure 6 illustrates an example of a process for classifying an loT device. In various embodiments, process 600 is performed by security platform 140. Process 600 can also be performed by other systems as applicable (e.g., a system collocated on-premise with loT devices). Process 600 begins at 602 when information associated with a network communication of an loT device is received. As one example, such information is received by security platform 140 when data appliance 102 transmits to it a device discovery event for a given loT device. At 604, a determination is made that the device has not been classified (or, as applicable, that a re-classification should be performed). As one example, platform 140 can query database 286 to determine whether or not the device has been classified. At 606, a two-part classification is performed. As an example, a two-part classification is performed at 606 by platform 140 providing information about the device to both fast path classification pipeline 504 and slow path classification pipeline 502. Finally, at 608, a result of the classification process performed at 606, along with the summarized network behavior from baseline modeling (290) is provided to a security appliance configured to apply a policy to the loT device. Examples of such summarized network behavior include the most used applications, URLs, and other attributes that can help form a security appliance policy that can be “extracted” from machine-learning trained baseline models for loT device profiles.

As mentioned above, this allows for highly fine-grained security policies to be implemented in potentially mission critical environments with minimal administrative effort.

[00119] In a first example of performing process 600, suppose that an Xbox One game console has been connected to network 110. During classification, a determination can be made that the device has the following dominant features: a “vendor = Microsoft” feature with 100% confidence, a “communicates with Microsoft cloud server” feature with 89.7% confidence, and a “game console” feature with 78.5% confidence. These three features/confidence scores can collectively be matched against a set of profile IDs (a process performed by neural network-based prediction) to identify the device as being an Xbox One game console (i.e., a profile ID match is found that meets a threshold at 512). In a second example, suppose that an AudioCodes IP phone has been connected to network 110. During classification, a determination can be made that the device matches a “vendor = AudioCodes” feature with 100% confidence, an “is an IP audio device” feature with 98.5% confidence, and an “acts like a local server” feature with 66.5% confidence. These three features/confidence scores are also matched against the set of profile IDs, but in this scenario suppose that no existing profile ID is matched with sufficient confidence. Information about the device can then be provided to clustering process 514 and, as applicable, a new profile ID can ultimately be generated and associated with the device (and used to classify future devices).

[00120] As applicable, security platform 140 can recommend particular policies based on the determined classification information, described in more detail below. The following are examples of policies that can be enforced:

[00121] * deny Internet traffic for all Infusion Pumps (irrespective of vendor)

[00122] * deny Internet traffic for all GE ECG Machines except from/to certain GE hosts

[00123] * only allow internal traffic to Picture Archiving and Communication System (PACS) servers for all CT Scanners (irrespective of vendor)

[00124] VI. IOT SECURITY POLICY ON A FIREWALL

[00125] As mentioned above, loT devices are often special purpose devices (as contrasted with general computing devices such as laptops) that have predefined behaviors that can be observed on a network. As one example, irrespective of manufacturer (e.g., GE or Fujitsu), a CT scanner will have similar functionality/exhibit similar behaviors on a network as other CT scanners, such as transmitting captured patient images, using one or more specific protocols, to a networked image server for medical staff to examine (e.g., via an interface to the server). Other types of systems (e.g., Heating, Ventilation, and Air Conditioning (HVAC) systems) will exhibit their own set of similar, typically predefined behaviors (e.g., reporting a temperature value to a server once a minute via a particular protocol).

[00126] As mentioned above, analysis of these behaviors (e.g., by security platform 140) from traffic observed (e.g., by data appliance 102) allows particular loT devices to be identified (including by identifying particular instances of a device, model of device, manufacturer of device, type of device, etc.). Further, a by-product of the device identification is a device baseline model trained (e.g., by baseline modeling engine 290) for the purpose of the classification. As applicable, anomaly detection module 248 can be used to filter out known anomalous behaviors when creating a baseline for a device (or group of devices). This deep machine learning model captures the network behavior described above. This baseline model can not only be used in device identity prediction, but can also be used to generate a common list of behavior summaries ranked by how commonly a network behavior is seen for a device profile. One approach to behavior summarization is to extract and rank the top contributing features (used in device identification) from the baseline model during the training process, using ML algorithms such as XGB. Other approaches can also be used or combined (e.g., heuristic approaches). Top contributing features (subject to a reliability/confidence threshold) essentially highlight the most common network behaviors a type of device will exhibit, out of the thousands of attributes or features used in training, and can be used in recommendations (e.g., whitelisting/blacklisting particular URLs, protocols, etc.). The behavior summaries can include what applications are used, what connections are made to certain network domains, what payload is carried in the application, the volume, the time and frequency of the communication, etc. Each attribute is assigned a frequency category such as “rare,” “often,” or “regular.” Each attribute can also be assigned a range category such as “less than 1 MB per hour.” Anomalies (e.g., compromised or malfunctioning/misconfigured loT devices) can be detected (e.g., by data appliance 102 working in conjunction with anomaly detection module 248) as deviations from baselines. These attributes (and known vulnerabilities to particular attacks) can be used as a blueprint to automatically create a recommended firewall policy for constraining network activity associated with a particular loT device, type of device, etc. For example, a regularly used application and URL can be used to build an “allow” firewall policy. In another example, an application not part of the baseline behavior could be used to build a “deny” firewall policy. Users will be able to adjust the policy based on the frequency of a network behavior summarized from hundreds of thousands of similar devices. And, any known vulnerabilities (e.g., susceptibility of a particular device to a particular attack) can be separately modeled and incorporated into recommended policies, as applicable. An example of a top feature for a given device type is that the device checks for an update approximately once per day at a particular URL (e.g., www.siemens.com/updates). If a threshold number of devices sharing the device type exhibit similar baseline behavior, the feature can be selected as a recommended whitelist item for device profiles associated with that device type.
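
For illustration only, a minimal Python sketch of turning summarized baseline behaviors into recommended firewall rules follows. The behavior-summary fields and the mapping from frequency category to action are hypothetical; only the “rare”/“often”/“regular” categories are taken from the description above.

```python
# Minimal sketch (assumed fields): turn baseline behavior summaries into recommended
# allow/alert rules, with a trailing default deny for anything outside the baseline.
def recommend_policy(profile: str, behaviors: list) -> list:
    rules = []
    for b in behaviors:
        if b["frequency"] in ("regular", "often"):
            rules.append({"profile": profile, "app": b["app"],
                          "dest": b.get("url", "any"), "action": "allow"})
        elif b["frequency"] == "rare":
            rules.append({"profile": profile, "app": b["app"],
                          "dest": b.get("url", "any"), "action": "alert"})
    # Applications not part of the baseline behavior fall through to a deny rule.
    rules.append({"profile": profile, "app": "any", "dest": "any", "action": "deny"})
    return rules
```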

[00127] Figure 7A illustrates a first approach to implementing a set of policies pertaining to CT scanner / image servers that Acme could deploy within network 110. In particular, suppose Acme has deployed two types of CT scanners (made by GE and Fujitsu). An administrator of network 110 (hereinafter referred to as Charlie) can interact with an interface (e.g., provided by data appliance 102 and/or security platform 140 as applicable) and manually specify, for each CT scanner and image server within network 110, the protocols, ports, and IP addresses with which they are allowed to communicate.

Unfortunately, this approach can be time consuming and error prone. As one example, when a new CT scanner is added to the environment, Charlie will need to manually add to the rules shown in Figure 7A, and also potentially change/remove some of the rules (e.g., if the new CT scanner replaces an existing one and/or network information changes). If the total number of loT devices in an environment is low and the loT devices are assigned static IP addresses, manual maintenance of rules such as are shown in Figure 7A may be feasible. In practice, however, a given environment may have hundreds or thousands of loT devices (or more), and/or may use DHCP, and manual maintenance of rules is infeasible.

[00128] An alternate approach is to abstract applications (e.g., “DICOM-App,” indicating particular protocols/ports/etc. corresponding to network traffic used to communicate medical imaging information) and device types (e.g., GE-Xray-Device), in accordance with techniques described herein. An abstraction of the rules shown in Figure 7A is depicted in Figure 7B. Of note, Charlie does not need to provide the IP addresses, ports, or protocols of relevance to the loT policies, but rather can use the abstracted application types and device types. Policies such as are shown in Figure 7B can be compiled and used at runtime by data appliance 102. During compilation, the abstracted elements (e.g., GE-Xray-Device) will be replaced (e.g., with the IP address of each loT device matching that device identification), based on information stored on data appliance 102 (such as APP-ID information, IP information, and/or a dictionary of device types).
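
A minimal Python sketch of this compile-time substitution follows. The device-type-to-IP dictionary, the application definitions, and the rule fields are illustrative stand-ins for information such as the APP-ID data and device dictionary mentioned above.

```python
# Minimal sketch: expand an abstracted rule (Figure 7B style) into concrete entries by
# substituting device types and application names with IPs, protocols, and ports.
# "devices_by_type" and "app_definitions" are hypothetical placeholders.
def compile_rule(rule: dict, devices_by_type: dict, app_definitions: dict) -> list:
    app = app_definitions[rule["application"]]   # e.g., {"protocol": "tcp", "port": 104}
    return [
        {"src_ip": src, "dst_ip": dst, "protocol": app["protocol"],
         "port": app["port"], "action": rule["action"]}
        for src in devices_by_type[rule["source_type"]]   # e.g., "GE-Xray-Device"
        for dst in devices_by_type[rule["dest_type"]]     # e.g., "PACS-Server"
    ]
```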

[00129] Charlie can elect to manually write loT device rules (e.g., using the aforementioned abstractions if desired), but can also be provided with policy recommendations by security platform 140. The recommendations are based, in various embodiments, on device profiles (which include device type or other information) and baseline/typical behavior (across many different customer environments/deployments) of a set of devices sharing various characteristics. If Charlie accepts the recommended policies, appropriate rules (e.g., rules such as are shown in Figure 7B) can be automatically created (e.g., by security platform 140) and imported into security appliance 102 for enforcement. Security appliance 102 will learn device profiles of loT devices on its network and match applicable policies to devices as sources or destinations. As applicable, the policies can be translated (e.g., by security platform 140) into formats usable by types of infrastructure other than security appliance 102, such as network access controllers.

[00130] In the following discussion, suppose Acme has recently purchased a set of building automation devices (e.g., a set of badge readers), installed them inside various Acme facilities, and brought the devices online onto a portion of network 110. Using device identification/classification techniques described above, security platform 140 (working with data appliance 102) will identify that Acme has added the 28 new badge reader devices to its network and learn various behaviors taken by those specific badge reader devices within Acme’s network environment as they operate (e.g., during an initial observation period of a week or month). A portion of an administrative interface provided by security platform 140 is shown in Figure 8. Interface 800 indicates that Acme currently has 65 total kinds of loT devices (having corresponding profiles) operating in its environment. The newly added badge readers are depicted in row 802.

[00131] If Charlie clicks on link 804, he will be taken to the interface shown in Figure 9. Region 902 indicates that security platform 140 has identified the 28 new devices as matching a Siemens Building Technology Device profile with high confidence. Behavioral information that has been collected about the 28 devices while operating within the Acme environment is also shown, summarized in region 904. Collectively, the 28 devices run eight applications in the Acme environment, communicate with 23 destinations (22 within Acme and one outside), and currently have a risk score of 56. A count of how many of the 28 devices are using each application within the Acme environment is shown in region 906, and whether the destinations are internal or external is shown in region 908. Included in region 910 is a comparison of the number of applications used by the Siemens Building Technology Device devices in the Acme environment as contrasted to how those devices (sharing the same Siemens Building Technology Device profile) behave across the environments of other customers of security platform 140. As indicated in region 910, a typical customer deployment of Siemens Building Technology Devices uses three to five applications (912), making Acme’s deployment outside of the typical range (914). If Charlie hovers his cursor over region 912, he will be presented with a box that provides additional information about the comparison, such as:

“8 different applications were used by devices in this profile. Based on data from all loT security customers, the minimum number of applications used was 3, the average was 3, and the maximum was 5. Application usage by your Siemens Building Technology Device devices was higher than usual. Review the application list.”

[00132] Charlie can review the application list by scrolling further down interface 900. As shown in Figure 10, Charlie is reviewing use by the badge reader devices of the “dhcp” and “bacnet” applications after such scrolling. The “usage” designation (1002) indicates the frequency of network usage patterns (device profile + application + URL (e.g., “www.siemens.com/update”)) and/or destination profile (e.g., “PACS server”) for the loT devices sharing the profile. In various embodiments, the usage for each application is generated based on one month of initially collected traffic. Charlie can use the usage information in determining whether he would like to allow or block certain behaviors. For example, he could learn that, given bacnet is infrequently used, it should only be allowed with internal domains, or only allowed to predetermined external domains (e.g., based on his knowledge of Acme’s environment).

[00133] If Charlie clicks on region 916 of interface 900, he will be presented with two options for creating a set of policies that can be applied to the badge reader devices. As previously mentioned, Charlie can create his own policy set(s) for the badge reader devices manually (e.g., by interacting with various elements of an embodiment of interface 900). Charlie can also elect to load a recommended policy set that security platform 140 has generated using baseline/other information obtained from the environments of other customers of security platform 140. Once clicking region 916 and opting to use a recommended policy set (if available), security platform 140 will enumerate any available recommended policy sets and Charlie is able to download/apply them to the Acme environment, with the ability to refine/adjust the policies as applicable (e.g., by interacting with various functionality provided by interface 800).

[00134] Figure 11 illustrates an example of a process for generating a policy to apply to a communication involving an loT device. In various embodiments, process 1100 is performed by security platform 140. Process 1100 can also be performed by other systems as applicable (e.g., a system collocated on-premise with loT devices). Process 1100 begins at 1102 when information associated with a network communication of an loT device is received. As one example, such information is received by security platform 140 when data appliance 102 transmits to it a device discovery event for a given loT device (e.g., a badge reader device). At 1104, the received information is used to determine a device profile to associate with the loT device. As an example, a determination is made that the loT device is a Siemens SIMATEC RF 10000 device, having a particular serial/MAC address, a particular IP address, etc. In this example, the determined “device type” could be “Siemens Building Technology Device.” Device types (e.g., badge reader device) and device profiles (e.g., Siemens Building Technology Device) are generally referred to interchangeably in this section. However, multiple profiles can be created for a given device type (e.g., Siemens Building Technology Device devices located in a research area of Acme vs. a retail area of Acme), and a given profile can include multiple device types as applicable (e.g., the Siemens Building Technology Device profile can include badge reader devices and motion trigger sensors). Finally, at 1106, a recommended policy to be applied to the loT device by a security appliance is generated. As one example, instead of allowing access to all eight of the applications shown in Figure 9, security platform 140 could recommend a policy set that allows only the three most commonly used badge reader applications (or five most commonly used badge reader applications) corresponding to the information shown in region 910. If, once downloading and applying the recommended policy, Charlie needs to make adjustments (e.g., whitelist bacnet) to the recommended set, he can do so (e.g., by interacting with an “edit” option provided by interface 800).

[00135] VII. IOT DEVICE IDENTIFICATION WITH PACKET FLOW BEHAVIOR MACHINE LEARNING MODEL

[00136] A. Introduction

[00137] As described above, packet inspection, such as examining packet headers and/or examining payloads to perform content-based pattern or signature matches, can provide helpful information when attempting to identify/classify a device, such as an loT device. Such packet and content information will often include items such as organizationally unique identifiers (OUIs), source/destination ports, hostnames, application IDs, special destinations, user agent strings, DHCP fingerprints and DHCP vendor class identifiers, etc. Unfortunately, some devices can be difficult to identify through packet analysis. The following are six examples of such types of devices:

[00138] * An endpoint device for which OUI information is not available.

[00139] * Devices of multiple types which share a single OUI (e.g., X-ray machines and ultrasound machines all using wireless cards having the same OUI).

[00140] * Devices whose identities are discovered but corresponding classification rules are challenging to define (e.g., an X-ray machine that is discovered because other devices in the same subnet are also X-ray machines with similar traffic patterns or have similar hostnames, or is discovered because it communicates with a server whose name suggests that the client is the X-ray machine).

[00141] * Devices with encrypted traffic (that can make signature based classification difficult).

[00142] * Devices behind network appliances (e.g., routers or switches) where the device’s MAC appears to be the router/switch interface’s MAC, not the device’s MAC.

[00143] * Devices without hostnames (where existing identification rules often use a combination of a hostname and an OUI as an identification/classification rule).

[00144] In various embodiments, device identification/classification techniques described above are supplemented/complemented by deploying one or more machine learning models that make use of packet behavior information. Examples of such packet behavior information (described in more detail below) include: sequences of packet length (SPLN), sequences of packet inter-arrival time (SPIT), and Transport Layer Security (TLS) information, all of which can be used as features. In various embodiments, logistic regression is used to train a data set and the coefficients in the logistic regression can be used as a behavior signature for a device (which can then be used to identify/help identify the device). Returning to Figure 2F, in various embodiments, a training stage is performed by training module 241 using labeled events provided to aggregation module 238 for feature extraction and training. Trained models can be saved for later use, e.g., by analytics module 242. During production/operation, events (e.g., as initially provided by data appliance 102 and processed by various components of loT module 138) will similarly pass through aggregation module 238 and into analytics module 242, which will use the trained models to perform device identification on aggregated events (e.g., during inline analytics by analytics engine 272 as examples of binary/multi-class classifiers 282). As applicable (e.g., as new information is received about new types of devices), training module 241 can retrain existing models/create new models.

[00145] The techniques described herein are particularly well suited to an environment, such as is shown in Figure 1, where one or more data appliances (e.g., data appliances 102, 136, and 148) are present and used to help enforce security policies, in part because such data appliances can maintain session control blocks for each session, and the session control blocks will have the metadata/other information that machine learning models described herein will use. Further, if an existing data appliance does not support a particular feature extraction (e.g., sequence of packet lengths, sequence of packet inter-arrival time, and/or verbose TLS information), adding such functionality can be done efficiently, keeping overhead to a minimum.

[00146] B. Examples of Packet Behavior Information

[00147] The following are examples of features (associated with packet behavior parameters) that can be used to train one or more machine learning models and used, for example, along with other models (e.g., as described above) either individually or in combination to help identify/classify devices such as loT devices. The following example features can be combined with (if available) other features, such as metadata information associated with a packet (e.g., OUI, destination port, application ID, protocol, outgoing packet count, and outgoing packet byte count).

[00148] In an example embodiment, logistic regression is used (based on linear regression), where the result of the logistic regression is a probability which can be used as a confidence level for a device identification. An example formula of the logistic regression is as follows: p = 1 / (1 + b^-(β0 + β1x1 + β2x2 + ... + βmxm)), where usually b = e. Features are extracted from packet flows during sessions and can be used for both training models and, subsequently, for device identification by employing those resulting trained models. The resulting models will generate coefficients for each feature. Sets of such coefficients can be used (and are referred to herein) as behavior signatures for devices.

[00149] Other types of classifiers can also be used in accordance with techniques described herein (including by using the features described herein), such as Gaussian naive Bayes, k-nearest neighbor, decision tree, random forest, and/or support vector machine classifiers.
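
A minimal Python sketch of the logistic formula above, with b = e, follows. The coefficients and intercept stand in for a stored behavior signature; the feature vector is whatever has been extracted for the flow, and none of the names below are drawn from the figures.

```python
# Minimal sketch of p = 1 / (1 + e^-(β0 + β1*x1 + ... + βm*xm)): the coefficients
# (betas) and intercept act as the device behavior signature applied to a feature vector.
import math

def match_probability(features, coefficients, intercept):
    z = intercept + sum(c * x for c, x in zip(coefficients, features))
    return 1.0 / (1.0 + math.exp(-z))   # p in [0, 1], usable as a confidence level
```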

[00150] 1. Sequence of Packet Length (SPLN)

[00151] A sequence of packet length (SPLN) can be obtained for a given flow, for example, by determining the payload lengths of the first ten (or other appropriate number of) non-zero-length packets of the flow. If a given packet in the sequence has a zero length payload, that packet can be skipped and the next packet length examined (until the requisite ten or other amount of non-zero packet lengths are considered). In the event a given flow has fewer than the total number of non-zero packet lengths (e.g., the entire flow is only seven non-zero length packets), the SPLN can be padded with zeros at the end to reach the configured number of packet lengths (e.g., ten).
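
By way of illustration, a minimal Python sketch of SPLN extraction as described above follows; the per-packet payload length list is a hypothetical input.

```python
# Minimal sketch: SPLN is the first N non-zero payload lengths of a flow, skipping
# zero-length packets and zero-padding if the flow is shorter than N packets.
def extract_spln(payload_lengths, n: int = 10):
    spln = [length for length in payload_lengths if length > 0][:n]
    return spln + [0] * (n - len(spln))   # pad with zeros to the configured length
```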

[00152] 2. Sequence of Packet Inter-Arrival Time (SPIT)

[00153] Figure 12 illustrates a sequence of packet inter-arrival times (SPITs). As shown, “Time 1” represents the time between when a first packet (Packet 0) is received and when a second packet (Packet 1) is received. “Time 2” represents the time between when the second packet and a third packet (Packet 2) are received, and so on. As with the SPLN described above, in some embodiments, SPITs are only determined for non-zero length packets. If, for example, Packet 2 as illustrated in Figure 12 has a zero length payload, the delta between the times at which Packet 3 and Packet 1 are received is used (and a total of ten or other appropriate number of packet inter-arrival times is determined between the first eleven or other appropriate number of non-zero length packets).
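
A minimal Python sketch of SPIT extraction as described above follows; the list of (timestamp, payload length) tuples is a hypothetical input format.

```python
# Minimal sketch: SPIT is the list of inter-arrival deltas between consecutive
# non-zero-length packets, zero-padded to the configured count.
def extract_spit(packets, n: int = 10):
    # "packets" is a list of (timestamp, payload_length) tuples for one flow.
    times = [ts for ts, length in packets if length > 0][:n + 1]
    spit = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    return spit + [0.0] * (n - len(spit))
```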

[00154] 3. Transport Layer Security (TLS)

[00155] When a device (e.g., an loT device) engages in an encrypted traffic session, the first portion of the session will be a handshake in the clear. Information from that handshake can be captured and information about the handshake extracted. Examples of such information include: TLS version, an ordered list of offered ciphersuites (e.g., in the client hello message), list of supported TLS extensions (also, e.g., in the client hello message), selected ciphersuites (e.g., in the server hello reply message), selected TLS extensions (also, e.g., in the server hello reply message), and public key length. As with the aforementioned packet metadata, SPLN, and SPIT information, TLS information can be used as a feature in determining a packet behavior signature.
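
For illustration, a minimal Python sketch of assembling the TLS handshake features listed above follows. The “hello” dictionaries stand in for already-parsed client/server hello fields; no particular TLS parser or library is assumed.

```python
# Minimal sketch (assumed parsed input): collect TLS handshake fields into a feature
# record usable alongside SPLN/SPIT and packet metadata features.
def extract_tls_features(client_hello: dict, server_hello: dict) -> dict:
    return {
        "tls_version": client_hello.get("version"),
        "offered_ciphersuites": tuple(client_hello.get("ciphersuites", [])),  # ordered
        "client_extensions": tuple(client_hello.get("extensions", [])),
        "selected_ciphersuite": server_hello.get("ciphersuite"),
        "server_extensions": tuple(server_hello.get("extensions", [])),
        "public_key_length": server_hello.get("public_key_length"),
    }
```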

[00156] C. Example - SampleCo Smart Meter

[00157] The following is a simplified example of using techniques described herein to train (e.g., via training module 241) a model to recognize SampleCo brand Smart Meters (e.g., water meters that collect data on how much water is used, and when, and transmit the collected data to a network for billing, e.g., by a water company). The SampleCo Smart Meters exhibit various characteristics, such as by communicating over NTP, TCP, and FTP both internally (over an intranet) and externally (e.g., to an external server).

[00158] Positive and negative training data (with respect to SampleCo Smart Meters) is collected and provided to training module 241 (e.g., by a researcher). An example of such data is shown in Figure 13A. In the example shown, a total of ten training samples have been provided (numbered 0-9, respectively). Column 1302 indicates whether a given sample is a positive sample (indicated with a “1”) or a negative sample (indicated with a “0”). Thus, the first five devices are SampleCo Smart Meters, and the last five devices are not. The remaining nine columns (1304) each respectively correspond to a feature. Examples include OUIs (1306), protocol (1308), destination port (1310), packet lengths (1312), and packet timing information (1314). In various embodiments, such sample data can be obtained using one or more Python scripts configured to parse traffic flow logs. Examining training data for device 0 (a positive example of a SampleCo Smart Meter), it has an OUI of 6935 and is communicating using protocol 6 (FTP, for example) over port 21. The first three non-zero packets of device 0 were observed to be 21 bytes, 26 bytes, and 22 bytes, respectively. In this example data set, the first packet time (t0) is always zero, and the delta between each time (e.g., t1-t0, etc.) is determined by training module 241. Other forms of data can also be used (e.g., where the deltas are determined by the Python script and appear in place of each of the times listed in region 1314).

[00159] Figure 13B illustrates a result of training a model using the dataset shown in Figure 13A. In particular, the coefficients and intercepts for a model for determining whether traffic corresponds to a SampleCo Smart Meter are depicted. The coefficients/intercepts depicted in Figure 13B are collectively an example of a behavior signature of a SampleCo Smart Meter device and can be stored (e.g., in database 160 or other appropriate location) for use in subsequent identification of other SampleCo Smart Meter devices (e.g., as they are added to network 110 and their traffic is received by data appliance 102).
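
By way of illustration, a minimal Python sketch of the training step that would yield coefficients and an intercept like those depicted in Figure 13B follows, with toy feature rows and labels standing in for the data of Figure 13A (the values below are illustrative only).

```python
# Minimal sketch (toy values): train a logistic regression model on labeled samples and
# read out the coefficients/intercept that form the device behavior signature.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns stand in for OUI, protocol, port, and first packet lengths (illustrative).
X = np.array([
    [6935, 6, 21, 21, 26, 22],    # positive samples (SampleCo Smart Meter)
    [6935, 6, 21, 20, 27, 22],
    [1201, 17, 123, 48, 48, 48],  # negative samples (other devices)
    [1201, 17, 123, 48, 48, 49],
])
y = np.array([1, 1, 0, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)
signature = {"coefficients": model.coef_[0].tolist(),   # behavior signature
             "intercept": float(model.intercept_[0])}

# Scoring new devices (cf. Figures 14A/14B): column 0 is P(not a match), column 1 is P(match).
probabilities = model.predict_proba(X[:2])
```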

[00160] Figure 14A illustrates two examples of devices matching a device behavior signature. The two rows depicted in region 1402 correspond to data collected from two respective devices: device 0 and device 1. Both of these devices are SampleCo Smart Meter devices. Region 1404 indicates (for device 0 on the top line and device 1 on the bottom line) the probability that the respective device is not a SampleCo Smart Meter. Region 1406 indicates (for device 0 on the top line and device 1 on the bottom line) the probability that the respective device is a SampleCo Smart Meter. Both devices match the SampleCo Smart Meter device behavior profile with over a 99% probability.

[00161] Figure 14B illustrates two examples of devices not matching a device behavior signature. The two rows depicted in region 1452 correspond to data collected from two respective devices: device 0 and device 1. Both of these devices are not SampleCo Smart Meter devices. Region 1454 indicates (for device 0 on the top line and device 1 on the bottom line) the probability that the respective device is not a SampleCo Smart Meter.

Region 1456 indicates (for device 0 on the top line and device 1 on the bottom line) the probability that the respective device is a SampleCo Smart Meter. Both devices have a less than 1% probability of being SampleCo Smart Meter devices based on the behavior profile.

[00162] D. Process for Performing Device Identification

[00163] Models trained using techniques described herein can be used for a variety of purposes. As previously mentioned, one such purpose is efficiently performing inline device identification/classification (e.g., classification performed in realtime as traffic associated with the device is observed on a network). Another such purpose is performing offline analysis (e.g., on pcap or other previously captured network traffic files). In both inline and offline device classification, a device dataset and other traffic are used to train a model (e.g., using logistic regression) and generate a set of coefficients for each device. The set of coefficients for each device can be used as a device behavior signature. The set of coefficients can then be fed into a model for use in device identification. Features extracted (whether by a security appliance such as a firewall from live traffic, or from a pcap file) are provided to a device identification engine (e.g., implemented as analytics engine 242 and related elements depicted in Figure 2G). The device identification engine produces as output a probability that a given device matches a considered device behavior signature. Each possible device behavior signature can be looped through and the one with the highest probability (e.g., subject to a quality threshold) assigned as a classification of the device.
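
A minimal Python sketch of this identification loop follows; the signature dictionary layout and the 0.9 quality threshold are hypothetical placeholders, not values taken from the figures.

```python
# Minimal sketch: score extracted features against every stored behavior signature
# (coefficients + intercept) and keep the best match above a quality threshold.
import math

def classify_device(features, signatures: dict, threshold: float = 0.9):
    best_type, best_prob = None, 0.0
    for device_type, sig in signatures.items():
        z = sig["intercept"] + sum(c * x for c, x in zip(sig["coefficients"], features))
        p = 1.0 / (1.0 + math.exp(-z))      # probability of matching this signature
        if p > best_prob:
            best_type, best_prob = device_type, p
    return (best_type, best_prob) if best_prob >= threshold else (None, best_prob)
```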

[00164] Figure 15 illustrates an example of a process for classifying an loT device. In various embodiments, process 1500 is performed by security platform 140. Process 1500 can also be performed by other systems as applicable (e.g., a system collocated on-premise with loT devices). Process 1500 begins at 1502 when information associated with a network communication of an loT device is received. As one example, such information is received by security platform 140 when data appliance 102 transmits to it a device discovery event for a given loT device. At 1504, a determination is made that the device has not been classified (or, as applicable, that a re-classification should be performed). As one example, platform 140 can query database 286 to determine whether or not the device has been classified. At 1506, a comparison against one or more behavior signatures is performed. As an example, as previously mentioned, a variety of different machine-learning and rule/heuristic-based models can be employed by inline analytics engine 272. The models can be applied individually, or (more typically) collectively, with different types of models better at detecting particular kinds of devices. As previously mentioned, in some situations, machine learning models that make use of packet behavior information can be efficient at classifying devices. In various embodiments, at 1506, one or more such models are used to help identify a given loT device. In particular, at 1506, probability matches against such models are determined. And, if a given device has a probability match that exceeds a threshold against a given packet behavior signature, the device can be classified (e.g., by platform 140) as being of a particular type (e.g., the device type corresponding to the matched profile). Finally, at 1508, the classification of the loT device (e.g., made at 1506) is provided to a security appliance configured to apply a policy to the loT device. As mentioned above, this allows for highly fine-grained security policies to be implemented in potentially mission critical environments with minimal administrative effort.

[00165] Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided.

There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.