
Title:
DEPLOYMENT ASSURANCE CHECKS FOR MONITORING INDUSTRIAL CONTROL SYSTEMS
Document Type and Number:
WIPO Patent Application WO/2017/087145
Kind Code:
A1
Abstract:
This disclosure provides an apparatus and method for deployment assurance checks for monitoring industrial control systems and other systems. A method includes identifying (205), by a risk manager system (154), a plurality of connected devices (138, 132, 124, 116) that are vulnerable to cyber-security risks. The method includes determining (210) devices to be monitored (502) from the plurality of connected devices. The method includes evaluating (220) system resource usage (602, 604, 606) on each device to be monitored (502). The method includes providing (230) recommendations (702) to a user as to whether or not the user should proceed with the monitoring, based on the evaluation.

Inventors:
CARPENTER SETH G (US)
KNAPP ERIC D (US)
Application Number:
PCT/US2016/059667
Publication Date:
May 26, 2017
Filing Date:
October 31, 2016
Assignee:
HONEYWELL INT INC (US)
International Classes:
H04L29/06
Foreign References:
KR101233934B1 (2013-02-15)
KR20060017346A (2006-02-23)
US20090328209A1 (2009-12-31)
US20150007315A1 (2015-01-01)
JP2015103212A (2015-06-04)
US20080168560A1 (2008-07-10)
Other References:
See also references of EP 3378215A4
Attorney, Agent or Firm:
SZUCH, Colleen D. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method comprising:

identifying (205), by a risk manager system (154), a plurality of connected devices (138, 132, 124, 116) that are vulnerable to cyber-security risks;

determining (210), by the risk manager system (154), devices to be monitored (502) from the plurality of connected devices;

evaluating (220) system resource usage (602, 604, 606), by the risk manager system, on each device to be monitored (502); and

providing (230) recommendations (702) to a user, by the risk manager system (154), as to whether or not the user should proceed with the monitoring, based on the evaluation.

2. The method of claim 1, further comprising verifying (215) any software or hardware prerequisites on the devices to be monitored (502).

3. The method of claim 1, further comprising evaluating (225) security prerequisites on the connected devices (138, 132, 124, 116).

4. The method of claim 1, wherein identifying (205) the plurality of connected devices includes performing an automatic discovery process by the risk manager system (154).

5. A risk manager system (154) comprising:

a controller (156); and

a memory (158), the risk manager system (154) configured to:

identify (205) a plurality of connected devices (138, 132, 124, 116) that are vulnerable to cyber-security risks;

determine (210) devices to be monitored (502) from the plurality of connected devices;

evaluate (220) system resource usage (602, 604, 606) on each device to be monitored (502); and

provide (230) recommendations (702) to a user as to whether or not the user should proceed with the monitoring, based on the evaluation.

6. The risk manager system of claim 5, wherein the risk manager system (154) is further configured to verify (215) any software or hardware prerequisites on the devices to be monitored (502).

7. The risk manager system of claim 5, wherein the risk manager system (154) is further configured to evaluate (225) security prerequisites on the connected devices (138, 132, 124, 116).

8. The risk manager system of claim 5, wherein the risk manager system (154) is configured to identify (205) the plurality of connected devices by performing an automatic discovery process.

9. A non-transitory machine-readable medium (158) encoded with executable instructions that, when executed, cause one or more processors (156) of a risk manager system (154) to:

identify (205) a plurality of connected devices (138, 132, 124, 116) that are vulnerable to cyber-security risks;

determine (210) devices to be monitored (502) from the plurality of connected devices;

evaluate (220) system resource usage (602, 604, 606) on each device to be monitored (502); and

provide (230) recommendations (702) to a user as to whether or not the user should proceed with the monitoring, based on the evaluation.

10. The non-transitory machine-readable medium of claim 9, wherein the non-transitory machine-readable medium is further encoded with instructions to verify (215) any software or hardware prerequisites on the devices to be monitored (502).

11. The non-transitory machine-readable medium of claim 9, wherein the non-transitory machine-readable medium is further encoded with instructions to evaluate (225) security prerequisites on the connected devices (138, 132, 124, 116).

Description:
DEPLOYMENT ASSURANCE CHECKS FOR MONITORING INDUSTRIAL CONTROL SYSTEMS

TECHNICAL FIELD

[0001] This disclosure relates generally to network security. More specifically, this disclosure relates to an apparatus and method for deployment assurance checks for monitoring industrial control systems and other systems.

BACKGROUND

[0002] Processing facilities are often managed using industrial process control and automation systems. Conventional control and automation systems routinely include a variety of networked devices, such as servers, workstations, switches, routers, firewalls, safety systems, proprietary real-time controllers, and industrial field devices. Oftentimes, this equipment comes from a number of different vendors. In industrial environments, cyber-security is of increasing concern, and unaddressed security vulnerabilities in any of these components could be exploited by attackers to disrupt operations or cause unsafe conditions in an industrial facility.

SUMMARY

[0003] This disclosure provides an apparatus and method for deployment assurance checks for monitoring industrial control systems and other systems. A method includes identifying, by a risk manager system, a plurality of connected devices that are vulnerable to cyber-security risks. The method includes determining devices to be monitored from the plurality of connected devices. The method includes evaluating system resource usage, by the risk manager system, on each device to be monitored. The method includes providing recommendations to a user as to whether or not the user should proceed with the monitoring, based on the evaluation.

[0004] Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.
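For orientation only, the following is a minimal, hypothetical Python sketch of these summarized steps (identify, determine, evaluate, recommend). The function names, device data, and threshold are illustrative and are not prescribed by this disclosure.

# Hypothetical sketch of the four summarized steps; the device data and
# the 80% threshold are illustrative only.
def identify_connected_devices():
    # e.g., populated by automatic discovery or manual entry
    return [{"name": "operator-station-1", "cpu": 45, "memory": 60, "disk": 30},
            {"name": "plant-controller-1", "cpu": 92, "memory": 85, "disk": 70}]

def determine_devices_to_monitor(devices):
    # e.g., a user selection; here every discovered device is selected
    return devices

def evaluate_resource_usage(device, threshold=80):
    # all checked resources should be below the threshold
    return all(device[r] < threshold for r in ("cpu", "memory", "disk"))

def recommend(devices):
    # simple yes/no recommendation per device, based on the evaluation
    return {d["name"]: ("proceed" if evaluate_resource_usage(d) else "do not proceed")
            for d in devices}

print(recommend(determine_devices_to_monitor(identify_connected_devices())))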

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] For a more complete understanding of this disclosure, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:

[0006] FIGURE 1 illustrates an example industrial process control and automation system according to this disclosure;

[0007] FIGURE 2 illustrates a flowchart of a process in accordance with disclosed embodiments; and

[0008] FIGURES 3-8 illustrate example user interfaces that can be used as part of disclosed embodiments.

DETAILED DESCRIPTION

[0009] The figures, discussed below, and the various embodiments used to describe the principles of the present invention in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the invention. Those skilled in the art will understand that the principles of the invention may be implemented in any type of suitably arranged device or system.

[0010] Figure 1 illustrates an example industrial process control and automation system 100 according to this disclosure. As shown in Figure 1, the system 100 includes various components that facilitate production or processing of at least one product or other material. For instance, the system 100 is used here to facilitate control over components in one or multiple plants 101a-101n. Each plant 101a-101n represents one or more processing facilities (or one or more portions thereof), such as one or more manufacturing facilities for producing at least one product or other material. In general, each plant 101a-101n may implement one or more processes and can individually or collectively be referred to as a process system. A process system generally represents any system or portion thereof configured to process one or more products or other materials in some manner.

[0011] In Figure 1, the system 100 is implemented using the Purdue model of process control. In the Purdue model, “Level 0” may include one or more sensors 102a and one or more actuators 102b. The sensors 102a and actuators 102b represent components in a process system that may perform any of a wide variety of functions. For example, the sensors 102a could measure a wide variety of characteristics in the process system, such as temperature, pressure, or flow rate. Also, the actuators 102b could alter a wide variety of characteristics in the process system. The sensors 102a and actuators 102b could represent any other or additional components in any suitable process system. Each of the sensors 102a includes any suitable structure for measuring one or more characteristics in a process system. Each of the actuators 102b includes any suitable structure for operating on or affecting one or more conditions in a process system.

[0012] At least one network 104 is coupled to the sensors 102a and actuators 102b. The network 104 facilitates interaction with the sensors 102a and actuators 102b. For example, the network 104 could transport measurement data from the sensors 102a and provide control signals to the actuators 102b. The network 104 could represent any suitable network or combination of networks. As particular examples, the network 104 could represent an Ethernet network, an electrical signal network (such as a HART or FOUNDATION FIELDBUS network), a pneumatic control signal network, or any other or additional type(s) of network(s).

[0013] In the Purdue model, “Level 1” may include one or more controllers 106, which are coupled to the network 104. Among other things, each controller 106 may use the measurements from one or more sensors 102a to control the operation of one or more actuators 102b. For example, a controller 106 could receive measurement data from one or more sensors 102a and use the measurement data to generate control signals for one or more actuators 102b. Each controller 106 includes any suitable structure for interacting with one or more sensors 102a and controlling one or more actuators 102b.
Each controller 106 could, for example, represent a proportional-integral-derivative (PID) controller or a multivariable controller, such as a Robust Multivariable Predictive Control Technology (RMPCT) controller or other type of controller implementing model predictive control (MPC) or other advanced predictive control (APC). As a particular example, each controller 106 could represent a computing device running a real-time operating system.

[0014] Two networks 108 are coupled to the controllers 106. The networks 108 facilitate interaction with the controllers 106, such as by transporting data to and from the controllers 106. The networks 108 could represent any suitable networks or combination of networks. As a particular example, the networks 108 could represent a redundant pair of Ethernet networks, such as a FAULT TOLERANT ETHERNET (FTE) network from HONEYWELL INTERNATIONAL INC.

[0015] At least one switch/firewall 110 couples the networks 108 to two networks 112. The switch/firewall 110 may transport traffic from one network to another. The switch/firewall 110 may also block traffic on one network from reaching another network. The switch/firewall 110 includes any suitable structure for providing communication between networks, such as a HONEYWELL CONTROL FIREWALL (CF9) device. The networks 112 could represent any suitable networks, such as an FTE network.

[0016] In the Purdue model, “Level 2” may include one or more machine-level controllers 114 coupled to the networks 112. The machine-level controllers 114 perform various functions to support the operation and control of the controllers 106, sensors 102a, and actuators 102b, which could be associated with a particular piece of industrial equipment (such as a boiler or other machine). For example, the machine-level controllers 114 could log information collected or generated by the controllers 106, such as measurement data from the sensors 102a or control signals for the actuators 102b. The machine-level controllers 114 could also execute applications that control the operation of the controllers 106, thereby controlling the operation of the actuators 102b. In addition, the machine-level controllers 114 could provide secure access to the controllers 106. Each of the machine-level controllers 114 includes any suitable structure for providing access to, control of, or operations related to a machine or other individual piece of equipment. Each of the machine-level controllers 114 could, for example, represent a server computing device running a MICROSOFT WINDOWS operating system. Although not shown, different machine-level controllers 114 could be used to control different pieces of equipment in a process system (where each piece of equipment is associated with one or more controllers 106, sensors 102a, and actuators 102b).

[0017] One or more operator stations 116 are coupled to the networks 112. The operator stations 116 represent computing or communication devices providing user access to the machine-level controllers 114, which could then provide user access to the controllers 106 (and possibly the sensors 102a and actuators 102b). As particular examples, the operator stations 116 could allow users to review the operational history of the sensors 102a and actuators 102b using information collected by the controllers 106 and/or the machine-level controllers 114. The operator stations 116 could also allow the users to adjust the operation of the sensors 102a, actuators 102b, controllers 106, or machine-level controllers 114.
In addition, the operator stations 116 could receive and display warnings, alerts, or other messages or displays generated by the controllers 106 or the machine-level controllers 114. Each of the operator stations 116 includes any suitable structure for supporting user access and control of one or more components in the system 100. Each of the operator stations 116 could, for example, represent a computing device running a MICROSOFT WINDOWS operating system.

[0018] At least one router/firewall 118 couples the networks 112 to two networks 120. The router/firewall 118 includes any suitable structure for providing communication between networks, such as a secure router or combination router/firewall. The networks 120 could represent any suitable networks, such as an FTE network.

[0019] In the Purdue model, “Level 3” may include one or more unit-level controllers 122 coupled to the networks 120. Each unit-level controller 122 is typically associated with a unit in a process system, which represents a collection of different machines operating together to implement at least part of a process. The unit-level controllers 122 perform various functions to support the operation and control of components in the lower levels. For example, the unit-level controllers 122 could log information collected or generated by the components in the lower levels, execute applications that control the components in the lower levels, and provide secure access to the components in the lower levels. Each of the unit-level controllers 122 includes any suitable structure for providing access to, control of, or operations related to one or more machines or other pieces of equipment in a process unit. Each of the unit-level controllers 122 could, for example, represent a server computing device running a MICROSOFT WINDOWS operating system. Although not shown, different unit-level controllers 122 could be used to control different units in a process system (where each unit is associated with one or more machine-level controllers 114, controllers 106, sensors 102a, and actuators 102b).

[0020] Access to the unit-level controllers 122 may be provided by one or more operator stations 124. Each of the operator stations 124 includes any suitable structure for supporting user access and control of one or more components in the system 100. Each of the operator stations 124 could, for example, represent a computing device running a MICROSOFT WINDOWS operating system.

[0021] At least one router/firewall 126 couples the networks 120 to two networks 128. The router/firewall 126 includes any suitable structure for providing communication between networks, such as a secure router or combination router/firewall. The networks 128 could represent any suitable networks, such as an FTE network.

[0022] In the Purdue model, “Level 4” may include one or more plant-level controllers 130 coupled to the networks 128. Each plant-level controller 130 is typically associated with one of the plants 101a-101n, which may include one or more process units that implement the same, similar, or different processes. The plant-level controllers 130 perform various functions to support the operation and control of components in the lower levels. As particular examples, the plant-level controller 130 could execute one or more manufacturing execution system (MES) applications, scheduling applications, or other or additional plant or process control applications.
Each of the plant-level controllers 130 includes any suitable structure for providing access to, control of, or operations related to one or more process units in a process plant. Each of the plant-level controllers 130 could, for example, represent a server computing device running a MICROSOFT WINDOWS operating system.

[0023] Access to the plant-level controllers 130 may be provided by one or more operator stations 132. Each of the operator stations 132 includes any suitable structure for supporting user access and control of one or more components in the system 100. Each of the operator stations 132 could, for example, represent a computing device running a MICROSOFT WINDOWS operating system.

[0024] At least one router/firewall 134 couples the networks 128 to one or more networks 136. The router/firewall 134 includes any suitable structure for providing communication between networks, such as a secure router or combination router/firewall. The network 136 could represent any suitable network, such as an enterprise-wide Ethernet or other network or all or a portion of a larger network (such as the Internet).

[0025] In the Purdue model, “Level 5” may include one or more enterprise-level controllers 138 coupled to the network 136. Each enterprise-level controller 138 is typically able to perform planning operations for multiple plants 101a-101n and to control various aspects of the plants 101a-101n. The enterprise-level controllers 138 can also perform various functions to support the operation and control of components in the plants 101a-101n. As particular examples, the enterprise-level controller 138 could execute one or more order processing applications, enterprise resource planning (ERP) applications, advanced planning and scheduling (APS) applications, or any other or additional enterprise control applications. Each of the enterprise-level controllers 138 includes any suitable structure for providing access to, control of, or operations related to the control of one or more plants. Each of the enterprise-level controllers 138 could, for example, represent a server computing device running a MICROSOFT WINDOWS operating system. In this document, the term “enterprise” refers to an organization having one or more plants or other processing facilities to be managed. Note that if a single plant 101a is to be managed, the functionality of the enterprise-level controller 138 could be incorporated into the plant-level controller 130.

[0026] Access to the enterprise-level controllers 138 may be provided by one or more operator stations 140. Each of the operator stations 140 includes any suitable structure for supporting user access and control of one or more components in the system 100. Each of the operator stations 140 could, for example, represent a computing device running a MICROSOFT WINDOWS operating system.

[0027] Various levels of the Purdue model can include other components, such as one or more databases. The database(s) associated with each level could store any suitable information associated with that level or one or more other levels of the system 100. For example, a historian 141 can be coupled to the network 136. The historian 141 could represent a component that stores various information about the system 100. The historian 141 could, for instance, store information used during production scheduling and optimization. The historian 141 represents any suitable structure for storing and facilitating retrieval of information.
Although shown as a single centralized component coupled to the network 136, the historian 141 could be located elsewhere in the system 100, or multiple historians could be distributed in different locations in the system 100.

[0028] In particular embodiments, the various controllers and operator stations in Figure 1 may represent computing devices. For example, each of the controllers 106, 114, 122, 130, 138 could include one or more processing devices 142 and one or more memories 144 for storing instructions and data used, generated, or collected by the processing device(s) 142. Each of the controllers 106, 114, 122, 130, 138 could also include at least one network interface 146, such as one or more Ethernet interfaces or wireless transceivers. Also, each of the operator stations 116, 124, 132, 140 could include one or more processing devices 148 and one or more memories 150 for storing instructions and data used, generated, or collected by the processing device(s) 148. Each of the operator stations 116, 124, 132, 140 could also include at least one network interface 152, such as one or more Ethernet interfaces or wireless transceivers.

[0029] As noted above, cyber-security is of increasing concern with respect to industrial process control and automation systems. Unaddressed security vulnerabilities in any of the components in the system 100 could be exploited by attackers to disrupt operations or cause unsafe conditions in an industrial facility. In industrial environments, cyber-security is of increasing concern, and it is difficult to quickly determine the potential sources of risk to the whole system. Modern control systems contain a mix of Windows servers and workstations, switches, routers, firewalls, safety systems, proprietary real-time controllers and field devices, any of which can be implemented by one or another of the components in system 100.

[0030] Often these systems are a mixture of equipment from different vendors. Sometimes the plant operators do not have a complete understanding or inventory of all the equipment running in their site. Unaddressed security vulnerabilities in any of these components could be exploited by attackers to disrupt production or cause unsafe conditions in the plant. Disclosed embodiments can address the potential vulnerabilities in all these systems, prioritize the vulnerabilities based on the risk to the system, and guide the user to mitigate the vulnerabilities.

[0031] Any monitoring system will necessarily have some performance impact on the monitored system, no matter how small. It will also have some requirements or prerequisites that must be met in order to monitor that system. The highest priorities for a control system are safety and production, so it is critical that any monitoring of that system does not jeopardize either of these aspects. This is true whether an agent is installed on the end devices for monitoring or whether “agentless” protocols are used for monitoring (which take advantage of hooks and APIs already present in the end devices).

[0032] A monitoring system then should be able to verify these requirements and ensure that it will not have an adverse impact on system safety or production prior to starting its monitoring. This can be accomplished (among other ways) using a risk manager 154. Among other things, the risk manager 154 supports a technique for monitoring a system such as an industrial control system and checking for proper deployment of the devices and components of that system.
[0033] In this example, the risk manager 154 includes one or more processing devices 156; one or more memories 158 for storing instructions and data used, generated, or collected by the processing device(s) 156; and at least one network interface 160. Each processing device 156 could represent a microprocessor, microcontroller, digital signal processor, field programmable gate array, application specific integrated circuit, or discrete logic. Each memory 158 could represent a volatile or non-volatile storage and retrieval device, such as a random access memory or Flash memory. Each network interface 160 could represent an Ethernet interface, wireless transceiver, or other device facilitating external communication. The functionality of the risk manager 154 could be implemented using any suitable hardware or a combination of hardware and software/firmware instructions. In some embodiments, the risk manager 154 includes, or is in communication with, a database 155. The database 155 denotes any suitable structure facilitating storage and retrieval of information.

[0034] Although Figure 1 illustrates one example of an industrial process control and automation system 100, various changes may be made to Figure 1. For example, a control and automation system could include any number of sensors, actuators, controllers, servers, operator stations, networks, risk managers, and other components. Also, the makeup and arrangement of the system 100 in Figure 1 are for illustration only. Components could be added, omitted, combined, or placed in any other suitable configuration according to particular needs. Further, particular functions have been described as being performed by particular components of the system 100. This is for illustration only. In general, control and automation systems are highly configurable and can be configured in any suitable manner according to particular needs. In addition, Figure 1 illustrates an example environment in which the functions of the risk manager 154 can be used. This functionality can be used in any other suitable device or system.

[0035] In some risk manager implementations, the user installing and configuring the risk manager would be responsible for verifying that each end device is ready for monitoring. In many cases, a user will simply attempt to monitor the end device and hope there are no adverse effects. The attempt to monitor the device may also fail, leaving the user to contact technical support or attempt to troubleshoot independently.

[0036] Disclosed embodiments automate those checks, provide feedback to the installing user, and provide a recommendation as to whether the user should proceed. They can identify situations where monitoring is not recommended (e.g., the user might not want to risk upsetting the system by monitoring a device already suffering from low resources) and situations where monitoring would fail if attempted (e.g., trying to push a monitoring agent to the device would be blocked by the current security settings). A recommendation as to whether the user should proceed can be as simple as a “yes/no” indicator, or can include more specific recommendations as to actions that should or should not be taken if the user is to proceed.

[0037] The techniques disclosed herein can be applied to both “agent-based” and “agentless” monitoring of end devices. The distinction between the two is that agent-based monitoring will add a resident program to the monitored device, while agentless monitoring takes advantage of features already present in the end device to perform the monitoring functions. Often agentless monitoring takes advantage of APIs present in the operating system (such as the Windows Management Instrumentation (WMI) available on Microsoft Windows platforms) or otherwise already present on the end device.
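By way of illustration only, the following is a minimal sketch of an agentless resource query over WMI. It assumes Windows targets, the third-party Python "wmi" package (which is not part of this disclosure), and placeholder host and credential values; the WMI classes queried (Win32_OperatingSystem, Win32_Processor, Win32_LogicalDisk) are standard Windows classes, but any comparable WMI client could be used.

# Minimal sketch, assuming Windows targets, remote WMI connectivity, and the
# third-party Python "wmi" package (pip install wmi). Host and credentials
# below are placeholders.
import wmi

def query_remote_resources(host, user, password):
    conn = wmi.WMI(computer=host, user=user, password=password)

    # Win32_OperatingSystem reports physical memory in kilobytes
    os_info = conn.Win32_OperatingSystem()[0]
    memory_free_pct = 100.0 * int(os_info.FreePhysicalMemory) / int(os_info.TotalVisibleMemorySize)

    # Win32_Processor reports an instantaneous load percentage per processor
    cpu_used_pct = max(int(cpu.LoadPercentage or 0) for cpu in conn.Win32_Processor())

    # DriveType=3 restricts the query to local fixed disks
    disk_used_pct = {d.DeviceID: 100.0 * (1.0 - int(d.FreeSpace) / int(d.Size))
                     for d in conn.Win32_LogicalDisk(DriveType=3)}

    return {"cpu_used_pct": cpu_used_pct,
            "memory_free_pct": memory_free_pct,
            "disk_used_pct": disk_used_pct}

# Example call with placeholder values:
# query_remote_resources("operator-station-1", r"PLANT\svc_monitor", "********")

In an agent-based deployment, equivalent data could instead be collected by the resident agent and reported back to the risk manager.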
[0038] For example, in an agent-based system, once devices are discovered, the agent needs to be deployed. This may require that some security settings be disabled, and requires a certain amount of free CPU, memory and disk space. The pre-check process can perform such functions as checking security configurations to see if the agent can be pushed without any changes; checking memory to ensure that there is sufficient free RAM to install the agent and still have a healthy margin of available memory; checking CPU utilization to ensure that there is sufficient free CPU to install the agent and still have a healthy margin of available CPU; and checking a hard disk drive (HDD) to ensure that there is sufficient free HDD space to install the agent and still have a healthy margin of available storage.

[0039] Figure 2 illustrates a flowchart of a method 200 in accordance with disclosed embodiments, as can be performed, for example, by risk manager 154 or another device or controller (referred to as the “system” or “risk manager system” below).

[0040] The system identifies a plurality of connected devices that are vulnerable to cyber-security risks (205). These devices can be any computing devices, such as any of the components of Fig. 1 or those described below. Devices can be automatically discovered and added by the system. Devices can be added manually by the user, for example by entering such information as an IP address, an SNMP community string, or others.

[0041] The system determines devices to be monitored (210). This can include receiving manual entries from a user that identify the devices to be monitored, using automatic discovery methods such as enumerating PCs in Active Directory for Windows domains or searching through machines within the subnet of the monitoring host, or other approaches. The devices to be monitored can include any of the devices described herein, including those described above with respect to Fig. 1 and those described below.

[0042] The system can verify any relevant software or hardware prerequisites on the devices to be monitored (215). For example, in implementations using agent-based monitoring, when monitoring Windows PCs, the agent can use Microsoft PowerShell scripts on the local machine to collect data, and some of these scripts require PowerShell version 2.0 or later. The system can validate whether or not the appropriate software version is present on each device to be monitored through a remote WMI query against that machine. This step may not be necessary for agentless monitoring approaches, as the mechanism for data collection may be an integral part of the system.

[0043] The system evaluates system resource usage on each device to be monitored to ensure that the monitoring will not push it beyond acceptable thresholds (220). Useful resources to check include, but are not limited to, CPU, memory and disk space. A threshold can be selected for each resource checked (e.g., all validated resource usage should be below 80%). This can be used to help make the user recommendation described herein. Again, in the example of a risk manager looking at a Windows PC, this data can be collected via remote WMI queries.
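As an illustration of this threshold evaluation (and of the pre-check functions described in paragraph [0038]), the following minimal Python sketch flags any resource whose utilization leaves no healthy margin. The 80% figure is only the example threshold given above, and the data source (for example, remote WMI queries or a resident agent) is abstracted into a simple mapping.

# Minimal sketch of the resource pre-check; "usage" maps resource name to
# current utilization percentage, however collected.
EXAMPLE_THRESHOLD_PCT = 80.0

def check_resource_headroom(usage, threshold=EXAMPLE_THRESHOLD_PCT):
    # Resources at or above the threshold leave no healthy margin for monitoring
    failures = {name: pct for name, pct in usage.items() if pct >= threshold}
    return {"ok_to_monitor": not failures, "failures": failures}

# Hypothetical measurements for one device to be monitored:
print(check_resource_headroom({"cpu": 35.0, "memory": 88.5, "disk": 41.2}))
# -> {'ok_to_monitor': False, 'failures': {'memory': 88.5}}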
[0044] The system evaluates security prerequisites on the connected devices (225). These might include checking firewall settings on each device and validating that the monitoring process is running from an account that is recognized and has sufficient privileges on the end device. This process can also include receiving credentials from a user or sending credentials to the connected devices.

[0045] The system provides recommendations to the user as to whether or not they should proceed with the monitoring (230). If a device fails hardware, software or security prerequisite checks, the user can be advised that monitoring may not work correctly (or at all) and can be provided with steps that need to be taken on the end device. If a device fails resource checks, the user can be advised of possible system instability if they proceed with monitoring. The recommendations can be displayed to a user, transmitted to a user device, or otherwise.

[0046] As part of providing recommendations, the system can also offer some level of enforcement of the recommendations. Possible levels of enforcement include:

[0047] a. No enforcement – The user is presented with the information gathered by the system, but uses that information to decide how to proceed with no further restriction from the system.

[0048] b. “Soft” enforcement – The user is presented with a recommendation for each device, and the option whether or not to proceed for a given device is preselected. The user can still override that selection (they may be presented with a warning message that they must acknowledge first).

[0049] c. “Hard” enforcement – The user is presented with a recommendation for each device, but cannot proceed with monitoring on any device that does not pass the system checks.
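For illustration only, a minimal Python sketch of these three enforcement levels follows; the enumeration and the decision logic are assumptions about one possible implementation and are not required by this disclosure.

# Minimal sketch of the no/soft/hard enforcement levels described above.
from enum import Enum

class Enforcement(Enum):
    NONE = "none"  # information only; the user decides with no restriction
    SOFT = "soft"  # recommendation preselected; user may override after acknowledging a warning
    HARD = "hard"  # monitoring cannot proceed on a device that fails the checks

def may_proceed(passed_checks, level, override=False, warning_acknowledged=False):
    if level is Enforcement.NONE:
        return True
    if level is Enforcement.SOFT:
        return passed_checks or (override and warning_acknowledged)
    return passed_checks  # Enforcement.HARD

# Example: a device that failed its checks under soft enforcement
print(may_proceed(False, Enforcement.SOFT, override=True, warning_acknowledged=True))  # True
print(may_proceed(False, Enforcement.HARD))                                            # False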
[0050] The system can provide the recommendations by means of a notification, such as by email, SMS, OPC, system center, or otherwise.

[0051] The steps in the processes described herein, unless specifically described otherwise, may be performed concurrently, sequentially, or repeatedly, may be omitted, or may be performed in a different order.

[0052] Figure 3 illustrates a user interface 300 that may be presented to a user, for example by the risk manager 154 or another system, for interacting with a user as part of processes described herein. This figure illustrates a device configuration 302, showing a plurality of connected devices 304 that have been identified by the system. Also shown is an “add new devices” button 306; when a user selects this button, the system will receive new device information from the user either manually or via a wizard. Devices 304 can be automatically discovered and added by the system. Devices can be added manually by the user, for example by entering such information as an IP address, an SNMP community string, or others.

[0053] Figure 4 illustrates a user interface 400 that may be presented to a user, for example by the risk manager 154 or another system, for interacting with a user as part of processes described herein. This figure illustrates an interface to add new devices, either manually or by automatic discovery by the system of PCs, network devices, or other connected devices. Similar techniques can be used to delete a connected device from the monitoring list.

[0054] Figure 5 illustrates a user interface 500 that may be presented to a user, for example by the risk manager 154 or another system, for interacting with a user as part of processes described herein. This figure illustrates an interface for determining devices to monitor 502, in this case by receiving a selection from a user.

[0055] Figure 6 illustrates a user interface 600 that may be presented to a user, for example by the risk manager 154 or another system, for interacting with a user as part of processes described herein. This figure illustrates an interface for evaluating resources of the devices to be monitored, such as CPU usage 602, physical memory usage 604, and disk space usage 606.

[0056] Figure 7 illustrates a user interface 700 that may be presented to a user, for example by the risk manager 154 or another system, for interacting with a user as part of processes described herein. This figure illustrates an interface for providing credentials to install or deploy an agent on a connected device. A similar interface can be used to provide credentials to remove an agent from a connected device. This figure also illustrates an example of recommendations 702 that are provided to a user as to whether or not the user should proceed with the monitoring, based on the evaluation, including specific actions that are recommended to be taken or not taken (such as installing an agent or adding firewall exceptions).

[0057] Figure 8 illustrates a user interface 800 that may be presented to a user, for example by the risk manager 154 or another system, for interacting with a user as part of processes described herein. This figure illustrates an interface for showing results of adding new devices and installing or deploying an agent on the connected devices.

[0058] Note that the risk manager 154 and/or the other processes, devices, and techniques described herein could use or operate in conjunction with any combination or all of various features described in the following previously-filed patent applications (all of which are hereby incorporated by reference):

• U.S. Patent Application No. 14/482,888 entitled “DYNAMIC QUANTIFICATION OF CYBER-SECURITY RISKS IN A CONTROL SYSTEM”;

• U.S. Provisional Patent Application No. 62/036,920 entitled “ANALYZING CYBER-SECURITY RISKS IN AN INDUSTRIAL CONTROL ENVIRONMENT”;

• U.S. Provisional Patent Application No. 62/113,075 entitled “RULES ENGINE FOR CONVERTING SYSTEM-RELATED CHARACTERISTICS AND EVENTS INTO CYBER-SECURITY RISK ASSESSMENT VALUES” and corresponding non-provisional U.S. Patent Application 14/871,695;

• U.S. Provisional Patent Application No. 62/113,221 entitled “NOTIFICATION SUBSYSTEM FOR GENERATING CONSOLIDATED, FILTERED, AND RELEVANT SECURITY RISK-BASED NOTIFICATIONS” and corresponding non-provisional U.S. Patent Application 14/871,521;

• U.S. Provisional Patent Application No. 62/113,100 entitled “TECHNIQUE FOR USING INFRASTRUCTURE MONITORING SOFTWARE TO COLLECT CYBER-SECURITY RISK DATA” and corresponding non-provisional U.S. Patent Application 14/871,855;

• U.S. Provisional Patent Application No. 62/113,186 entitled “INFRASTRUCTURE MONITORING TOOL FOR COLLECTING INDUSTRIAL PROCESS CONTROL AND AUTOMATION SYSTEM RISK DATA” and corresponding non-provisional U.S. Patent Application 14/871,732;

• U.S. Provisional Patent Application No. 62/113,165 entitled “PATCH MONITORING AND ANALYSIS” and corresponding non-provisional U.S. Patent Application 14/871,921;

• U.S. Provisional Patent Application No. 62/113,152 entitled “APPARATUS AND METHOD FOR AUTOMATIC HANDLING OF CYBER-SECURITY RISK EVENTS” and corresponding non-provisional U.S. Patent Application 14/871,503;

• U.S. Provisional Patent Application No. 62/114,928 entitled “APPARATUS AND METHOD FOR DYNAMIC CUSTOMIZATION OF CYBER-SECURITY RISK ITEM RULES” and corresponding non-provisional U.S. Patent Application 14/871,605;

• U.S. Provisional Patent Application No. 62/114,865 entitled “APPARATUS AND METHOD FOR PROVIDING POSSIBLE CAUSES, RECOMMENDED ACTIONS, AND POTENTIAL IMPACTS RELATED TO IDENTIFIED CYBER-SECURITY RISK ITEMS” and corresponding non-provisional U.S. Patent Application 14/871,814;

• U.S. Provisional Patent Application No. 62/114,937 entitled “APPARATUS AND METHOD FOR TYING CYBER-SECURITY RISK ANALYSIS TO COMMON RISK METHODOLOGIES AND RISK LEVELS” and corresponding non-provisional U.S. Patent Application 14/871,136; and

• U.S. Provisional Patent Application No. 62/116,245 entitled “RISK MANAGEMENT IN AN AIR-GAPPED ENVIRONMENT” and corresponding non-provisional U.S. Patent Application 14/871,547.

[0059] In some embodiments, various functions described in this patent document are implemented or supported by a computer program that is formed from computer readable program code and that is embodied in a computer readable medium. The phrase “computer readable program code” includes any type of computer code, including source code, object code, and executable code. The phrase “computer readable medium” includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A “non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device.

[0060] It may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The terms “application” and “program” refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer code (including source code, object code, or executable code). The term “communicate,” as well as derivatives thereof, encompasses both direct and indirect communication. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrase “associated with,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like. The phrase “at least one of,” when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, “at least one of: A, B, and C” includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.

[0061] While this disclosure has described certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure, as defined by the following claims.