

Title:
SAFETY INSTRUMENTED FUNCTION ANALYZER SYSTEM AND METHOD
Document Type and Number:
WIPO Patent Application WO/2020/163000
Kind Code:
A1
Abstract:
A safety instrumented function analyzer (SIFA) is presented. The SIFA reads in a SIS logic from a Safety Instrumented System, analyzes it according to failure mode reasoning, and produces a list of output modes. The SIFA also calculates a probability of failure on demand as well as a spurious trip rate.

Inventors:
JAHANIAN HAMID (AU)
Application Number:
PCT/US2019/062569
Publication Date:
August 13, 2020
Filing Date:
November 21, 2019
Assignee:
SIEMENS AG (DE)
SIEMENS ENERGY INC (US)
International Classes:
G06F11/00; G05B9/02; G06F11/07
Foreign References:
US20170185470A12017-06-29
CN103984796A2014-08-13
Other References:
"Programmable controllers - Part 6: Functional safety", IEC 61131-6:2012, IEC, 3, RUE DE VAREMBÉ, PO BOX 131, CH-1211 GENEVA 20, SWITZERLAND, 2 October 2012 (2012-10-02), pages 1 - 203, XP082008116
Attorney, Agent or Firm:
FIL, Michele S. (US)
Claims:
What is claimed is:

1. A system (100) for determining failure modes of a safety instrumented system, the system comprising:

at least one processor (102) configured via executable instructions (106) included in at least one memory (104) to:

receive the safety instrumented system (SIS) logic (122); analyze the SIS logic using failure mode reasoning to determine one or more SIS failure modes (124); and

output the determined SIS failure modes (124).

2. The system (100) according to claim 1, wherein the at least one processor is configured to:

calculate and output a probability of failure on demand (PFD) (126) associated with the determined SIS failure modes (124).

3. The system according to claims 1 or 2, wherein the at least one processor is configured to:

calculate and output a spurious trip rate (128) associated with the determined

SIS failure modes.

4. The system according to claim 1, wherein the SIS logic comprises a plurality of function blocks, and wherein the failure mode reasoning includes analyzing each function block in the SIS logic by determining a plurality of function block input signals that result in a failure mode at the function block output, and wherein a failure block for each function block is created describing failure behavior of the function block.

5. The system according to claim 4, wherein the plurality of failure blocks are interconnected relating a SIS output to a SIS input, and wherein from the interconnection the determined SIS failure modes are extracted.

6. The system according to claim 5, wherein the SIS input is an output of a sensor.

7. The system according to claim 2, wherein the probability of failure on demand is calculated using the equation:

PFD_SIF = PFD_S + PFD_SIS + PFD_FE

wherein PFD_S, PFD_SIS, and PFD_FE represent the PFD of input sensors, the PFD of the SIS logic, and the PFD of output elements, respectively.

8. A method (800) for safety instrumented function analyzation comprising through operation of at least one processor (102): receiving (804) a safety instrumented system (SIS) logic (122); analyzing (806) the SIS logic using failure mode reasoning to determine one or more SIS failure modes (124); and outputting (808) the determined SIS failure modes.

9. The method according to claim 8, further comprising: calculating and outputting a probability of failure (126) associated with the determined SIS failure modes.

10. The method according to claims 8 or 9, further comprising:

calculating and outputting a spurious trip rate (128) associated with the determined failure modes.

11. The method according to claim 8, wherein the SIS logic comprises a plurality of function blocks,

wherein the failure mode reasoning includes analyzing each function block in the SIS logic by determining a plurality of input signals that result in a failure mode at the function block output, and

wherein a failure block for each function block is created describing failure behavior of the function block.

12. The method according to claim 11, wherein the plurality of failure blocks are interconnected relating a SIS output to a SIS input, and wherein from the interconnection the determined SIS failure modes are extracted.

13. The method according to claim 12, wherein the SIS input is an output of a sensor.

14. A non-transitory computer readable medium (118) encoded with processor executable instructions (106) that when executed by at least one processor (102), cause the at least one processor to carry out a method for safety instrumented function analyzation according to claim 8.

15. The non-transitory computer readable medium according to claim 14, further comprising:

calculating and outputting a probability of failure (126) associated with the determined SIS failure modes.

16. The non-transitory computer readable medium according to claims 14 or 15, further comprising: calculating and outputting a spurious trip rate (128) associated with the determined failure modes.

17. The non-transitory computer readable medium according to claim 14, wherein the SIS logic comprises a plurality of function blocks, wherein the failure mode reasoning includes analyzing each function block in the SIS logic by determining a plurality of input signals that result in a failure mode at the function block output, and wherein a failure block for each function block is created describing failure behavior of the function block.

18. The non-transitory computer readable medium according to claim 17, wherein the plurality of failure blocks are interconnected relating a SIS output to a SIS input, and wherein from the interconnection the determined SIS failure modes are extracted.

19. The non-transitory computer readable medium according to claim 18, wherein the SIS input is an output of a sensor.

Description:
SAFETY INSTRUMENTED FUNCTION ANALYZER SYSTEM AND

METHOD

BACKGROUND

1. Field

[0001] The present disclosure is directed, in general, to identification and calculation of failure modes of safety instrumented systems.

2. Description of the Related Art

[0002] Safety instrumented systems (SIS) are responsible for protecting industrial plants and the people in or near the plant environment from major hazardous events. A typical SIS includes hardware components and software programs (i.e. SIS logic). Through SIS logic, the CPU processes the input signals received from sensors, for example, and decides when a safety action should be triggered at its output in order to drive at least one final element and isolate the plant from hazard. The safety function achieved by the combination of sensors, SIS logic, and output elements is referred to as a Safety Instrumented Function (SIF).

[0003] Currently, various reliability methods, such as Failure Mode and Effect Analysis (FMEA) and Fault Tree Analysis (FTA) are used to identify and quantify failure modes in SIS. These methods mainly address the random hardware failure of SIS components, in isolation from SIS logic. While this approach may offer a quick and easy reliability model, it does not guarantee a comprehensive and accurate representation of SIS failure. This can only be achieved by taking into account the SIS logic, as it is the SIS logic that transforms the readings of sensors into a correct (or incorrect) indication of hazard, which may in turn trigger (or block) the demand for safety action.

[0004] Consequently, safety instrumented systems may benefit from improvements.

SUMMARY

[0005] Briefly described, aspects of the present disclosure relate to a system and a method for safety instrumented function analyzation.

[0006] A first aspect provides a system for determining failure modes of a safety instrumented system. The system includes at least one processor configured via executable instructions included in at least one memory to receive the safety instrumented system (SIS) logic, analyze the SIS logic using failure mode reasoning to determine one or more SIS failure modes, and output the determined SIS failure modes.

[0007] A second aspect provides a method for safety instrumented function analyzation. The method includes the steps of receiving a safety instrumented system (SIS) logic, analyzing the SIS logic using failure mode reasoning to determine one or more SIS failure modes, and outputting the determined SIS failure modes.

[0008] A third aspect provides a non-transitory computer readable medium encoded with processor executable instructions that when executed by at least one processor, cause the at least one processor to carry out the method for safety instrumented function analyzation.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] Fig. 1 illustrates a functional block diagram of an example system that facilitates safety instrumented function analyzation.

[0010] Fig. 2 illustrates a block diagram of a safety instrumented system (SIS) configuration.

[0011] Fig. 3 illustrates a block diagram of an example of safety instrumented function logic (SIF) with two inputs.

[0012] Fig. 4 illustrates an example of failure mode reasoning (FMR) blocks corresponding to individual function blocks (FBs) in SIF logic.

[0013] Fig. 5 illustrates a flow diagram of an example methodology that facilitates safety instrumented function analyzation.

[0014] Fig. 6 illustrates a block diagram of a data processing system in which an embodiment may be implemented.

DETAILED DESCRIPTION

[0015] Various technologies that pertain to systems and methods that facilitate a safety instrumented function analyzer (SIFA) will now be described with reference to the drawings, where like reference numerals represent like elements throughout. The drawings discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged apparatus. It is to be understood that functionality that is described as being carried out by certain system elements may be performed by multiple elements. Similarly, for instance, an element may be configured to perform functionality that is described as being carried out by multiple elements. The numerous innovative teachings of the present application will be described with reference to exemplary non-limiting embodiments.

[0016] It should be appreciated that energy production plants, chemical plants, manufacturing plants, and/or any other type of industrial plant may employ safety instrumented systems (SIS) that are responsible for protecting the plant and plant personnel from major hazardous events. As illustrated in Fig. 2, a typical SIS configuration 200 may comprise input devices 210, such as sensors, that are interfaced to a SIS processor (e.g., CPU) 220 comprising SIS logic. Through the SIS logic 220, the SIS CPU 220 processes the sensor readings as input signals 215 and decides when a safety action should be triggered in an output element 230, which may include closing a safety valve as shown.

[0017] SIS logic 220 is typically developed in graphic editors and in the form of Function Block Diagrams. A Function Block Diagram typically includes Function Blocks (FBs) and their interconnections. The interconnections may be represented by a variable.

[0018] A SIS output 225 is determined by the transition of SIS inputs 215 through the SIS logic 220. In normal situations, where SIS sensors 210 are not faulty, the real state of the process is measured by the sensors 210. Subsequently, the internal variables and SIS outputs will reflect the intended states. If a sensor 210 is affected by a fault, the reported value will differ from the real state and the resultant values at the internal variables and SIS output 225 will differ from their intended states.

[0019] Dangerous failure of SIS components can result in the SIS being unavailable and ultimately lead to a hazardous event at the industrial plant. An example embodiment is operative to minimize such hazardous events by effective identification of SIS failure modes. Such identification of SIS failure modes is heavily dependent on attaining a detailed and accurate understanding of the SIS logic 220, for it is the SIS logic 220 that determines the SIS behavior by mapping its inputs 210 to the outputs 230. Alternatively, in the absence of a comprehensive, accurate understanding of SIS logic 220, a safety practitioner might rely on a simplified generic reliability model, which can potentially suffer from inaccuracy and model uncertainty. As a result, the time and effort put on modelling the failure modes may not deliver a realistic representation of the SIS and its actual failure modes.

[0020] To obtain an accurate model for all failure modes of a SIS, the SIS practitioner may need to have a sound knowledge of specific SIS technology in order to understand the functionality of each function block in detail, and also to be prepared to review complex SIS logic. Not only is such review process a challenging task, if practical at all, but it will also cost considerable time and energy, and the final outcome will still be prone to human error at both model and data levels anyway.

[0021] To overcome these deficiencies, an example embodiment implements a safety instrumented function analyzer (SIFA) that automates the entire process of failure mode identification and calculation of reliability measures of safety systems. With reference to Fig. 1, an example safety function analyzer system 100 is illustrated that may overcome the inadequacies of the previously described approaches to evaluating SIS. The system 100 employs one or more data processing systems 110 (e.g., a PC, workstation, server). Each data processing system 110 may comprise at least one processor 102 (e.g., a microprocessor/CPU). The processor 102 may be configured to carry out various processes and functions described herein by executing, from a memory 104, computer/processor executable instructions 106 corresponding to one or more software and/or firmware applications 108 or portions thereof that are programmed to cause the at least one processor 102 to carry out the various processes and functions described herein.

[0022] Such a memory 104 may correspond to an internal or external volatile or nonvolatile processor memory 116 (e.g., main memory, RAM, and/or CPU cache) that is included in the processor and/or in operative connection with the processor. Such a memory may also correspond to non-transitory nonvolatile data stores 118 (e.g., flash drive, SSD, hard drive, ROM, EPROMs, optical discs/drives, databases, or other non-transitory computer readable media) in operative connection with the processor.

[0023] The described data processing system 110 may optionally include at least one display device 112 and at least one input device 114 in operative connection with the processor. The display device, for example, may include an LCD or AMOLED display screen, monitor, VR head-set, projector, or any other type of display device capable of displaying outputs from the processor. The input device, for example, may include a mouse, keyboard, touch screen, touch pad, trackball, buttons, keypad, game controller, gamepad, camera, microphone, motion sensing devices that capture motion gestures, or other type of input device capable of providing user inputs to the processor.

[0024] The data processing system 110 may be configured to execute one or more applications 108 that carry out functions (or steps) that implement the SIFA 120. These functions may include receiving (e.g., reading data corresponding to) a SIS logic 122 from some local or external file or data store, analyzing the SIS logic 122, and determining a list of SIS failure modes 124. These functions may also include calculating a probability of failure on demand 126 and a spurious trip rate 128 for the determined failure modes. The SIFA may output such a list of failure modes, probability of failure, and spurious trip rate. Such SIFA outputs 130 may, for example, be displayed on the display screen 112 and/or be saved to a data store as a file (e.g., a CSV file, spreadsheet) or as a data record in a database.
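The paragraph above mentions saving SIFA outputs 130 as a CSV file. The following Python fragment is a purely illustrative sketch of that step; the column layout, summary row, and all names are invented here, since the patent does not specify a file schema.

```python
# Hypothetical rendering of SIFA outputs 130 (failure modes, PFD, STR)
# as CSV text. Schema and names are illustrative assumptions only.
import csv
import io

def results_as_csv(failure_modes, pfd, spurious_trip_rate):
    """Render the failure-mode list plus a PFD/STR summary row as CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["failure_mode", "PFD", "STR"])
    for mode in failure_modes:            # one row per determined failure mode
        writer.writerow([mode, "", ""])
    writer.writerow(["TOTAL", pfd, spurious_trip_rate])
    return buf.getvalue()
```

The same rows could equally be written to a spreadsheet or a database record, as the paragraph suggests.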

[0025] An example embodiment of the SIFA is configured to determine an answer to a first question, based on the SIS logic 220, of: What are the combinations of SIS inputs 215 that can result in an undesired state at the final SIS output 225? Two example undesired output states for reliability engineering are: Dangerous Undetected (DU), when the Safety Instrumented Function (SIF) is unable to trip the plant if a real hazardous situation arises; and Spurious Trip (ST), when the safety system initiates a plant trip when there is no real hazard.

[0026] Once those combinations of undesired states (i.e., the failure modes) are known, SIFA may be configured to determine an answer to a second question of: What is the likelihood of each scenario happening? To answer this question, SIFA performs the SIS Probability of Failure on Demand (PFD) and Spurious Trip Rate (STR) calculation using standard formulas provided by functional safety standards, such as IEC 61508, IEC 61511, and ISA 84.

[0027] To answer the first question, identifying the failure modes, SIFA may be configured to carry out a reverse analysis of the SIS logic 220. By applying this technique, namely Failure Mode Reasoning (FMR), SIFA begins at the very final SIS output 225, works backward through the SIS logic 220, analyzes each individual function block (FB) in reverse order, and finds out the combinations of input signals that can result in an undesired SIS output. At each step of the FMR process, a single FB is analyzed by considering the functionality of the FB and the given failure mode at the FB output. The result of such local analysis is the list of possible failure modes at the FB inputs. Once the local failure modes of all FBs are identified, the failure modes may be combined and then simplified into one global implication statement, which relates the SIS output to the SIS inputs. The statement may then be used to calculate the PFD or the STR.
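The reverse analysis described above can be sketched in code. This is a hypothetical illustration only: the patent does not publish SIFA's internal data structures, so the dictionary-based block graph and the function name are assumptions. It orders function blocks backward from the final SIS output toward the inputs, which is the traversal order the FMR process uses.

```python
# Illustrative sketch only: block representation and names are assumed,
# not taken from the patent.

def backward_order(blocks, final_output):
    """Return function blocks in reverse order, starting from the block
    that drives the final SIS output and walking back toward the inputs."""
    by_output = {out: blk for blk in blocks for out in blk["outputs"]}
    ordered, frontier, seen = [], [final_output], set()
    while frontier:
        signal = frontier.pop(0)
        blk = by_output.get(signal)       # None => signal is a SIS input
        if blk is None or id(blk) in seen:
            continue
        seen.add(id(blk))
        ordered.append(blk)
        frontier.extend(blk["inputs"])    # keep walking toward the sensors
    return ordered
```

Each block visited in this order would then be analyzed locally, as the paragraph describes, before the local results are combined into one global implication statement.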

[0028] As a layer of protection, the reliability of a SIF is commonly measured by its Probability of Failure on Demand (PFD). The Probability of Failure on Demand may be calculated by the equation: PFD_SIF = PFD_S + PFD_SIS + PFD_FE

[0029] with PFD_S, PFD_SIS, and PFD_FE being the PFD of the sensors, SIS logic, and final elements, respectively. The PFD is calculated using the failure rates of the SIS components. Thus, for answering the second question (i.e., calculating the PFD and STR), the probability of each failure mode occurring is calculated by an AND combination of all constituting events in that failure mode, and the overall probability of failure is calculated by an OR combination of the individual probabilities of all failure modes. The probability calculations in SIFA may be carried out using a library of failure data for industrial components, e.g., safety I/O modules and CPUs. The SIFA may also provide functionality for a practitioner to enter failure data that is not in the library. It should be noted that the final probability values represent the SIF end-to-end probability values and not just those of the SIS components.
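The AND/OR roll-up described above can be sketched as follows. This is a hedged illustration, not the patent's implementation: it assumes independent events, combines the events of one failure mode as a product, and combines modes as 1 - prod(1 - p); a rare-event approximation, as commonly used in functional safety practice, would simply sum the mode probabilities instead.

```python
# Illustrative probability roll-up under an independence assumption;
# function names are invented for this sketch.

def failure_mode_probability(event_probs):
    """AND combination: every constituting event must occur."""
    p = 1.0
    for q in event_probs:
        p *= q
    return p

def overall_failure_probability(failure_modes):
    """OR combination over the individual failure-mode probabilities."""
    p_no_mode = 1.0
    for events in failure_modes:
        p_no_mode *= 1.0 - failure_mode_probability(events)
    return 1.0 - p_no_mode

def pfd_sif(pfd_sensors, pfd_sis, pfd_final_elements):
    """End-to-end SIF PFD per the additive equation of paragraph [0028]."""
    return pfd_sensors + pfd_sis + pfd_final_elements
```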

[0030] The task of detailed logic analysis using reverse analysis remains a challenge for humans. Even a small SIS logic 300, such as illustrated in Fig. 3, requires solving dozens of combinations of intermediate variables (numbers in Fig. 3 such as 01, 02, 03, etc.) before one can get from SIS logic output 330 to SIS logic inputs 310.

[0031] The number of such intermediate combinations grows exponentially as the number of relevant inputs increases and the logic becomes more complex. Manual analysis becomes an exhaustive process, human error comes into play, and despite the time and effort spent, the final result remains uncertain and unreliable. By implementing its FMR method, SIFA can perform analyses that are impractical for humans to carry out. The result is an inclusive list of failure modes and an accurate estimation of probability. Time and project budget are saved, and the certainty and accuracy of reliability modelling are substantially enhanced.

[0032] A core part of SIFA is its Failure Mode Reasoning (FMR) algorithm. With the help of FMR, SIFA is enabled to process complex SIS logic and identify all the failure modes of input signals that can result in undesired SIS outputs. In simple terms, the FMR method replaces each SIS function block (typically selected from a wide variety of blocks with very different and sometimes complex functions) with a simple, multi-state logic FMR failure block that can be analyzed in a Boolean-like manner. The failure blocks are then interconnected to create the overall FMR, which, once solved, yields the answer to the first question.
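As an illustration of such a multi-state failure block, the sketch below reconstructs reverse-reasoning rules for a two-input OR gate from its truth table. The concrete tables 400 of Fig. 4 are not reproduced in the text, so the state encoding ("0b"/"1b" standing in for the barred symbols) and the rule set are assumptions, not the patent's actual tables.

```python
# Hypothetical FMR-style failure block for a two-input OR gate. Signal
# states: "0"/"1" are correct low/high; "0b"/"1b" stand in for the barred
# symbols (spuriously low / spuriously high). Rules are reconstructed from
# the OR truth table, NOT copied from the tables 400 of Fig. 4.
OR_FAILURE_BLOCK = {
    # Output reads 1 but should be 0: at least one input is spuriously
    # high, and the other input should also be 0.
    "1b": [("1b", "0"), ("0", "1b"), ("1b", "1b")],
    # Output reads 0 but should be 1: at least one input is spuriously
    # low, and the other input reads 0 (correctly or not).
    "0b": [("0b", "0"), ("0", "0b"), ("0b", "0b")],
}

def input_causes(output_mode):
    """Reverse reasoning: input-state pairs that can produce output_mode."""
    return OR_FAILURE_BLOCK.get(output_mode, [])
```

Interconnecting such blocks for every FB in the logic, as the paragraph describes, yields the overall FMR to be solved.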

[0033] Fig. 3 shows an example of a small safety instrumented function (SIF) logic 300 that may be processed by SIFA, and which serves to describe the FMR method. The logic receives two analog signals from two pressure sensors 312, 314 and initiates a trip at its final output 330 if either sensor 312, 314 is healthy and showing high pressure (i.e., higher than its normal operating range), or if both sensors 312, 314 are detected faulty. Each logic FB (as labeled FB1-FB11 in Fig. 3) has its own corresponding FMR block. For example, the corresponding FMR blocks of logical OR gate 410 (FB2 in Fig. 3) and trip threshold block 420 (FB8 and FB9 in Fig. 3) are defined in the tables 400 as shown in Fig. 4. In Fig. 3, the symbols 0̄, 1̄, L, and H respectively represent undesired states of 0, 1, Low, and High in the SIF logic. In an example embodiment, the described SIFA may work through multiple stages of processing to identify the failure modes of the SIS logic 300. Through the process of analyzing SIS logic, a SIFA may handle multiple different files of different formats (e.g., 30 in one example), starting from logic diagrams, through various analysis and library files, to the final results.
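The trip condition described for Fig. 3 can be restated as executable logic. The sketch below collapses the eleven function blocks into a single expression; the threshold value and all names are invented here, and the FB-level wiring of Fig. 3 is not reproduced.

```python
# Hypothetical threshold; the patent gives no numeric values.
HIGH_TRIP = 100.0

def sif_output(p1, p2, fault1, fault2):
    """Trip decision for the two-sensor example SIF of Fig. 3 (collapsed)."""
    high1 = (not fault1) and p1 > HIGH_TRIP   # sensor 1 healthy and high
    high2 = (not fault2) and p2 > HIGH_TRIP   # sensor 2 healthy and high
    both_faulty = fault1 and fault2           # loss of both measurements
    return high1 or high2 or both_faulty      # True => initiate trip
```

Note that a high reading from a sensor detected as faulty is ignored, matching the "healthy and showing high pressure" wording above.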

[0034] Utilizing the SIFA, safety engineers and safety-related projects can save time and money, and enhance reliability, accuracy, and certainty in the delivery of safety-critical projects. The described SIFA may replace a combination of multiple analyses, such as Failure Mode and Effect Analysis (FMEA) and Fault Tree Analysis (FTA). Applying such methods requires extensive time and effort, and the final result will still be subject to uncertainty, inaccuracy, and human error.

[0035] Referring now to Fig. 8, a methodology 800 is illustrated that facilitates safety instrumented function analyzation. While the methodology is described as being a series of acts that are performed in a sequence, it is to be understood that the methodology may not be limited by the order of the sequence. For instance, unless stated otherwise, some acts may occur in a different order than what is described herein. In addition, in some instances, not all acts may be required to implement a methodology described herein.

[0036] The methodology may start at 802 and may include an act 804 of receiving a safety instrumented system (SIS) logic. The methodology may also include an act 806 of analyzing the SIS logic using failure mode reasoning to determine one or more SIS failure modes. Also, the methodology may include an act 808 of outputting the determined SIS failure modes. At 810 the methodology may end.

[0037] It should be appreciated that this described methodology may include additional acts and/or alternative acts corresponding to the features described previously with respect to the data processing system 100.

[0038] In an embodiment, the FMR process may include a plurality of stages. A first stage, a composition stage, may include describing failure modes for each function block. At the FB level, each FB is analyzed by defining how different failure modes at the FB input lead to failure modes at the FB output. From this analysis a failure block describing the failure modes of the FB is created. In a substitution stage the failure blocks of all the FBs in the SIS logic are logically interconnected to relate the SIS output to the SIS input. The logical interconnection may then be simplified in order to extract a list of SIS failure modes. The simplification may include removing redundancies such as Boolean logic of always true, always false and duplicates.
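The simplification step described above can be sketched as follows, with each failure mode represented as a set of event labels (a hypothetical representation chosen for this illustration): modes containing an always-false term are impossible, always-true terms contribute nothing, and duplicate modes collapse.

```python
# Illustrative simplification over failure modes given as sets of event
# labels; the representation and sentinel names are assumptions.
ALWAYS_TRUE, ALWAYS_FALSE = "TRUE", "FALSE"

def simplify(failure_modes):
    seen, result = set(), []
    for mode in failure_modes:
        if ALWAYS_FALSE in mode:
            continue                       # impossible conjunction: drop it
        reduced = frozenset(e for e in mode if e != ALWAYS_TRUE)
        if reduced not in seen:            # drop duplicate modes
            seen.add(reduced)
            result.append(reduced)
    return result
```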

[0039] For example, the methodology may include an act of calculating and outputting a probability of failure associated with the determined SIS failure modes. Further, the methodology may include an act of calculating and outputting a spurious trip rate associated with the determined SIS failure modes.

[0040] The described SIFA is configured to be accurate and inclusive, and to provide cost savings. Accurate, because it analyzes the logic and calculates PFD/STR through an automated algorithm; and inclusive, because it looks at all the combinations of SIS inputs that can result in an undesired output. Such tasks are not easy for humans to carry out, if possible at all, without greatly simplifying the model.

[0041] The described SIFA is operative to save time and effort by automating the analysis in a novel manner. All that the practitioner needs to do is import a copy of the SIS logic and run the SIFA analysis. In an embodiment, the SIS logic may be in the form of an XML file, from which data is easy to extract. By using FMR, SIFA reviews the logic, determines how the SIS behaves, provides the list of all failure modes, and calculates the final PFD/STR values accurately.

[0042] Fig. 9 illustrates a further example of a data processing system 900 with which one or more embodiments of the data processing systems (110) described herein may be implemented. For example, in some embodiments, the at least one processor 102 (e.g., a CPU) may be connected to one or more bridges/controllers/buses 902 (e.g., a north bridge, a south bridge). One of the buses, for example, may include one or more I/O buses such as a PCI Express bus. Also connected to various buses in the depicted example may be the processor memory 116 (e.g., RAM) and a graphics controller 904. The graphics controller 904 may generate a video signal that drives the display device 112. It should also be noted that the processor 102 in the form of a CPU may include a memory therein such as a CPU cache memory. Further, in some embodiments one or more controllers (e.g., graphics, south bridge) may be integrated with the CPU (on the same chip or die). Examples of CPU architectures include IA-32, x86-64, and ARM processor architectures.

[0043] Other peripherals connected to one or more buses may include communication controllers 906 (Ethernet controllers, WiFi controllers, cellular controllers) operative to connect to a network 908 such as a local area network (LAN), Wide Area Network (WAN), the Internet, a cellular network, and/or any other wired or wireless networks or communication equipment.

[0044] Further components connected to various busses may include one or more I/O controllers 910 such as USB controllers, Bluetooth controllers, and/or dedicated audio controllers (connected to speakers and/or microphones). It should also be appreciated that various peripherals may be connected to the I/O controller(s) (via various ports and connections) including the input devices 114, output devices 912 (e.g., printers, speakers) or any other type of device that is operative to provide inputs to and/or receive outputs from the data processing system.

[0045] Also, it should be appreciated that many devices referred to as input devices or output devices may both provide inputs and receive outputs of communications with the data processing system. For example, the processor 102 may be integrated into a housing (such as a tablet) that includes a touch screen that serves as both an input and display device. Further, it should be appreciated that some input devices (such as a laptop) may include a plurality of different types of input devices (e.g., touch screen, touch pad, and keyboard). Also, it should be appreciated that other peripheral hardware 914 connected to the I/O controllers 910 may include any type of device, machine, sensor, or component that is configured to communicate with a data processing system.

[0046] Additional components connected to various busses may include one or more storage controllers 916 (e.g., SATA). A storage controller may be connected to a storage device data store 118 such as one or more storage drives and/or any associated removable media. Also, in some examples, a data store such as an NVMe M.2 SSD may be connected directly to an I/O bus 902 such as a PCI Express bus.

[0047] A data processing system in accordance with an embodiment of the present disclosure may include an operating system 918. Such an operating system may employ a command line interface (CLI) shell and/or a graphical user interface (GUI) shell. The GUI shell permits multiple display windows to be presented in the graphical user interface simultaneously, with each display window providing an interface to a different application or to a different instance of the same application. A cursor or pointer in the graphical user interface may be manipulated by a user through a pointing device such as a mouse or touch screen. The position of the cursor/pointer may be changed and/or an event, such as clicking a mouse button or touching a touch screen, may be generated to actuate a desired response. Examples of operating systems that may be used in a data processing system may include Microsoft Windows, Linux, UNIX, iOS, macOS, and Android operating systems.

[0048] The data processing system 900 may also include or be operative to communicate with one or more data stores 104 that correspond to databases 920. The processor 102 may be configured to manage, retrieve, generate, use, revise, and store data, executable instructions, and/or other information described herein from/in the database 920. Examples of a database may include a file and/or a record stored in a relational database (e.g., Oracle, Microsoft SQL Server), which may be executed by the processor 102 or may execute in a second data processing system connected via a network 908.

[0049] It should be understood that the data processing system 900 may directly or over the network 908 be connected with one or more other data processing systems such as a server 922 (which may in combination correspond to a larger data processing system). For example, a larger data processing system may correspond to a plurality of smaller data processing systems implemented as part of a distributed system in which processors associated with several smaller data processing systems may be in communication by way of one or more network connections and may collectively perform tasks described as being performed by a single larger data processing system. Thus, it is to be understood that when referring to a data processing system, such a system may be implemented across several data processing systems organized in a distributed system in communication with each other via a network.

[0050] In addition, it should be appreciated that data processing systems may include virtual machines in a virtual machine architecture or cloud environment that execute the executable instructions. For example, the processor and associated components may correspond to the combination of one or more virtual machine processors of a virtual machine operating in one or more physical processors of a physical data processing system. Examples of virtual machine architectures include VMware ESXi, Microsoft Hyper-V, Xen, and KVM. Further, the described executable instructions may be bundled as a container that is executable in a containerization environment such as Docker.

[0051] Also, it should be noted that the processor described herein may correspond to a remote processor located in a data processing system such as a server that is remote from the display and input devices described herein. In such an example, the described display device and input device may be included in a client data processing system (which may have its own processor) that communicates with the server (which includes the remote processor) through a wired or wireless network (which may include the Internet). In some embodiments, such a client data processing system, for example, may execute a remote desktop application or may correspond to a portal device that carries out a remote desktop protocol with the server in order to send inputs from an input device to the server and receive visual information from the server to display through a display device. Examples of such remote desktop protocols include Teradici's PCoIP, Microsoft's RDP, and the RFB protocol. In another example, such a client data processing system may execute a web browser or thin client application. Inputs from the user may be transmitted from the web browser or thin client application to the server, where they are evaluated and rendered, and an image (or series of images) may be sent back to the client data processing system to be displayed by the web browser or thin client application. Also, in some examples, the remote processor described herein may correspond to a virtual processor of a virtual machine executing in a physical processor of the server.
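The thin-client exchange described above can be sketched, purely for illustration, as a pair of functions in which the client forwards a user input to the server and displays the server's rendered reply. The function names and JSON message shape are hypothetical; a real deployment would carry this exchange over a remote desktop protocol or HTTP rather than direct function calls:

```python
import json

def server_evaluate(request: str) -> str:
    """Server side: evaluate the forwarded input and render a response
    for the client to display (illustrative stand-in for server rendering)."""
    payload = json.loads(request)
    result = {"display": f"analyzed: {payload['input']}"}
    return json.dumps(result)

def client_send(user_input: str) -> str:
    """Client side: serialize the input, 'transmit' it to the server,
    and return the visual information to be displayed."""
    reply = server_evaluate(json.dumps({"input": user_input}))
    return json.loads(reply)["display"]

print(client_send("SIS logic 122"))  # -> analyzed: SIS logic 122
```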

[0052] Those of ordinary skill in the art will appreciate that the hardware and software depicted for the data processing system may vary for particular implementations. The depicted examples are provided for the purpose of explanation only and are not meant to imply architectural limitations with respect to the present disclosure. Also, those skilled in the art will recognize that, for simplicity and clarity, the full structure and operation of all data processing systems suitable for use with the present disclosure are not being depicted or described herein. Instead, only so much of a data processing system as is unique to the present disclosure or necessary for an understanding of the present disclosure is depicted and described. The remainder of the construction and operation of the data processing system 900 may conform to any of the various current implementations and practices known in the art.

[0053] As used herein, the terms "component" and "system" are intended to encompass hardware, software, or a combination of hardware and software. Thus, for example, a system or component may be a process, a process executing on a processor, or a processor. Additionally, a component or system may be localized on a single device or distributed across several devices.

[0054] Also, it should be understood that the words or phrases used herein should be construed broadly, unless expressly limited in some examples. For example, the terms "include" and "comprise," as well as derivatives thereof, mean inclusion without limitation. The singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Further, the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The term "or" is inclusive, meaning and/or, unless the context clearly indicates otherwise. The phrases "associated with" and "associated therewith," as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like.

[0055] Also, although the terms "first", "second", "third" and so forth may be used herein to refer to various elements, information, functions, or acts, these elements, information, functions, or acts should not be limited by these terms. Rather these numeral adjectives are used to distinguish different elements, information, functions or acts from each other. For example, a first element, information, function, or act could be termed a second element, information, function, or act, and, similarly, a second element, information, function, or act could be termed a first element, information, function, or act, without departing from the scope of the present disclosure.

[0056] In addition, the term "adjacent to" may mean: that an element is relatively near to but not in contact with a further element; or that the element is in contact with the further element, unless the context clearly indicates otherwise. Further, the phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise.

[0057] Although an exemplary embodiment of the present disclosure has been described in detail, those skilled in the art will understand that various changes, substitutions, variations, and improvements disclosed herein may be made without departing from the spirit and scope of the disclosure in its broadest form.

[0058] None of the description in the present application should be read as implying that any particular element, step, act, or function is an essential element, which must be included in the claim scope: the scope of patented subject matter is defined only by the allowed claims. Moreover, none of these claims are intended to invoke a means plus function claim construction unless the exact words "means for" are followed by a participle.