

Title:
METHOD OF IDENTIFYING ANOMALIES
Document Type and Number:
WIPO Patent Application WO/2015/121176
Kind Code:
A1
Abstract:
A method (10) of identifying anomalies in a monitored system includes acquiring input data from a plurality of sensors in the monitored system. Preprocessing the acquired data to prepare it for modeling leaves a first data subset that feeds (20) into a normal Gaussian mixture model built using normal operating conditions of the monitored system. Removing data flagged as anomalous by the normal Gaussian mixture model leaves a second data subset that is compared to at least one threshold (22). If the comparison indicates that the second data subset contains anomalies (24), then the second data subset feeds (26,28) into at least one of a set of asset performance Gaussian mixture models. Identifying which data contribute to an abnormality in the monitored system leaves a third data subset. Post-processing the third data subset may extract anomalies in the monitored system.

Inventors:
GREEN DONNA LOUISE (GB)
LARDER BRIAN DAVID (GB)
KNIGHT PETER ROBIN (GB)
THUONG OLIVER (GB)
Application Number:
PCT/EP2015/052559
Publication Date:
August 20, 2015
Filing Date:
February 06, 2015
Assignee:
GE AVIAT SYSTEMS LTD (GB)
International Classes:
G05B23/02
Foreign References:
US20130116996A12013-05-09
FR2871949A12005-12-23
US20040002776A12004-01-01
US20130197854A12013-08-01
US20080086283A12008-04-10
Attorney, Agent or Firm:
WILLIAMS, Andrew (The Ark, 201 Talgarth Road, Hammersmith, London W6 8BJ, GB)
CLAIMS:

1. A method (10) of identifying anomalies in a monitored system, the method comprising:
acquiring input data (12) from a plurality of sensors in the monitored system;
preprocessing (14, 16) the acquired data to prepare it for modeling, and leaving a first data subset;
feeding (20) the first data subset into a normal Gaussian mixture model built using normal operating conditions of the monitored system, and identifying (22) data flagged as anomalous by the normal Gaussian mixture model, leaving a second data subset;
comparing (22) the second data subset to at least one threshold;
if the comparison indicates that the second data subset contains anomalies, then feeding (26, 28) the second data subset into at least one of a set of asset performance Gaussian mixture models, and identifying which data contribute to an abnormality in the monitored system, leaving a third data subset; and
post-processing (30) the third data subset to extract anomalies in the monitored system.

2. The method (10) of claim 1, wherein the preprocessing step includes deriving (14) parameters from the acquired data.

3. The method (10) of either claim 1 or 2, wherein the preprocessing step includes normalizing (16) the acquired data.

4. The method (10) of claim 3, further comprising extracting (18) features from the normalized data by subtracting the median of the normalized data over a selected window of data.

5. The method (10) of any preceding claim, wherein the asset performance Gaussian mixture models include an operating condition Gaussian mixture model built using data affected by operating conditions of the monitored system, and a non-operating condition Gaussian mixture model built using data not affected by operating conditions of the monitored system.

6. The method (10) of claim 5, wherein the comparing step (22) includes determining if the second data subset includes data affected by an operating condition, and if so, then feeding the second data subset into the operating condition Gaussian mixture model and then feeding the second data subset into the non-operating condition Gaussian mixture model, and if not, then feeding the second data subset into the non-operating condition Gaussian mixture model.

7. The method (10) of any preceding claim, wherein the post-processing step (30) includes comparing the third data subset to at least one threshold.

8. The method (10) of any preceding claim, wherein the post-processing step (30) includes at least one of removing outliers from or smoothing (32) the third data subset.

9. The method (10) of any preceding claim, further comprising checking sensors for the extracted anomalies to determine if a sensor is a source of an anomaly associated with the sensor.

10. The method (10) of any preceding claim, further comprising feeding at least one extracted anomaly through a step detection algorithm to identify timing of a fault due to the at least one extracted anomaly.

11. The method (10) of any preceding claim, further comprising feeding the extracted anomalies through a set of hybrid step fault Bayesian networks and influence networks to classify the extracted anomalies according to fault type, and determining the probabilities that a given extracted anomaly is caused by a given fault type.

12. The method (10) of claim 11, wherein the order in which the set of hybrid step fault Bayesian networks and influence networks are run is configured so that later networks can be suppressed based on the results of earlier ones.

13. The method (10) of either claim 11 or 12, further comprising comparing the given extracted anomaly to at least one threshold, and if the given extracted anomaly meets the at least one threshold, sending a message identifying the fault and alerting a user to the probability of a fault in the monitored system.

14. The method (10) of any of claims 11 to 13, further comprising feeding the extracted anomalies to a hybrid trend fault Bayesian network to determine a rate of a fault in the monitored system.

Description:
METHOD OF IDENTIFYING ANOMALIES

BACKGROUND

Contemporary aircraft include gas turbine engine systems for use within the aircraft. Currently, airlines and maintenance personnel perform routine maintenance on the engine systems to replace parts that exceed their life limits and to inspect parts for defects or failures. Additionally, data collection systems may gather information from the engine systems to identify faults. The gathered information may inform the pilot of events such as temperature being too high or oil levels being too low. In this way, based on pilot discretion, fault occurrences may be recorded manually.

BRIEF DESCRIPTION

One aspect of the invention relates to a method of identifying anomalies in a monitored system. The method comprises acquiring input data from a plurality of sensors in the monitored system; preprocessing the acquired data to prepare it for modeling, and leaving a first data subset. The first data subset is fed into a normal Gaussian mixture model built using normal operating conditions of the monitored system, and data flagged as anomalous by the normal Gaussian mixture model is removed, leaving a second data subset. The second data subset is compared to at least one threshold. If the comparison indicates that the second data subset contains anomalies, then the second data subset is fed into at least one of a set of asset performance Gaussian mixture models. The method identifies which data contribute to an abnormality in the monitored system, leaving a third data subset. The method post-processes the third data subset to extract anomalies in the monitored system.

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings:

FIG. 1 is a flowchart showing a method of identifying anomalous data according to an embodiment of the invention.

FIG. 2 is a flowchart showing a method of diagnosing a fault causing anomalous data according to an embodiment of the invention.

DETAILED DESCRIPTION

In the background and the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the technology described herein. It will be evident to one skilled in the art, however, that the exemplary embodiments may be practiced without these specific details. In other instances, structures and devices are shown in diagram form in order to facilitate description of the exemplary embodiments. The exemplary embodiments are described with reference to the drawings. These drawings illustrate certain details of specific embodiments that implement a module, method, or computer program product described herein. However, the drawings should not be construed as imposing on the described embodiments any limitations that may be present in the drawings. The method and computer program product may be provided on any machine-readable media for accomplishing their operations. The embodiments may be implemented using an existing computer processor, or by a special purpose computer processor incorporated for this or another purpose, or by a hardwired system.

As noted above, embodiments described herein may include a computer program product comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media, which can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of machine-executable instructions or data structures and that can be accessed by a general purpose or special purpose computer or other machine with a processor. When information is transferred or provided over a network or another communication connection (either hardwired, wireless, or a combination of hardwired and wireless) to a machine, the machine properly views the connection as a machine-readable medium. Thus, any such connection is properly termed a machine-readable medium. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.

Embodiments will be described in the general context of method steps that may be implemented in one embodiment by a program product including machine-executable instructions, such as program code, for example, in the form of program modules executed by machines in networked environments. Generally, program modules include routines, programs, objects, components, data structures, etc. that have the technical effect of performing particular tasks or implementing particular abstract data types. Machine-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the method disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.

Embodiments may be practiced in a networked environment using logical connections to one or more remote computers having processors. Logical connections may include a local area network (LAN) and a wide area network (WAN) that are presented here by way of example and not limitation. Such networking environments are commonplace in office-wide or enterprise-wide computer networks, intranets and the internet and may use a wide variety of different communication protocols. Those skilled in the art will appreciate that such network computing environments will typically encompass many types of computer system configurations, including personal computers, hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination of hardwired and wireless links) through a communication network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.

An exemplary system for implementing the overall or portions of the exemplary embodiments might include a general purpose computing device in the form of a computer, including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit. The system memory may include read only memory (ROM) and random access memory (RAM). The computer may also include a magnetic hard disk drive for reading from and writing to a magnetic hard disk, a magnetic disk drive for reading from or writing to a removable magnetic disk, and an optical disk drive for reading from or writing to a removable optical disk such as a CD-ROM or other optical media. The drives and their associated machine-readable media provide nonvolatile storage of machine-executable instructions, data structures, program modules and other data for the computer.

Beneficial effects of the method disclosed in the embodiments include the early detection of abnormal system behavior applicable to assets that may include multiple complex systems. Consequently, implementation of the method disclosed in the embodiments may reduce repair and maintenance costs associated with the management of a fleet of assets. The inspection and repair of assets with anomalous system behavior may occur before further damage to the asset and may allow for efficient fleet maintenance by increasing the lead-time for scheduling repair and maintenance activities. The method may also provide an indication of what or where the fault is, resulting in an inspection that may be directed at the most likely source of the fault. Rather than having to inspect the complete asset, maintenance plans may be focused, saving time.

The objective of anomaly detection is to identify abnormal system behaviour that might be indicative of a fault in the monitored system. Anomaly detection may be used in applications where there is no large library of tagged or labelled fault data with which to train a model. Anomaly detection may include building a model of normal behaviour using a training data set and then assessing new data based on computing a fit between the new data and the model. If the fit is not within a threshold of the model, the data is flagged as anomalous. The modelling approach typically requires that a set of normal data is available to construct a model of normal behaviour. However, modelling with in-service data (that is, collecting data to be used as both test and training data) may require additional processing to prevent corruption of the model by anomalous training data. For example, with a fleet of aircraft assets, due to issues such as a lack of feedback from the repair and overhaul process, undetected instrumentation problems, maintenance interventions, etc., any database of historical in-service data may contain data with unknown anomalies.

Anomaly models are built from a set of input data, with input parameters selected according to the particular monitoring requirements for the model. The anomaly models are based on Gaussian mixture models and provide detailed density mapping of the data. Gaussian mixture models allow complex distributions to be modelled by summing a number of Gaussian distributions. A Gaussian distribution $d(x)$ may be described by:

$$d(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)$$

where $\mu$ is the mean (i.e. the location of the peak) and $\sigma^2$ is the variance (i.e. the measure of the width of the distribution). Multiple Gaussian distributions may then be summed as in:

$$p(x) = \sum_{i=1}^{K} w_i \, d_i(x)$$

each with a weight $w_i$ corresponding to the number of samples represented by that distribution. In multi-dimensional problems, the individual distributions are often called clusters, since they represent a subset of the data in terms of density distribution. The clusters in a model can rotate to represent correlations between parameters; the rotation is defined by a cluster covariance matrix. The models may then be adapted to reject any abnormalities existing in the training data. Automatic model adaptation detects regions in the cluster space that are not representative of normal behaviour and then removes these clusters. The adaptation process is complex but is controlled by a simple tuning parameter that specifies the percentage of the data to be removed (typically about 5%). The final model provides a poor fit to samples in the training data that are outliers. The automated model adaptation process enables the building of models using in-service data that contains various unknown anomalies.
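
By way of illustration only, the following sketch shows how such an adapted model might be built. It assumes scikit-learn's GaussianMixture; the helper name and the trim-and-refit loop are assumptions of the sketch, one plausible reading of the adaptation step rather than the disclosed algorithm.

```python
# Illustrative sketch only: fit a GMM, drop the worst-fitting ~5% of
# training samples (possible unknown anomalies), and refit on the rest.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_adapted_gmm(X, n_components=8, trim_fraction=0.05):
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=0).fit(X)
    log_lik = gmm.score_samples(X)            # per-sample log likelihood
    cutoff = np.percentile(log_lik, 100.0 * trim_fraction)
    X_kept = X[log_lik > cutoff]              # remove low-density outliers
    return GaussianMixture(n_components=n_components,
                           covariance_type="full", random_state=0).fit(X_kept)

# Two correlated parameters under normal operation, plus a few injected
# anomalies that the adaptation step should reject.
rng = np.random.default_rng(0)
normal = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=2000)
anomalies = rng.uniform(-8, 8, size=(50, 2))
model = fit_adapted_gmm(np.vstack([normal, anomalies]))
```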

The resulting models are sophisticated statistical representations of the data generated from in-service experience, fusing sets of input parameters to reduce a complex data set into a single parameter time-history, called a log likelihood or fitness score trace. The fitness score measures the degree of abnormality in the input data and mirrors the shape of any significant data trends. The fitness score represents a goodness of fit criterion, indicating how well data fits a model of normality. Therefore, the fitness score has a decreasing trend as data becomes increasingly abnormal.
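
As a minimal sketch of such a trace, assuming the same scikit-learn model type as above, the per-sample log likelihood can serve as the fitness score; the simulated data and the simulated anomaly are illustrative assumptions.

```python
# Sketch: reduce a multi-parameter data set to a single fitness-score
# (log likelihood) trace that decreases as data become abnormal.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
cov = [[1.0, 0.8], [0.8, 1.0]]
train = rng.multivariate_normal([0, 0], cov, size=2000)
model = GaussianMixture(n_components=4, random_state=0).fit(train)

test = rng.multivariate_normal([0, 0], cov, size=500)
test[400:] += [4.0, -4.0]                      # simulated developing anomaly
fitness = model.score_samples(test)            # one value per time sample
print(f"median fitness, normal:   {np.median(fitness[:400]):.2f}")
print(f"median fitness, abnormal: {np.median(fitness[400:]):.2f}")
```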

FIG. 1 is a flowchart showing a method 10 of identifying anomalous data according to an embodiment of the invention. Initially, a monitoring system, such as an off-line computer diagnostics system, integrated with the method 10 acquires input data 12 from one or more sensors of a monitored system. The input data may be, for example, sensor data from an aircraft engine system, though sensors and corresponding sensor data relating to other monitored aircraft systems including avionics, power and mechanical systems may be used. While described below in the context of aircraft systems, the method 10 of identifying anomalous data is more generally applicable to machine health management, human health management, data exploration, decision support tasks, etc. That is, any system integrated with sensors capable of generating data affected by faults of that system may be monitored per an embodiment of the monitoring system.

A processor of the monitoring system may then take steps to preprocess the acquired data to prepare the data for modeling. The preprocessing steps may include deriving parameters 14 from the acquired data. For example, data from temperature sensors may be averaged to determine an average temperature parameter. Alternatively, the processor may compare data from different sensors. For example, the processor may calculate the divergence between engine exhaust temperature sensors for two different engines for use as a parameter. An additional preprocessing step may include a step of normalization 16. The step of normalization 16 may apply to the acquired data, the derived parameters or both. For example, temperature, pressure, spool speed and flow rate data may be corrected to international standard atmosphere (ISA) conditions.
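
For illustration, a sketch of these preprocessing steps follows. The EGT-divergence parameter and the standard theta correction are illustrative assumptions; the disclosure names the concepts but does not prescribe these formulas.

```python
# Sketch of the preprocessing step: derive parameters from raw sensor data
# and normalize to ISA conditions. Formulas are illustrative assumptions.
import numpy as np

ISA_SEA_LEVEL_K = 288.15

def egt_divergence(egt_engine_1, egt_engine_2):
    """Derived parameter: divergence between exhaust gas temperature
    sensors on two different engines."""
    return np.asarray(egt_engine_1) - np.asarray(egt_engine_2)

def corrected_spool_speed(n_rpm, ambient_temp_k):
    """One common normalization: N / sqrt(theta), where
    theta = T_ambient / T_ISA at sea level."""
    theta = np.asarray(ambient_temp_k) / ISA_SEA_LEVEL_K
    return np.asarray(n_rpm) / np.sqrt(theta)

egt1 = np.array([810.0, 812.5, 815.0])    # deg C, engine 1
egt2 = np.array([808.0, 809.0, 840.0])    # deg C, engine 2 (last point odd)
print(egt_divergence(egt1, egt2))          # -> [  2.    3.5 -25. ]
print(corrected_spool_speed(9500.0, np.array([268.0, 288.15, 303.0])))
```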

Subsequent to the preprocessing of the acquired data, the processor may then extract features 18 from the data, the derived parameters and/or the normalized data. For example, trends in the data may be identified and removed by subtracting the median of a selected window of the data. The processor may employ other signal processing techniques to minimize or remove outliers or otherwise smooth the data resulting in a first data subset prepared for a step of modeling.
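
A minimal sketch of this feature-extraction step, assuming a centred sliding window whose length is an arbitrary illustrative choice:

```python
# Sketch: remove slow trends by subtracting the rolling median over a
# selected window, leaving short-term deviations as features.
import numpy as np

def subtract_rolling_median(x, window=25):
    """Return x minus its centred rolling median (edges use a
    truncated window)."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    med = np.array([np.median(x[max(0, i - half):i + half + 1])
                    for i in range(len(x))])
    return x - med

t = np.arange(200)
signal = 0.02 * t + np.random.default_rng(1).normal(0, 0.1, 200)
signal[150:] += 1.0                       # a step anomaly riding on the trend
features = subtract_rolling_median(signal)  # trend removed, step preserved
```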

The processor may then, at step 20, feed the first data subset into a Gaussian mixture model built using normal operating conditions of the monitored system. For example, a model built upon the normal operating conditions of an aircraft engine may include variables describing aircraft altitude and speed along with the air temperature. By modeling the first data subset with a model based on normal operating conditions of the system, the processor may build a filter that may be used to identify or remove data collected during abnormal operating conditions of the monitored system. For example, the processor may flag data collected when the aircraft was flying at an unconventional altitude, speed or both. The Gaussian mixture model may preferably be formed as a normal Gaussian mixture model, though other distributions may be contemplated; for example, the model may be formed as a bimodal Gaussian mixture model.

Based on the comparison of the first data subset and the model of the operating condition, the processor at step 22 may identify and flag data acquired during abnormal operating conditions. That is, when the data was collected during abnormal operating conditions, the first data subset may not present a good fit to the model of the normal operating condition. To determine whether the data presents a good fit to the model, the processor may compare the goodness of fit of the data to the model against one or more thresholds. The resulting data, including the data flagged as anomalous by comparison with the normal Gaussian mixture model, forms a second data subset.
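
A sketch of this flagging step follows; the 1st-percentile threshold learned from normal data is an assumption for illustration.

```python
# Illustrative sketch: flag samples whose goodness of fit (log likelihood)
# to the normal-operating-conditions model falls below a threshold.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
train = rng.normal(size=(2000, 3))              # e.g. altitude, speed, temp
model = GaussianMixture(n_components=4, random_state=0).fit(train)
threshold = np.percentile(model.score_samples(train), 1.0)

new = rng.normal(size=(100, 3))
new[90:] += 6.0                                 # unconventional flight regime
flagged = model.score_samples(new) < threshold  # True = abnormal condition
# The data plus these flags form the second data subset; downstream asset
# performance models may be fed the subset with flagged points removed.
filtered = new[~flagged]
```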

The processor may then feed the second data subset into a set of asset performance models. The set of asset performance models may include models where the operating condition of the monitored system may affect the relationships between the values of the data parameters and models where the operating condition of the monitored system is irrelevant to the relationships between the values of the data parameters. The processor, at step 24, determines if the comparison at step 22 indicates that the second data subset contains anomalies in the operating condition of the monitored system. If so, then the processor at step 26 feeds the second data subset without the data points collected during the abnormal operating condition into at least one of a set of asset performance Gaussian mixture models. The asset performance Gaussian mixture models at step 26 include an operating condition Gaussian mixture model built using data affected by the operating conditions of the monitored system. The processor at step 28 feeds the second data subset into at least one of a set of asset performance Gaussian mixture models built using data not affected by operating conditions of the monitored system.
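
The routing at steps 24 to 28 (and in claim 6) might be sketched as follows; the model objects and the boolean flag are placeholders assumed for illustration.

```python
# Sketch of routing the second data subset through the asset performance
# Gaussian mixture models: data affected by an operating condition goes
# through the operating-condition model and then the non-operating-condition
# model; otherwise only the latter runs.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
oc_model = GaussianMixture(2, random_state=0).fit(rng.normal(size=(500, 2)))
non_oc_model = GaussianMixture(2, random_state=0).fit(rng.normal(size=(500, 2)))

def run_asset_performance_models(data, affected_by_operating_condition):
    scores = {}
    if affected_by_operating_condition:
        scores["operating_condition"] = oc_model.score_samples(data)
    scores["non_operating_condition"] = non_oc_model.score_samples(data)
    return scores

second_subset = rng.normal(size=(50, 2))
print(run_asset_performance_models(second_subset, True).keys())
```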

Based on the comparison of the second data subset and the set of asset performance models at steps 26 and 28, the processor may identify which data contribute to an abnormality in the monitored system, leaving a third data subset. That is, when the data was collected while an aspect of the monitored system was performing anomalously, the second data subset will not present a good fit to the model of the asset performance. As opposed to the output of the operating condition model at step 20, where the asset may be operating outside its normal mode of operation, the output of the asset performance models may indicate that the asset is operating within its normal mode of operation but performing abnormally. The resulting data forms a third data subset.

Additional post-processing of the data may determine whether the data presents a good fit to the model by comparing the goodness of fit, based on the fitness score, of the data to the models against one or more thresholds at step 30. Further, the processor at step 32 may employ other signal processing techniques to minimize or remove outliers or otherwise smooth the data to better identify which data from the raw input data set are anomalous. The processor calculates residuals or measures of abnormality for the parameters (that is, the raw data from step 12) and the derived parameters (from step 14) to output, at step 34, a score of the overall measure of the monitored system and a measure for each parameter. In this way, the method of identifying anomalies 10 may determine an abnormally operating monitored system and an abnormally operating element in the monitored system. For example, one engine on an aircraft may be determined to be operating abnormally while the other three engines of the aircraft may be determined to be operating normally.

The processor may convert the anomaly model fitness score into a probability of anomaly measure, which is a normalized probability measure that ranges between zero and one. For each model, there is a probability of anomaly distribution, which is an extreme value distribution. The processor may map a fitness score value onto this distribution and determine a value indicative of the probability of anomaly. Most fitness score values will result in a probability of anomaly of zero because most data will be normal. Because the probability of anomaly values range from zero to one, the probability of anomaly provides a measure that is normalized across models, enabling a comparison between model outputs. Consequently, such a normalized metric may be fed into a secondary process, such as automated reasoning, to determine the most likely fault that caused the anomaly.
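
One plausible construction of this conversion is sketched below: fit a left-tail Gumbel (an extreme value distribution) to block minima of the fitness score over normal data, then map new scores through its tail. The disclosure does not give the exact recipe, so the block size and the Gumbel choice are assumptions.

```python
# Sketch: convert a fitness score into a probability of anomaly in [0, 1]
# via an extreme value distribution fitted to block minima of normal scores.
import numpy as np
from scipy.stats import gumbel_l
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
train = rng.normal(size=(5000, 2))
model = GaussianMixture(4, random_state=0).fit(train)

# Block minima of the fitness score over normal data -> extreme value fit.
scores = model.score_samples(train).reshape(-1, 50)
loc, scale = gumbel_l.fit(scores.min(axis=1))

def probability_of_anomaly(fitness):
    """Most normal scores map to ~0; deeply abnormal scores approach 1."""
    return 1.0 - gumbel_l.cdf(fitness, loc, scale)

print(probability_of_anomaly(model.score_samples(train[:5])))   # ~0
print(probability_of_anomaly(np.array([-50.0])))                 # ~1
```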

FIG. 2 is a flowchart showing a method 100 of diagnosing a fault causing anomalous data according to an embodiment of the invention. Initially, at step 110, the data (along with the score of the overall measure of the monitored system and a measure of each parameter output at step 34 in FIG. 1) is input to the processor of a monitoring system. The processor may perform a number of logical sensor checks at step 112 to determine if a faulty sensor caused the anomaly in the data. If the processor determines that a faulty sensor caused the anomaly in the data, then at step 114, the processor determines that no further processing of the data is necessary and proceeds to step 138, where the processor issues an alert identifying to a user that a sensor fault has occurred. For example, if the processor determines that a raw data value from a sensor, such as a temperature sensor reading 1000 degrees higher than normal, is outside a predefined limit, or that a built-in sensor test fails, the processor may identify the sensor to a user via an automatically generated email.

If the processor determines at step 114 that the anomalous data is not caused by a sensor fault, then the processor may feed the extracted anomalies through a set of probabilistic reasoning networks to diagnose the most likely cause of the detected anomaly. Probabilistic reasoning networks may include Bayesian networks and influence networks to classify the extracted anomalies according to fault type. Generally, probabilistic reasoning networks are a type of statistical model that represents a set of random variables and their conditional dependencies graphically. Via the probabilistic reasoning networks, the processor may determine the probabilities that an extracted anomaly is caused by a certain fault type. In this way, the processor may initiate a sequence of steps to determine the timing of a fault, that is, whether the fault occurs instantaneously or progresses over a duration of time.
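
The logical sensor checks at step 112 might be sketched as simple range tests; the limits and sensor names below are illustrative assumptions.

```python
# Sketch: decide whether a faulty sensor, rather than the asset, explains
# the anomaly, using predefined raw-value limits per sensor.
RANGE_LIMITS = {"egt_c": (-60.0, 1200.0), "oil_press_psi": (0.0, 120.0)}

def sensor_fault(readings):
    """Return the names of sensors whose raw values fall outside their
    predefined limits (e.g. a temperature 1000 degrees above normal)."""
    return [name for name, value in readings.items()
            if name in RANGE_LIMITS
            and not RANGE_LIMITS[name][0] <= value <= RANGE_LIMITS[name][1]]

print(sensor_fault({"egt_c": 1810.0, "oil_press_psi": 45.0}))  # ['egt_c']
```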

The processor may perform preprocessing operations at step 116 prior to feeding the extracted anomalies into the Bayesian and influence networks. The pre-processing operations at step 116 may include parameterization of the raw data. For example, the processor may compare absolute temperature measurements from one or more temperature sensors and form a parameter based on the comparison.

The processor may then feed the selected parameters into a multi-parameter step detection algorithm at step 118 to determine if a fault associated with the anomaly data occurred at a rate commensurate with that of the sample rate of the data. That is, values of the anomaly data increase (or decrease) by a substantial value across a sample duration during a step event. The multi-parameter step detection algorithm at step 118 characterizes the anomaly data by detecting a substantial rate of change of the values of one or more selected parameters of the anomaly data.
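
A generic step detector of this kind is sketched below; the before/after median-shift test and its window length are assumptions, not the disclosed multi-parameter algorithm.

```python
# Sketch: flag the sample where the median after it differs from the median
# before it by more than a set number of robust standard deviations.
import numpy as np

def detect_step(x, half_window=20, n_sigmas=5.0):
    """Return the index of the largest before/after median shift if it is
    significant, else None."""
    x = np.asarray(x, dtype=float)
    sigma = 1.4826 * np.median(np.abs(x - np.median(x)))   # robust scale
    best_i, best_shift = None, 0.0
    for i in range(half_window, len(x) - half_window):
        shift = abs(np.median(x[i:i + half_window]) -
                    np.median(x[i - half_window:i]))
        if shift > best_shift:
            best_i, best_shift = i, shift
    return best_i if best_shift > n_sigmas * sigma else None

x = np.random.default_rng(5).normal(0, 0.1, 300)
x[180:] += 1.5                                  # injected step event
print(detect_step(x))                           # -> index near 180
```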

The processor may then feed the anomaly data into a step suppression model at step 120. The step suppression model at step 120 is a probabilistic reasoning network that may include hybrid Bayesian networks and influence networks. The step suppression model at step 120 represents a model where conditions or events may affect the monitored system to generate step responses that are not indicative of a fault in the monitored system. In other words, the step suppression model at step 120 models potential false alarms where anomaly data was not caused by a fault. Based on the results of the step suppression model at step 120, the processor at step 122 may determine the parameters and timestamp for the detected step. The processor may then perform a step 124 of thresholding where the goodness of fit for the anomaly data and the step suppression model determine if a non-fault event occurred. If the processor determines that a non-fault event occurred at step 126, the processor determines that no further processing of the data is necessary and proceeds to step 138.

If, at step 126, the processor does not determine that a non-fault event occurred, then the processor may feed the anomaly data into a step fault model at step 128. The step fault model at step 128 is another probabilistic reasoning network that may include hybrid Bayesian networks and influence networks. The step fault model at step 128 represents a model where conditions or events may affect the monitored system to generate step responses that are indicative of a fault in the monitored system. Based on the results of the step fault model at step 128, the processor at step 130 may determine the parameters and timestamp for the detected fault. For the remaining anomaly data that is not indicative of a step event, the processor may feed the anomaly data into a trend rate estimator at step 132 that determines the rate (over multiple samples of data) at which an extracted anomaly develops. The processor then feeds the extracted anomaly into a hybrid trend fault Bayesian network or influence network to determine the rate of the corresponding fault in the monitored system at step 134. Based on the results of the trend fault model at step 134, the processor at step 136 may determine the parameters, timestamp and duration for the detected fault.
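
The trend rate estimator at step 132 might be sketched as a robust slope fit; the Theil-Sen-style pairwise-median slope is an illustrative choice, not the disclosed estimator.

```python
# Sketch: estimate how fast an extracted anomaly develops over multiple
# samples by taking the median of all pairwise slopes.
import numpy as np

def trend_rate(x):
    """Median of pairwise slopes, in units of x per sample."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x), k=1)
    return np.median((x[j] - x[i]) / (j - i))

x = 0.03 * np.arange(100) + np.random.default_rng(6).normal(0, 0.05, 100)
print(f"estimated rate: {trend_rate(x):.3f} per sample")   # ~0.03
```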

While the above description describes three probabilistic reasoning networks run in sequence for determining information relating to faults, additional probabilistic reasoning networks may be implemented. Any probabilistic reasoning networks that have been configured according to the method 100 to suppress other networks are run first, and then, depending on the results of those networks (i.e. whether the probabilities for an anomaly exceed a predetermined threshold), further networks may be run against the anomaly data. Each probabilistic reasoning network is trained to output a probability that the anomaly data input to the network was caused by a particular fault. The network builds its underlying model by a combination of learning from previous data characterizing the fault and a priori knowledge. For each fault network run, the processor will determine the probability that the anomalous data was caused by the fault modeled by the network. Configurable thresholds are set based on the probabilities of anomaly, and alerts are generated at step 138 that display the most likely faults. Alerts may also be generated where the data did not match any of the known faults. The alerts may deliver information generated by the feature extractors, such as which parameters have significant steps or trends in them. For example, a summary email may be sent containing any engine serial numbers showing anomalous data on a particular day and which either have a high probability of being a fault or exhibit significant features that may have caused the anomaly, such as a step change in several parameters.
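
This suppression-ordered sequencing might be sketched as follows; the network callables, thresholds, and configuration tuple are placeholders assumed for illustration.

```python
# Sketch: run fault networks in a configured order, where an earlier
# network exceeding its threshold suppresses those configured after it
# (e.g. the step suppression model preceding the step fault model).
from typing import Callable, List, Tuple

def run_networks(anomaly_data,
                 networks: List[Tuple[str, Callable, float, bool]]):
    """Each entry: (name, net(data) -> probability, threshold,
    suppresses_later). Returns the probabilities actually computed."""
    results = {}
    for name, net, threshold, suppresses_later in networks:
        p = net(anomaly_data)
        results[name] = p
        if suppresses_later and p >= threshold:
            break                     # later networks are suppressed
    return results

networks = [
    ("step_suppression", lambda d: 0.92, 0.8, True),   # non-fault explains it
    ("step_fault",       lambda d: 0.10, 0.8, False),
    ("trend_fault",      lambda d: 0.05, 0.8, False),
]
print(run_networks(None, networks))   # only step_suppression runs
```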

One benefit of the modelling process described in the methods above is that it does not require data to be categorized as either training data or test data. By storing subsets of data within the model, not all of the data is used to build all aspects of the model. In this way, the data is split up into multiple training sets and models. Each training data set effectively acts as a test data set for the models to which the data set did not contribute during the build process. Consequently, all available historical data may contribute to a model, apart from the data sets that are known a priori to be anomalous. Moreover, online model updates may be performed in situ as new data are acquired.
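
A cross-validation-style arrangement is one way to realize this property; the sketch below assumes equal subset splits and scikit-learn models for illustration.

```python
# Sketch: split historical data into subsets, build one model per held-out
# subset, and let each subset act as test data for the model it did not
# help build.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
history = rng.normal(size=(1200, 2))
subsets = np.array_split(history, 4)

models = []
for k in range(len(subsets)):
    train = np.vstack([s for i, s in enumerate(subsets) if i != k])
    models.append(GaussianMixture(3, random_state=0).fit(train))

# Subset k is scored only by the model that excluded it during training.
for k, subset in enumerate(subsets):
    med = np.median(models[k].score_samples(subset))
    print(f"subset {k}: median fitness {med:.2f}")
```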

This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.