
Title:
APPARATUS AND METHOD FOR CONTROLLING SYSTEM
Document Type and Number:
WIPO Patent Application WO/2019/187297
Kind Code:
A1
Abstract:
An apparatus for controlling a system including a plurality of sources of signals causing a plurality of events includes an input interface to receive signals from the sources of signals, a memory to store a neural network trained to diagnose a control state of the system, a processor to submit the signals into the neural network to produce the control state of the system, and a controller to execute a control action selected according to the control state of the system. The neural network includes a sequence of layers, each layer includes a set of nodes, each node of at least an input layer and a first hidden layer following the input layer corresponds to a source of signal in the system. A pair of nodes from neighboring layers corresponding to a pair of different sources of signals are connected in the neural network only when a probability of subsequent occurrence of the events in the pair of the different sources of signals is above a threshold, such that the neural network is a partially connected neural network.

Inventors:
GUO, Jianlin (Inc. 201 Broadway, Cambridge, Massachusetts, 02139, US)
LIU, Jie (Inc. 201 Broadway, Cambridge, Massachusetts, 02139, US)
ORLIK, Philip (Inc. 201 Broadway, Cambridge, Massachusetts, 02139, US)
Application Number:
JP2018/040785
Publication Date:
October 03, 2019
Filing Date:
October 26, 2018
Assignee:
MITSUBISHI ELECTRIC CORPORATION (7-3 Marunouchi 2-chome, Chiyoda-ku Tokyo, 10, 10083, JP)
International Classes:
G05B23/02; G05B13/02; G05B19/042; G05B19/418
Foreign References:
US20170031329A12017-02-02
EP1022632A12000-07-26
US20090299695A12009-12-03
US20170061326A12017-03-02
US5386373A1995-01-31
US20150277416A12015-10-01
Other References:
HWARAN LEE ET AL: "Deep CNNs Along the Time Axis With Intermap Pooling for Robustness to Spectral Variations", IEEE SIGNAL PROCESSING LETTERS, 1 October 2016 (2016-10-01), New York, pages 1310 - 1314, XP055558409, Retrieved from the Internet [retrieved on 20190219], DOI: 10.1109/LSP.2016.2589962
Attorney, Agent or Firm:
SOGA, Michiharu et al. (S. SOGA & CO, 8th Floor Kokusai Building, 1-1, Marunouchi 3-chome, Chiyoda-ku, Tokyo 05, 10000, JP)
Claims:
[CLAIMS]

[Claim 1]

An apparatus for controlling a system including a plurality of sources of signals causing a plurality of events, comprising:

an input interface to receive signals from the sources of signals;

a memory to store a neural network trained to diagnose a control state of the system, wherein the neural network includes a sequence of layers, each layer includes a set of nodes, each node of an input layer and a first hidden layer following the input layer corresponds to a source of signal in the system, wherein a pair of nodes from neighboring layers corresponding to a pair of different sources of signals are connected in the neural network only when a probability of subsequent occurrence of the events in the pair of the different sources of signals is above a threshold, such that the neural network is a partially connected neural network;

a processor to submit the signals into the neural network to produce the control state of the system; and

a controller to execute a control action selected according to the control state of the system.

[Claim 2]

The apparatus of claim 1, wherein a number of nodes in the input layer equals a multiple of a number of the sources of signals in the system, and a number of nodes in the first hidden layer following the input layer equals the number of the sources of signals, wherein the input layer is partially connected to the first hidden layer based on probabilities of subsequent occurrence of the events in different sources of signals.

[Claim 3]

The apparatus of claim 2, wherein the probability of subsequent occurrence of the events of signal S_i followed by events of signal S_j can be defined as

P_ij = c_ij / (c_i1 + c_i2 + ... + c_iM),

where M is the number of signals and c_ij is the number of times events of signal S_i are followed by events of signal S_j.

[Claim 4]

The apparatus of claim 2, wherein the neural network is a time delay neural network (TDNN), and wherein the multiple for the number of nodes in the input layer equals a number of time steps in the delay of the TDNN.

[Claim 5]

The apparatus of claim 4, wherein the TDNN is a time delay feedforward neural network trained based on a supervised learning or a time delay auto encoder neural network trained based on an unsupervised learning.

[Claim 6]

The apparatus of claim 1, wherein the probability of subsequent occurrence of the events in the pair of the different sources of signals is a function of a frequency of the subsequent occurrence of the events in the signals collected over a period.

[Claim 7]

The apparatus of claim 1, further comprising:

a neural network trainer configured

to evaluate the signals from the source of signals collected over a period of time to determine frequencies of subsequent occurrence of events within the period of time for different combinations of pairs of sources of signals;

to determine probabilities of the subsequent occurrence of events for different combinations of the pairs of sources of signals based on the frequencies of subsequent occurrence of events within the period of time;

to compare the probabilities of the subsequent occurrence of events for different combinations of pairs of sources of signals with the threshold to determine a connectivity structure of the neural network;

to form the neural network according to the connectivity structure of the neural network, such that a number of nodes in the input layer equals a first multiple of a number of the source of signals in the system, and a number of nodes in the first hidden layer following the input layer equals a second multiple of the number of the sources of signals, wherein the input layer is partially connected to the first hidden layer according to the connectivity structure; and

to train the neural network using the signals collected over the period of time.

[Claim 8]

The apparatus of claim 1, further comprising:

a neural network trainer configured

to evaluate the signals from the source of signals collected over a period of time to determine frequencies of subsequent occurrence of events within the period of time for different combinations of pairs of sources of signals;

to compare the frequencies of the subsequent occurrence of events for different combinations of pairs of sources of signals with the threshold to determine a connectivity structure of the neural network;

to form the neural network according to the connectivity structure of the neural network, such that a number of nodes in the input layer equals a first multiple of a number of the source of signals in the system, and a number of nodes in the first hidden layer following the input layer equals a second multiple of the number of the sources of signals, wherein the input layer is partially connected to the first hidden layer according to the connectivity structure; and

to train the neural network using the signals collected over the period of time.

[Claim 9]

The apparatus of claim 8, wherein the trainer forms a signal connection matrix representing the frequencies of the subsequent occurrence of the events.

[Claim 10]

The apparatus of claim 1, wherein the system is a manufacturing production line including one or a combination of process manufacturing and discrete manufacturing.

[Claim 11]

The apparatus of claim 1, wherein the subsequent occurrence of the events in the pair of the different sources of signals is a consecutive occurrence of events in a time sequence of all events of the system.

[Claim 12]

A method for controlling a system including a plurality of source of signals causing a plurality of events, wherein the method uses a processor coupled to a memory storing a neural network trained to diagnose a control state of the system, wherein the processor is coupled with stored instructions implementing the method, wherein the instructions, when executed by the processor carry out steps of the method, comprising:

receiving signals from the source of signals;

submitting the signals into the neural network retrieved from the memory to produce the control state of the system, wherein the neural network includes a sequence of layers, each layer includes a set of nodes, each node of an input layer and a first hidden layer following the input layer corresponds to a source of signal in the system, wherein a pair of nodes from neighboring layers corresponding to a pair of different sources of signals are connected in the neural network only when a probability of subsequent occurrence of the events in the pair of the different sources of signals is above a threshold; and

executing a control action selected according to the control state of the system.

[Claim 13]

The method of claim 12, wherein a number of nodes in the input layer equals a multiple of a number of the source of signals in the system, and a number of nodes in the first hidden layer following the input layer equals the number of the sources of signals, wherein the input layer is partially connected to the first hidden layer based on probabilities of subsequent occurrence of the events in different sources of signals.

[Claim 14]

The method of claim 13, wherein the neural network is a time delay neural network (TDNN), and wherein the multiple for the number of nodes in the input layer equals a number of time steps in the delay of the TDNN, wherein the TDNN is a time delay feedforward neural network trained based on a supervised learning or a time delay auto-encoder neural network trained based on an unsupervised learning.

[Claim 15]

The method of claim 12, wherein the probability of subsequent occurrence of the events in the pair of the different sources of signals is a function of a frequency of the subsequent occurrence of the events in the signals collected over a period.

[Claim 16]

The method of claim 12, wherein the subsequent occurrence of the events in the pair of the different sources of signals is a consecutive occurrence of events in a time sequence of the events of the system.

[Claim 17]

A non-transitory computer readable storage medium embodied thereon a program executable by a processor for performing a method, the method comprising:

receiving signals from the source of signals;

submitting the signals into a neural network trained to diagnose a control state of the system to produce the control state of the system, wherein the neural network includes a sequence of layers, each layer includes a set of nodes, each node of an input layer and a first hidden layer following the input layer corresponds to a source of signal in the system, wherein a pair of nodes from neighboring layers corresponding to a pair of different sources of signals are connected in the neural network only when a probability of subsequent occurrence of the events in the pair of the different sources of signals is above a threshold; and

executing a control action selected according to the control state of the system.

[Claim 18]

The medium of claim 17, wherein the subsequent occurrence of the events in the pair of the different sources of signals is a consecutive occurrence of events in a time sequence of the events of the system.

[Claim 19]

The medium of claim 17, wherein the neural network is a time delay neural network (TDNN), and wherein the multiple for the number of nodes in the input layer equals a number of time steps in the delay of the TDNN, wherein the TDNN is a time delay feedforward neural network trained based on a supervised learning or a time delay auto-encoder neural network trained based on an unsupervised learning.

[Claim 20]

The medium of claim 17, wherein the probability of subsequent occurrence of the events in the pair of the different sources of signals is a function of a frequency of the subsequent occurrence of the events in the signals collected over a period.

Description:
[DESCRIPTION]

[Title of Invention]

APPARATUS AND METHOD FOR CONTROLLING SYSTEM

[Technical Field]

[0001]

This invention relates generally to the anomaly and fault detection using machine learning techniques, and particularly to anomaly detection using neural networks.

[Background Art]

[0002]

Monitoring and controlling safety and quality are very important in manufacturing, where fast and powerful machines can execute complex sequences of operations at very high speeds. Deviations from an intended sequence of operations or timing can degrade quality, waste raw materials, cause down times and broken equipment, decrease output, and endanger workers. For this reason, extreme care must be taken to carefully design manufacturing processes to minimize unexpected events, and safeguards need to be designed into the production line using a variety of sensors and emergency switches.

[0003]

The types of manufacturing include process and discrete manufacturing. In process manufacturing, products are generally undifferentiated, for example oil, natural gas and salt. Discrete manufacturing produces distinct items, e.g., automobiles, furniture, toys, and airplanes.

[0004]

One practical approach to increasing the safety and minimizing the loss of material and output is to detect when a production line is operating abnormally, and to stop the line in such cases if necessary. One way to implement this approach is to use a description of normal operation of the production line in terms of ranges of measurable variables, for example temperature, pressure, etc., defining an admissible operating region, and detecting operating points out of that region. This method is common in process manufacturing industries, for example oil refining, where there is usually a good understanding of permissible ranges for physical variables, and quality metrics for the product quality are often defined directly in terms of these variables.
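The admissible-region check described above can be sketched in a few lines; this is a minimal illustration only, and the function name, variable names, and numeric ranges are hypothetical:

```python
def out_of_admissible_region(measurements, limits):
    """Flag an operating point that leaves the admissible region,
    where the region is defined by per-variable (low, high) ranges."""
    return any(not (lo <= measurements[name] <= hi)
               for name, (lo, hi) in limits.items())

# Hypothetical permissible ranges for two physical variables.
limits = {"temperature": (20.0, 80.0), "pressure": (1.0, 5.0)}

normal = out_of_admissible_region({"temperature": 50.0, "pressure": 3.0}, limits)
alarm = out_of_admissible_region({"temperature": 95.0, "pressure": 3.0}, limits)
```

As the description notes, such range checks work well for process manufacturing but cannot detect the ordering anomalies that arise in discrete manufacturing.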

[0005]

However, the nature of the working process in discrete manufacturing is different from that in process manufacturing, and deviations from the normal working process can have very different characteristics. Discrete manufacturing includes a sequence of operations performed on work units, such as machining, soldering, assembling, etc. Anomalies can include incorrect execution of one or more of tasks, or an incorrect order of the tasks. Even in anomalous situations, often no physical variables, such as temperature or pressure are out of range, so direct monitoring of such variables cannot detect such anomalies reliably.

[0006]

For example, a method disclosed in U.S. 2015/0277416 describes an event sequence based anomaly detection for discrete manufacturing. However, this method has a high error rate when the manufacturing system has random operations and may not be suitable for different types of the manufacturing systems. In addition, this method requires that one event can only occur once in the normal operations and does not consider simultaneous event occurrence, which is frequent in complex manufacturing systems.

[0007]

To that end, there is a need to develop a system and a method suitable for anomaly detection in different types of the manufacturing systems.

[Summary of Invention]

[0008]

Some embodiments are based on the recognition that classes or types of the manufacturing operations can include process manufacturing and discrete manufacturing. For example, the anomaly detection methods for process manufacturing can aim to detect outliers of the data, and anomaly detection methods for discrete manufacturing can aim to detect the correct order of the operation executions. To that end, it is natural to design different anomaly detection methods for different classes of manufacturing operations.

[0009]

However, complex manufacturing systems can include different types of the manufacturing including the process and the discrete manufacturing. When the process and the discrete manufacturing are intermingled on a single production line, the anomaly detection methods designed for different types of the manufacturing can be inaccurate. To that end, it is an object of some embodiments to provide a system and a method suitable for anomaly detection in different types of the manufacturing systems.

[0010]

Some embodiments are based on recognition that the machine learning techniques can be applied for anomaly detection for both the process manufacturing and the discrete manufacturing. Using machine learning, the collected data can be utilized in an automatic learning system, where the features of the data can be learned through training. The trained model can detect anomaly in real time data to realize predictive maintenance and downtime reduction.

[0011]

For example, a neural network is one of the machine learning techniques that can be practically trained for complex manufacturing systems that include different types of the manufacturing. To that end, some embodiments apply neural network methods for anomaly detection in manufacturing systems. Using neural networks, additional anomalies that are not obvious from domain knowledge can be detected.

[0012]

Accordingly, some embodiments provide machine learning based anomaly detection methods that can be applied to both process manufacturing and discrete manufacturing with improved accuracy. For example, different embodiments provide neural network based anomaly detection methods for manufacturing systems to detect anomaly through supervised learning and unsupervised learning.

[0013]

However, one of the challenges in the field of neural networks is to find a minimal neural network topology that still satisfies application requirements. Manufacturing systems typically produce huge amounts of data. Therefore, a fully connected neural network may be computationally expensive or even impractical for anomaly detection in the complex manufacturing systems.

[0014]

In addition, some embodiments are based on understanding that pruning the fully connected neural network trained to detect anomalies in the complex manufacturing systems degrades the performance of the anomaly detection. Specifically, some embodiments are based on the recognition that neural network pruning takes place during the neural network training process, which increases neural network complexity and training time, and also degrades anomaly and fault detection accuracy.

[0015]

Some embodiments are based on recognition that a neural network is based on a collection of connected units or nodes called artificial neurons or just neurons. Each connection between artificial neurons can transmit a signal from one to another. The artificial neuron that receives the signal can process and transmit the processed signal to other artificial neurons connected to it. In such a manner, for the neurons receiving the signal from another neuron, that transmitting neuron is a source of that signal.

[0016]

To that end, some embodiments are based on realization that each neuron of at least some layers of the neural network can be matched with a source of signal in the manufacturing system. Hence, the source of signal in the manufacturing system is represented by a neuron in a layer of the neural network. In such a manner, the number of neurons in the neural network can be selected as minimally required to represent the physical structure of the manufacturing system.

[0017]

In addition, some embodiments are based on recognition that a neural network is a connectionist system that attempts to represent mental or behavioral phenomena as emergent processes of interconnected networks of simple units. In such a manner, the structure of the neural network can be represented not only by a number of neurons at each level of the neural network, but also as the connection among those neurons.

[0018]

Some embodiments are based on realization that when the neurons of the neural network represent the sources of signals in the manufacturing system, the connection among the neurons of the neural network can represent the connection among the sources of signals in the manufacturing system. Specifically, the neurons can be connected if and only if the corresponding sources of signals are connected.

[0019]

Some embodiments are based on realization that the connection between two different sources of signals for the purpose of anomaly detection is a function of a frequency of subsequent occurrence of the events originated by these two different sources of signals. For example, suppose that a source of signal is a switch that can change its state from an ON to an OFF state. The change of the state and/or a new value of the state is a signal of the source. If, when a first switch changes its state, a second switch always changes its state, those two sources of signals are strongly connected, and thus the neurons in the neural network corresponding to this pair of switches are connected as well. Conversely, if, when a first switch changes its state, the second switch never changes its state, those two sources of signals are not connected, and thus the neurons in the neural network corresponding to this pair of switches are not connected either.

[0020]

In practice, always-following or never-following events rarely happen. To that end, in some embodiments, a pair of different sources of signals are connected in the neural network only when a probability of subsequent occurrence of the events in the pair of the different sources of signals is above a threshold. The threshold is application dependent, and the probability of subsequent occurrence of the events can be selected based on a frequency of such a subsequent occurrence in training data, e.g., the data used to train the neural network.
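The frequency-based connectivity rule described above can be sketched as follows; this is a minimal illustration under assumptions, not the patented implementation, and the function names (`connection_probabilities`, `connectivity_mask`), the three-switch sequence, and the threshold value are all hypothetical:

```python
from collections import defaultdict

def connection_probabilities(event_sequence, signals):
    """Estimate the probability that an event of signal i is immediately
    followed by an event of signal j, from a recorded event sequence
    (a list of signal identifiers in order of occurrence)."""
    counts = defaultdict(int)   # counts[(i, j)]: times j follows i
    totals = defaultdict(int)   # totals[i]: times i is followed by anything
    for prev, nxt in zip(event_sequence, event_sequence[1:]):
        counts[(prev, nxt)] += 1
        totals[prev] += 1
    return {(i, j): counts[(i, j)] / totals[i]
            for i in signals for j in signals if totals[i]}

def connectivity_mask(probs, signals, threshold):
    """Connect node i to node j only when the estimated probability
    exceeds the threshold, yielding a partially connected topology."""
    return {(i, j): probs.get((i, j), 0.0) > threshold
            for i in signals for j in signals}

# Hypothetical event sequence from three switch signals S1..S3.
seq = ["S1", "S2", "S3", "S1", "S2", "S1", "S2", "S3"]
probs = connection_probabilities(seq, ["S1", "S2", "S3"])
mask = connectivity_mask(probs, ["S1", "S2", "S3"], threshold=0.5)
```

In this toy sequence, S1 is always followed by S2, so the corresponding pair of nodes would be connected, while S2 is only occasionally followed by S1, so that pair would be pruned at a 0.5 threshold.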

[0021]

In such a manner, the connections of the neurons represent a connectionist system mimicking the connectivity within the manufacturing system. To that end, the neural network of some embodiments becomes a partially connected network having a topology based on the event ordering relationship, which reduces the neural network complexity and training time, and improves anomaly detection accuracy.

[0022]

Accordingly, an embodiment discloses an apparatus for controlling a system including a plurality of sources of signals causing a plurality of events, including an input interface to receive signals from the sources of signals; a memory to store a neural network trained to diagnose a control state of the system, wherein the neural network includes a sequence of layers, each layer includes a set of nodes, each node of an input layer and a first hidden layer following the input layer corresponds to a source of signal in the system, wherein a pair of nodes from neighboring layers corresponding to a pair of different sources of signals are connected in the neural network only when a probability of subsequent occurrence of the events in the pair of the different sources of signals is above a threshold, such that the neural network is a partially connected neural network; a processor to submit the signals into the neural network to produce the control state of the system; and a controller to execute a control action selected according to the control state of the system.

[0023]

Another embodiment discloses a method for controlling a system including a plurality of source of signals causing a plurality of events, wherein the method uses a processor coupled to a memory storing a neural network trained to diagnose a control state of the system, wherein the processor is coupled with stored instructions implementing the method, wherein the instructions, when executed by the processor carry out steps of the method, including receiving signals from the source of signals; submitting the signals into the neural network retrieved from the memory to produce the control state of the system, wherein the neural network includes a sequence of layers, each layer includes a set of nodes, each node of an input layer and a first hidden layer following the input layer corresponds to a source of signal in the system, wherein a pair of nodes from neighboring layers corresponding to a pair of different sources of signals are connected in the neural network only when a probability of subsequent occurrence of the events in the pair of the different sources of signals is above a threshold; and executing a control action selected according to the control state of the system.

[0024]

Yet another embodiment discloses a non-transitory computer readable storage medium embodied thereon a program executable by a processor for performing a method, the method includes receiving signals from the source of signals; submitting the signals into a neural network trained to diagnose a control state of the system to produce the control state of the system, wherein the neural network includes a sequence of layers, each layer includes a set of nodes, each node of an input layer and a first hidden layer following the input layer corresponds to a source of signal in the system, wherein a pair of nodes from neighboring layers corresponding to a pair of different sources of signals are connected in the neural network only when a probability of subsequent occurrence of the events in the pair of the different sources of signals is above a threshold; and executing a control action selected according to the control state of the system.

[Brief Description of Drawings]

[0025]

[Fig. 1]

FIG. 1 is a schematic diagram illustrating components of the manufacturing anomaly detection system 100 according to some embodiments.

[Fig. 2]

Fig. 2 shows a schematic of a feedforward neural network used by some embodiments for supervised machine learning.

[Fig. 3]

Fig. 3 shows a schematic of an autoencoder neural network used by some embodiments for unsupervised machine learning.

[Fig. 4A]

Fig. 4A illustrates the general process of event sequence generation and distinct event extraction used by some embodiments.

[Fig. 4B]

Fig. 4B shows an example of event sequence generation and distinct event extraction of Fig. 4A using three switch signals.

[Fig. 5A]

Fig. 5A shows the general form of the event ordering relationship table used by some embodiments.

[Fig. 5B]

Fig. 5B shows an example of the event ordering relationship table for the event sequence and distinct events shown in Fig. 4B.

[Fig. 6A]

Fig. 6A shows the general form of the signal connection matrix generated from the event ordering relationship table according to some embodiments.

[Fig. 6B]

Fig. 6B is an example of the signal connection matrix corresponding to the event ordering relationship table shown in Fig. 5B.

[Fig. 7]

Fig. 7 shows an example of converting a fully connected time delay neural network (TDNN) to a simplified structured TDNN using a signal connection matrix according to some embodiments.

[Fig. 8A]

Fig. 8A shows the experiment result comparison between the fully connected autoencoder neural network and the structured autoencoder neural network constructed according to some embodiments.

[Fig. 8B]

Fig. 8B shows the experiment result comparison between fully connected autoencoder neural network and the structured autoencoder neural network constructed according to some embodiments.

[Fig. 9]

Fig. 9 shows a block diagram of apparatus 900 for controlling a system including a plurality of sources of signals causing a plurality of events in accordance with some embodiments.

[Fig. 10A]

Fig. 10A shows a block diagram of a method to train the neural network according to one embodiment.

[Fig. 10B]

Fig. 10B shows a block diagram of a method to train the neural network according to an alternative embodiment.

[Description of Embodiments]

[0026]

Overview

FIG. 1 is a schematic diagram illustrating components of the manufacturing anomaly detection system 100 according to some embodiments. The system 100 includes a manufacturing production line 110, a training data pool 120, a machine learning model 130 and an anomaly detection model 140. The production line 110 uses sensors to collect data. The sensors can be digital sensors, analog sensors, or a combination thereof. The collected data serve two purposes: some of the data are stored in the training data pool 120 and used as training data to train the machine learning model 130, and some of the data are used as operation time data by the anomaly detection model 140 to detect anomaly. The same piece of data can be used by both the machine learning model 130 and the anomaly detection model 140.

[0027]

To detect anomaly in a manufacturing production line 110, the training data are first collected. The training data in training data pool 120 are used by machine learning model 130 to train a neural network. The training data pool 120 can include either labeled data or unlabeled data. The labeled data have been tagged with labels, e.g., anomalous or normal. Unlabeled data have no label. Based on types of training data, machine learning model 130 applies different training approaches. For labeled training data, supervised learning is typically used and for unlabeled training data, unsupervised learning is typically applied. In such a manner, different embodiments can handle different types of data.

[0028]

The machine learning model 130 learns features and patterns of the training data, which include the normal data patterns and abnormal data patterns. The anomaly detection model 140 uses the trained machine learning model 150 and the collected operation time data 160 to perform anomaly detection. The operation time data 160 can be identified as normal or abnormal. For example, using normal data patterns 155 and 158, the trained machine learning model 150 can classify the operation time data into normal data 170 and abnormal data 180. Operation time data X1 163 and X2 166 are classified as normal, and operation time data X3 169 is classified as anomalous. Once an anomaly is detected, necessary actions are taken 190.

[0029]

The anomaly detection process can be executed online or offline. Online anomaly detection can provide real time predictive maintenance. However, online anomaly detection requires fast computation capability, which in turn requires a simple and accurate machine learning model. The embodiments of the invention provide a fast and accurate machine learning model.

[0030]

Neural networks for anomaly detection in manufacturing systems

A neural network can be employed to detect anomaly through both supervised learning and unsupervised learning. Some embodiments apply a time delay neural network (TDNN) for anomaly detection in manufacturing systems. Using a time delay neural network, not only current data but also historic data are used as input to the neural network. The number of time delay steps is the parameter that specifies the number of historic data measurements to be used, e.g., if the number of time delay steps is 3, then data at the current time t, data at time t-1 and data at time t-2 are used. Therefore, the size of the time delay neural network depends on the number of time delay steps. The time delay neural network architecture explores the relation of data signals in the time domain. In manufacturing systems, the history of data signals may provide important information for future prediction.
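The construction of time delay inputs described above can be sketched as follows; the function name `time_delay_inputs`, the array sizes, and the current-measurement-first ordering within each row are assumptions for illustration:

```python
import numpy as np

def time_delay_inputs(data, delay_steps):
    """Stack current and historic measurements into TDNN input vectors.
    data: array of shape (T, M) -- T time steps, M signal sources.
    Returns an array of shape (T - delay_steps + 1, delay_steps * M),
    where each row is [x(t), x(t-1), ..., x(t-delay_steps+1)]."""
    T, M = data.shape
    rows = []
    for t in range(delay_steps - 1, T):
        window = data[t - delay_steps + 1 : t + 1][::-1]  # current measurement first
        rows.append(window.reshape(-1))
    return np.asarray(rows)

# With 3 delay steps, each input row holds x(t), x(t-1), and x(t-2):
x = np.arange(10).reshape(5, 2)        # 5 time steps, 2 hypothetical signals
inputs = time_delay_inputs(x, delay_steps=3)
```

With 5 time steps and 3 delay steps, only 3 complete windows exist, so the input layer sees rows of length 3 x 2 = 6, matching the observation that the network size grows with the number of time delay steps.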

[0031]

A TDNN can be implemented as a time delay feedforward neural network (TFFNN) or a time delay autoencoder neural network (TDANN). For anomaly detection in manufacturing systems, some embodiments apply the time delay feedforward neural network and some embodiments apply the time delay autoencoder neural network.

[0032]

Fig. 2 shows a schematic of a feedforward neural network used by some embodiments for supervised machine learning. A feedforward neural network is an artificial neural network wherein connections between the neurons do not form a cycle. For the supervised learning, training data are labeled as either normal or abnormal. Under this condition, supervised learning techniques are applied to train the model. The embodiments employ the time delay feedforward neural network to detect anomalies with labeled training data. For example, the feedforward neural network shown in Fig. 2 includes the input layer 210, multiple hidden layers 220 and the output layer 230. The input layer 210 takes the data signals X 240 and transfers the extracted features through the weight vector W 260 and the activation function, e.g., the Sigmoid function, to the first hidden layer. In the case of the time delay feedforward neural network, the input data X 240 include both current data and historic data. Each hidden layer 220 takes the output of the previous layer and the bias 250 and transfers the extracted features to the next layer. The value of the bias is +1. The bias allows the neural network to shift the activation function to the left or the right, which can be critical for successful learning. Therefore, the bias plays a similar role to the constant b of a linear function y = ax + b. After multiple hidden layers of feature extraction, the neural network reaches the output layer 230, which takes the output of the last hidden layer and the bias 250, uses a specific loss function, e.g., the cross-entropy function, and formulates the corresponding optimization problem to produce the final output Y 270, which classifies the test data as normal or abnormal.
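A minimal sketch of the forward pass described above, assuming a Sigmoid activation at every layer and a constant +1 bias unit whose weight is folded into a bias vector (the function names and shapes are assumptions for this illustration; the loss function and training are omitted):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(x, weights, biases):
    """One forward pass through a feedforward network.

    weights: list of (n_out, n_in) matrices, one per layer transition.
    biases: list of (n_out,) vectors; since the bias unit's value is +1,
    each vector is effectively the weight on that constant unit, and it
    shifts the activation function left or right.
    """
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a
```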

[0033]

The manufacturing data may be collected under normal operating conditions only, since anomalies rarely happen in a manufacturing system or the anomalous data are difficult to collect. Under this circumstance, the data are usually not labeled and, therefore, unsupervised learning techniques can be useful. In this case, some embodiments apply the time delay autoencoder neural network to detect anomalies.

[0034]

Fig. 3 shows a schematic of an autoencoder neural network used by some embodiments for unsupervised machine learning. An autoencoder neural network is a special artificial neural network that reconstructs the input data signals X 240 with the encoder 310 and the decoder 320 composed of a single or multiple hidden layers as shown in Fig. 3, where X̂ 330 is the data reconstructed from the input data signals X 240. An ideal reconstruction gives X̂ = X. For the time delay autoencoder neural network, the input data X 240 include both current data and historic data. The middle layer, in which the compressed features appear, is usually called the code layer 340 in the network structure. An autoencoder neural network can also take a bias 250. There are two types of autoencoder neural networks, i.e., tied weight and untied weight. The tied weight autoencoder neural network has a symmetric topology, in which the weight vector W 260 on the encoder side is the same as the weight vector W' 350 on the decoder side. On the other hand, for the untied weight autoencoder neural network, the topology of the network is not necessarily symmetric and the weight vectors on the encoder side are not necessarily the same as the weight vectors on the decoder side.
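As an illustrative sketch of the tied-weight case, a single-hidden-layer autoencoder can be written as follows. The common convention for tying a symmetric topology is W' = Wᵀ, which is an assumption here, as are the function names and the choice of a Sigmoid encoder with a linear decoder:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tied_autoencoder(x, W, b_enc, b_dec):
    """Single-hidden-layer tied-weight autoencoder.

    The encoder uses W; the decoder reuses W transposed (W' = W.T),
    which is what makes the weights 'tied'.
    Returns (code, reconstruction).
    """
    code = sigmoid(W @ x + b_enc)   # encoder -> code layer
    x_hat = W.T @ code + b_dec      # decoder reconstructs the input
    return code, x_hat
```

Training would minimize a reconstruction error such as ||X̂ - X||², so that on normal data X̂ ≈ X and a large test error flags an anomaly.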

[0035]

Event ordering relationship based neural network structure

In a manufacturing system, tens to hundreds or thousands of sensors are used to collect data, which indicates that the amount of data is huge. As a result, the size of the neural network applied to detect anomalies can be very large. Therefore, the problem of determining the proper size of the neural network is important.

[0036]

Some embodiments address the problem of determining the proper size of the neural network. Even though a fully connected neural network can learn its weights through training, appropriately reducing the complexity of the neural network can reduce computational cost and improve anomaly detection accuracy. To that end, it is an object of some embodiments to reduce the neural network size without degrading the performance.

[0037]

The complexity of the neural network depends on the number of neurons and the number of connections between neurons. Each connection is represented by a weight parameter. Therefore, reducing the complexity of the neural network means reducing the number of weights and/or the number of neurons. Some embodiments aim to reduce the neural network complexity without degrading the performance.

[0038]

One approach for tackling this problem is referred to herein as pruning and includes training a larger-than-necessary network and then removing unnecessary weights and/or neurons. Therefore, the pruning is a time-consuming process.

[0039]

The question is which weights and/or neurons are unnecessary. Conventional pruning techniques typically remove the weights with smaller values. However, there is no proof that the smaller weights are unnecessary. As a result, the pruning inevitably degrades the performance compared with the fully connected neural network due to pruning loss. Therefore, the pruning candidate selection is of prime importance.

[0040]

Some embodiments provide an event ordering relationship based neural network structuring method, which makes the pruning candidate selection based on event ordering relationship information. Furthermore, instead of removing unnecessary weights and/or neurons during the training process, the embodiments determine the neural network structure before the training. Notably, such a structure of a partially connected neural network determined by some embodiments outperforms the fully connected neural network. The structured neural network reduces training time and improves anomaly detection accuracy. More precisely, the embodiments pre-process the training data to find the event ordering relationship, which is used to determine the important neuron connections of the neural network. The unimportant connections and the isolated neurons are then removed from the neural network.

[0041]

To describe the event ordering relationship based neural network structuring method, the data measurement collected from a sensor that monitors a specific property of the manufacturing system is called a data signal; e.g., a voltage sensor measures a voltage signal. A sensor may measure multiple data signals. The data signals can be measured periodically or aperiodically. In the case of periodic measurement, the time periods for measuring different data signals can be different.

[0042]

For a data signal, an event is defined as a signal value change from one level to another level. The signal change can be either out of the admissible range or within the admissible range. More specifically, an event is defined as

E = {S, ToS, T} (1)

where S represents the data signal that results in the event, ToS indicates the type of event for signal S and T is the time at which the signal value changed. For example, a switch signal can have an ON event and an OFF event. Therefore, an event may correspond to a normal operation execution or an anomalous incident in the system.

[0043]

For process manufacturing, an event can represent an abnormal status, such as measured data being out of the admissible operating range, or a normal status, such as the system changing from one state to another state. For discrete manufacturing, an event can represent an operation execution in the correct order or in an incorrect order.

[0044]

Before training the neural network, the training data are processed to extract events for all training data signals. These events are used to build an event ordering relationship (EOR) table.

[0045]

Assume there are M data signals {S_i}_{i=1}^{M}, which generate N events. According to the event occurring time, arrange these events into an event sequence {E_j}_{j=1}^{N}.

Because a type of event may occur multiple times, assume the event sequence {E_j}_{j=1}^{N} contains K distinct events {E_i}_{i=1}^{K}, where each E_i (i = 1, 2, ..., K) has the format {S, ToS}.

[0046]

Fig. 4A illustrates the general process of event sequence generation and distinct event extraction used by some embodiments. For each data signal in the training data pool 120, a set of events is created 410 based on the changes of the signal value, the corresponding event types and the corresponding times of the signal value changes. After events are generated for all training data signals, the events are arranged into an event sequence 420 according to the event occurrence time. If multiple events occurred at the same time, these events can appear in any order in the event sequence. Once the event sequence is created, the distinct events are extracted 430 from the event sequence. A distinct event only represents an event type, without regard to the event occurrence time.
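The event-extraction steps above can be sketched in Python as follows. This is an illustration only; the representation of a sample as a (time, value) pair and the encoding of the type of event as a "v0->v1" string are assumptions of this example, mirroring the definition E = {S, ToS, T}:

```python
def extract_events(signals):
    """Build an event sequence and the distinct-event list from signals.

    signals: dict mapping signal name -> list of (time, value) samples.
    An event is recorded whenever a signal's value changes level; it is
    stored as (time, signal, type-of-change).
    """
    events = []
    for name, samples in signals.items():
        for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
            if v1 != v0:
                events.append((t1, name, f"{v0}->{v1}"))
    events.sort(key=lambda e: e[0])  # arrange by event occurrence time
    # A distinct event is (signal, type), without regard to occurrence time.
    distinct = []
    for _, s, tos in events:
        if (s, tos) not in distinct:
            distinct.append((s, tos))
    return events, distinct
```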

[0047]

Fig. 4B illustrates an example of event sequence generation using three switch signals S_1, S_2 and S_3 440. These switch signals can generate the events. If a switch signal changes its value from 0 to 1, an ON event is generated. On the other hand, if a switch signal changes its value from 1 to 0, an OFF event is generated. Each switch signal generates three ON/OFF events at different times. Signal S_1 generates three events 450 {E_11, E_12, E_13}, signal S_2 generates three events 460 {E_21, E_22, E_23} and signal S_3 creates three events 470 {E_31, E_32, E_33}. According to the event occurrence time, these nine events form an event sequence 480 as {E_11, E_21, E_31, E_32, E_22, E_12, E_33, E_23, E_13}. This event sequence contains six distinct events 490 as {E_1, E_2, E_3, E_4, E_5, E_6} = {S_1-ON, S_1-OFF, S_2-ON, S_2-OFF, S_3-ON, S_3-OFF}.

[0048]

Fig. 5A shows the general form of an event ordering relationship table used by some embodiments. Specifically, using the event sequence {E_j}_{j=1}^{N} and the distinct events {E_i}_{i=1}^{K}, the event ordering relationship (EOR) table 500 can be built as shown in Fig. 5A, where e_ij (i ≠ j) is initialized to 0. In some implementations, during the EOR table construction process, e_ij is increased by 1 for each occurrence of the event pair {E_i, E_j} in the event sequence. If events E_i and E_j occur at the same time, both e_ij and e_ji are increased by 1/2. Alternative embodiments use different values to build the EOR table. In any case, e_ij (i ≠ j) indicates the number of times event E_j follows event E_i. A larger e_ij indicates that event E_j tightly follows event E_i, a smaller e_ij implies that event E_j loosely follows event E_i, and e_ij = 0 indicates that event E_j never follows event E_i. If both e_ij and e_ji are greater than zero, event E_i and event E_j can occur in either order.

[0049]

Fig. 5B shows an example of the event ordering relationship table for the event sequence and distinct events shown in Fig. 4B.
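A sketch of the EOR table construction is given below. One plausible reading of the counting rule above is used, namely that only consecutive pairs in the time-ordered event sequence are counted, with simultaneous events contributing 1/2 to both e_ij and e_ji; the function name and the dict-keyed table layout are assumptions of this example:

```python
from collections import defaultdict

def build_eor_table(event_sequence):
    """Count e_ij: how often distinct event E_j follows E_i.

    event_sequence: list of (time, distinct_event) pairs, ordered by time.
    Returns a dict mapping (E_i, E_j) -> e_ij.
    """
    eor = defaultdict(float)
    for (t0, e0), (t1, e1) in zip(event_sequence, event_sequence[1:]):
        if t1 == t0:
            # simultaneous events: credit both orderings by 1/2
            eor[(e0, e1)] += 0.5
            eor[(e1, e0)] += 0.5
        else:
            eor[(e0, e1)] += 1.0
    return dict(eor)
```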

[0050]

The event ordering relationship (EOR) table 500 is used by some embodiments to construct the neural network connections. Based on the event ordering relationship table, a signal connection matrix (SCM) is constructed. The signal connection matrix provides the neural network connectivity structure.

[0051]

Fig. 6A shows the general form of the signal connection matrix generated from the event ordering relationship table according to some embodiments. In this case, an M-by-M signal connection matrix (SCM) 600 is constructed, where C_ij (i ≠ j) represents the number of times events of signal S_j follow events of signal S_i. Therefore, C_ij = 0 indicates that an event of signal S_j never follows an event of signal S_i. A higher value of C_ij indicates that signal S_j tightly depends on signal S_i, in the sense that a change of signal S_i may most likely cause a change of signal S_j. On the other hand, a lower value of C_ij implies that signal S_j loosely depends on signal S_i. A threshold C_TH can be defined for the neural network connection configuration such that if C_ij ≥ C_TH, then signal S_i can be considered to impact signal S_j. In this case, the connections from the neurons corresponding to signal S_i to the neurons corresponding to signal S_j are considered important. On the other hand, if C_ij < C_TH, the connections from the neurons corresponding to signal S_i to the neurons corresponding to signal S_j are considered unimportant.
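The aggregation of the EOR table into the signal connection matrix can be sketched as follows. The `signal_of` mapping from a distinct event back to its source signal, and the choice to leave same-signal (diagonal) entries at zero, are assumptions of this illustration:

```python
def signal_connection_matrix(eor, signal_of, signal_names):
    """Aggregate an EOR table into an M-by-M signal connection matrix.

    C[i][j] accumulates the counts of events of signal S_j following
    events of signal S_i.  eor: dict (E_i, E_j) -> e_ij;
    signal_of: dict distinct event -> its source signal.
    """
    M = len(signal_names)
    index = {s: k for k, s in enumerate(signal_names)}
    C = [[0.0] * M for _ in range(M)]
    for (e_i, e_j), count in eor.items():
        si, sj = signal_of[e_i], signal_of[e_j]
        if si != sj:  # only cross-signal dependencies are accumulated here
            C[index[si]][index[sj]] += count
    return C
```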

[0052]

Alternatively, the signal connection matrix can also be used to define the probability of the subsequent occurrence of events for a pair of data signals. For two signals S_i and S_j, C_ij represents the number of times events of signal S_i are followed by events of signal S_j. Therefore, the probability of subsequent occurrence of the events of signal S_i followed by events of signal S_j can be defined as

[0053]

Notice that P(S_i, S_j) and P(S_j, S_i) can be different, i.e., signal S_i may impact signal S_j, but signal S_j may not necessarily impact signal S_i. Using this probability, a threshold P_TH can be defined such that if P(S_i, S_j) > P_TH, then signal S_i can be considered to impact signal S_j. Therefore, the connections from the neurons corresponding to signal S_i to the neurons corresponding to signal S_j are considered important.

[0054]

Fig. 6B is an example of the signal connection matrix 610 corresponding to the event ordering relationship table shown in Fig. 5B. The signal connection matrix (SCM) 600 and/or 610 can be used to simplify the fully connected neural networks.

[0055]

Fig. 7 shows an example of converting a fully connected TDNN 710 to a simplified structured TDNN using the signal connection matrix 730 according to some embodiments, wherein the TDNN can be the time delay feedforward neural network or the time delay autoencoder neural network. Assume the three data signals are {S_1, S_2, S_3}, the number of time delay steps is 2 and the connection threshold C_TH = 1. The S_i0 and S_i1 (1 ≤ i ≤ 3) denote measurements of signal S_i at time t and time t-1, which correspond to the nodes S_i0 and S_i1 (1 ≤ i ≤ 3) in the neural network of Fig. 7. In this example, the structure from the input layer to the first hidden layer of the neural network is illustrated because these two layers have the most influence on the topology of the neural network. It can be seen that the number of nodes at the input layer is six, i.e., the number of data signals multiplied by the number of time delay steps, and the number of nodes at the first hidden layer is three, i.e., the number of data signals. The objective of the first hidden layer node configuration is to concentrate the features related to a specific data signal in a single hidden node.

[0056]

In this example, for the fully connected TDNN, there are in total 18 connections. Using the signal connection matrix 730, the 18 connections are reduced to 10 connections in the structured TDNN. For example, C_12 = 1 = C_TH indicates that signal S_1 may impact signal S_2. Therefore, the connections from S_10 and S_11 to H_12 are important because H_12 is used to collect information for signal S_2. On the other hand, C_13 = 0 < C_TH indicates that the connections from S_10 and S_11 to H_13 are not important and, therefore, can be removed from the neural network.
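The thresholding step that converts the signal connection matrix into a partially connected input-to-hidden structure can be sketched as a binary mask. Two assumptions are made for this illustration: equality with the threshold counts as important (matching the C_12 = 1 = C_TH example above), and each signal's own delayed inputs always feed its hidden node:

```python
import numpy as np

def structured_mask(C, C_TH, delay_steps):
    """Connection mask from the input layer to the first hidden layer.

    C: M-by-M signal connection matrix. Hidden node j collects features
    of signal S_j; its incoming connections from the delayed copies of
    signal S_i are kept only when C[i][j] >= C_TH.
    Returns a mask of shape (M * delay_steps, M); entry 1 keeps the
    connection, entry 0 removes it.
    """
    C = np.asarray(C, dtype=float)
    keep = C >= C_TH
    np.fill_diagonal(keep, True)  # assumed: S_i always feeds its own node
    # Repeat each row for every time-delay copy (S_i0, S_i1, ...).
    return np.repeat(keep, delay_steps, axis=0).astype(float)
```

The mask can then be multiplied element-wise into the input-to-hidden weight matrix before (and during) training, so the removed connections never carry a weight.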

[0057]

Fig. 8A and Fig. 8B show the experimental comparison between the fully connected autoencoder neural network and the structured autoencoder neural network constructed according to some embodiments. Specifically, Fig. 8A shows the experimental result of the fully connected autoencoder neural network and Fig. 8B depicts the experimental result of the corresponding structured autoencoder neural network. The Y-axis represents the test error and the X-axis represents the time index, which converts the data collection time to an integer index such that a value of the time index uniquely corresponds to the time of the data measurement. The data are collected from a real manufacturing production line with a set of unlabeled training data and a set of test data, i.e., operation time data. During the test data collection, an anomaly occurred in the production line. The anomaly detection method is required to detect whether the test data are anomalous and, if so, to detect the anomaly occurrence time.

[0058]

For the test error threshold 810 = 0.018, Fig. 8A shows that the fully connected autoencoder neural network detected two anomalies, one corresponding to test error 820 and the other corresponding to test error 830. The anomaly corresponding to test error 820 is a false alarm and the anomaly corresponding to test error 830 is the true anomaly. On the other hand, the structured autoencoder neural network only detected the true anomaly, corresponding to test error 840. Therefore, the structured autoencoder neural network is more accurate than the fully connected autoencoder neural network. Fig. 8A and Fig. 8B also show that the anomalous time index of the true anomaly detected by both methods is the same, which corresponds to a time 2 seconds later than the actual anomaly occurrence time.

[0059]

Exemplary Embodiment

Fig. 9 shows a block diagram of apparatus 900 for controlling a system including a plurality of sources of signals causing a plurality of events in accordance with some embodiments. An example of the system is a manufacturing production line. The apparatus 900 includes a processor 920 configured to execute stored instructions, as well as a memory 940 that stores instructions that are executable by the processor. The processor 920 can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations. The memory 940 can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory systems. The processor 920 is connected through a bus 906 to one or more input and output devices.

[0060]

These instructions implement a method for detecting and/or diagnosing anomalies in the plurality of events of the system. The apparatus 900 is configured to detect anomalies using a neural network 931. Such a neural network is referred to herein as a structured partially connected neural network. The neural network 931 is trained to diagnose a control state of the system. For example, the neural network 931 can be trained offline by a trainer 933 using training data to diagnose the anomalies online using the operating data 934 of the system. Examples of the operating data include signals from the sources of signals collected during the operation of the system, e.g., events of the system. Examples of the training data include the signals from the sources of signals collected over a period of time. That period of time can be before the operation/production begins and/or a time interval during the operation of the system.

[0061]

Some embodiments are based on recognition that a neural network is based on a collection of connected units or nodes called artificial neurons or just neurons. Each connection between artificial neurons can transmit a signal from one to another. The artificial neuron that receives the signal can process and transmit the processed signal to other artificial neurons connected to it. In such a manner, for the neurons receiving the signal from another neuron, that transmitting neuron is a source of that signal.

[0062]

To that end, some embodiments are based on realization that each neuron of at least some layers of the neural network can be matched with a source of signal in the manufacturing system. Hence, the source of signal in the manufacturing system is represented by a neuron in a layer of the neural network. In such a manner, the number of neurons in the neural network can be selected as minimally required to represent the physical structure of the manufacturing system.

[0063]

In addition, some embodiments are based on the recognition that a neural network is a connectionist system that attempts to represent mental or behavioral phenomena as emergent processes of interconnected networks of simple units. In such a manner, the structure of the neural network can be represented not only by the number of neurons at each level of the neural network, but also by the connections among those neurons.

[0064]

Some embodiments are based on realization that when the neurons of the neural network represent the sources of signals in the manufacturing system, the connection among the neurons of the neural network can represent the connection among the sources of signals in the manufacturing system. Specifically, the neurons can be connected if and only if the corresponding sources of signals are connected.

[0065]

Some embodiments are based on the realization that the connection between two different sources of signals for the purpose of anomaly detection is a function of a frequency of subsequent occurrence of the events originated by these two different sources of signals. For example, suppose a source of signal is a switch that can change its state from the ON to the OFF state. The change of the state and/or a new value of the state is a signal of the source. If, when a first switch changes its state, the second switch always changes its state, those two sources of signals are strongly connected and thus the neurons in the neural network corresponding to this pair of switches are connected as well. Conversely, if, when a first switch changes its state, the second switch never changes its state, those two sources of signals are not connected and thus the neurons in the neural network corresponding to this pair of switches are not connected either.

[0066]

In practice, always-following or never-following events rarely happen. To that end, in some embodiments, a pair of different sources of signals are connected in the neural network only when a probability of subsequent occurrence of the events in the pair of the different sources of signals is above a threshold. The threshold is application dependent, and the probability of subsequent occurrence of the events can be selected based on a frequency of such a subsequent occurrence in training data, e.g., the data used to train the neural network.

[0067]

In such a manner, the connections of the neurons represent a connectionist system mimicking the connectivity within the manufacturing system. To that end, the neural network of some embodiments becomes a partially connected network having a topology based on the event ordering relationship, which reduces the neural network complexity and training time, and improves anomaly detection accuracy.

[0068]

The neural network 931 includes a sequence of layers, and each layer includes a set of nodes, also referred to herein as neurons. Each node of at least an input layer and a first hidden layer following the input layer corresponds to a source of signal in the system. In the neural network 931, a pair of nodes from neighboring layers corresponding to a pair of different sources of signals are connected only when a probability of subsequent occurrence of the events in the pair of the different sources of signals is above a threshold. In a number of implementations, the neural network 931 is a partially connected neural network.

[0069]

To that end, the apparatus 900 can also include a storage device 930 adapted to store the neural network 931 and/or a structure 932 of the neural network including the structure of neurons and their connectivity representing a sequence of events in the controlled system. In addition, the storage device 930 can store a trainer 933 to train the neural network 931 and data 939 for detecting the anomaly in the controlled system. The storage device 930 can be implemented using a hard drive, an optical drive, a thumb drive, an array of drives, or any combinations thereof.

[0070]

The apparatus 900 includes an input interface to receive signals from the sources of signals of the controlled system. For example, in some implementations, the input interface includes a human machine interface 910 within the apparatus 900 that connects the processor 920 to a keyboard 911 and pointing device 912, wherein the pointing device 912 can include a mouse, trackball, touchpad, joy stick, pointing stick, stylus, or touchscreen, among others.

[0071]

Additionally, or alternatively, the input interface can include a network interface controller 950 adapted to connect the apparatus 900 through the bus 906 to a network 990. Through the network 990, the signals 995 from the controlled system can be downloaded and stored within the storage system 930 as training and/or operating data 934 for storage and/or further processing. The network 990 can be wired or wireless network connecting the apparatus 900 to the sources of the controlled system or to an interface of the controlled system for providing the signals and metadata of the signal useful for the diagnostic.

[0072]

The apparatus 900 includes a controller to execute a control action selected according to the control state of the system. The control action can be configured and/or selected based on a type of the controlled system. For example, the controller can render the results of the diagnosis. For example, the apparatus 900 can be linked through the bus 906 to a display interface 960 adapted to connect the apparatus 900 to a display device 965, wherein the display device 965 can include a computer monitor, camera, television, projector, or mobile device, among others.

[0073]

Additionally, or alternatively, the controller can be configured to directly or indirectly control the system based on results of the diagnosis. For example, the apparatus 900 can be connected to a system interface 970 adapted to connect the apparatus to the controlled system 975 according to one embodiment. In one embodiment, the controller executes a command to stop or alter the manufacturing procedure of the controlled manufacturing system in response to detecting an anomaly.

[0074]

Additionally, or alternatively, the controller can be configured to control a different application based on the results of the diagnosis. For example, the controller can submit results of the diagnosis to an application not directly involved in the manufacturing process. For example, in some embodiments, the apparatus 900 is connected through the bus 906 to an application interface 980 adapted to connect the apparatus 900 to an application device 985 that can operate based on the results of the anomaly detection.

[0075]

In some embodiments, the structure of neurons 932 is selected based on a structure of the controlled system. For example, in one embodiment, in the neural network 931, a number of nodes in the input layer equals a multiple of a number of the sources of signals in the system. For example, if the multiple equals one, the number of nodes in the input layer equals the number of the sources of signals in the system. In such a manner, each node can be matched to a source signal. In some implementations, however, the multiple is greater than one, such that multiple nodes can be associated with the common source of signal. In those implementations, the neural network is a time delay neural network (TDNN), and the multiple for the number of nodes in the input layer equals a number of time steps in the delay of the TDNN.

[0076]

Additionally, the number of nodes in the hidden layers can also be selected based on the number of source signals. For example, in one embodiment, the number of nodes in the first hidden layer following the input layer equals the number of the sources of signals. This embodiment also gives physical meaning to the first hidden layer to represent the physical structure of the controlled system. In addition, this embodiment allows the first and most important tier of connections in the neural network, i.e., the connections between the input layer and the first hidden layer, to represent the connectivity among the events in the system represented by the nodes. Specifically, the input layer is partially connected to the first hidden layer based on the probabilities of subsequent occurrence of the events in different sources of signals.

[0077]

In various embodiments, the probability of subsequent occurrence of the events in the pair of the different sources of signals is a function of a frequency of the subsequent occurrence of the events in the signals collected over a period. For example, in some implementations, the subsequent occurrence of the events in the pair of the different sources of signals is a consecutive occurrence of events in a time sequence of all events of the system. In alternative implementations, the subsequent occurrence can allow a predetermined number of intervening events. This implementation adds flexibility to the structure of the neural network, making the neural network adaptable to different requirements of the anomaly detection.

[0078]

Fig. 10A shows a block diagram of a method used by a neural network trainer 933 to train the neural network 931 according to one embodiment. In this embodiment, the structure 932 of the neural network is determined from the probabilities of the subsequent occurrence of events, which are in turn functions of the frequencies of subsequent occurrence of events. To that end, the embodiment evaluates the signals 1005 from the sources of signals collected over a period of time to determine 1010 frequencies 1015 of subsequent occurrence of events within the period of time for different combinations of pairs of sources of signals. For example, the embodiment determines the frequencies as shown in Figures 4A, 4B, 5A, and 5B and the corresponding description thereof.

[0079]

Next, the embodiment determines 1020 probabilities 1025 of the subsequent occurrence of events for different combinations of the pairs of sources of signals based on the frequencies of subsequent occurrence of events within the period of time. The embodiment can use various statistical analyses of the frequencies to derive the probabilities 1025. For example, some implementations use equation (2) to determine the probability of subsequent occurrence of events for a pair of signals.

[0080]

This embodiment is based on the recognition that a complex manufacturing system can have different types of events with different inherent frequencies. For example, the system can be designed such that, under normal operations, a first event is ten times more frequent than a second event. Thus, the fact that the second event appears after the first event only one out of ten times is not indicative by itself of the strength of the dependency of the second event on the first event. The statistical methods can consider the natural frequencies of events in determining the probabilities of the subsequent occurrences 1025. In this case, the probability of the subsequent occurrence is at most 0.1.

[0081]

After the probabilities are determined, the embodiment compares 1030 the probabilities 1025 of the subsequent occurrence of events for different combinations of pairs of sources of signals with a threshold 1011 to determine a connectivity structure of the neural network 1035. This embodiment allows using a single threshold 1011, which simplifies its implementation. Examples of the connectivity structure are the connectivity matrix 600 of Figure 6A and the connectivity matrix 610 of Figure 6B.

[0082]

Fig. 10B shows a block diagram of a method used by a neural network trainer 933 to train the neural network 931 according to an alternative embodiment. In this embodiment, the connectivity structure of the neural network 1035 is determined directly from the frequencies of subsequent occurrence 1015. The embodiment compares 1031 the frequencies of the subsequent occurrence 1015 of events for different combinations of pairs of sources of signals with one or several thresholds 1012 to determine a connectivity structure of the neural network 1035. This embodiment is more deterministic than the embodiment of Figure 10A.

[0083]

After the connectivity structure of the neural network 1035 is determined, the embodiments of Figure 10A and Figure 10B form 1040 the neural network 1045 according to the structure of the neural network 1035. For example, the neural network includes an input layer, an output layer and a number of hidden layers. The number of nodes in the input layer of the neural network equals a first multiple of a number of the source of signals in the system, and a number of nodes in the first hidden layer following the input layer equals a second multiple of the number of the sources of signals. The first and the second multiples can be the same or different. In addition, the input layer is partially connected to the first hidden layer according to the connectivity structure.

[0084]

Next, the embodiments train 1050 the neural network 1045 using the signals 1055 collected over the period of time. The signals 1055 can be the same or different from the signals 1005. The training 1050 optimizes parameters of the neural network 1045. The training can use different methods to optimize the weights of the network such as stochastic gradient descent.

[0085]

The above-described embodiments of the present invention can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. Such processors may be implemented as integrated circuits, with one or more processors in an integrated circuit component. A processor may be implemented using circuitry in any suitable format.

[0086]

Also, the embodiments of the invention may be embodied as a method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.

[0087]

Use of ordinal terms such as "first" and "second" in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).