Title:
METHOD AND SYSTEM FOR MODELLING INDUSTRIAL PROCESSES
Document Type and Number:
WIPO Patent Application WO/2022/117582
Kind Code:
A1
Abstract:
The present invention relates to a method and system for modelling industrial processes, including closed loop feedback processes. The system comprises at least one sensor for measuring at least one input for the industrial process; and a processor which is configured to receive a measurement of the at least one input from the at least one sensor at an input time. The processor is also configured to implement a hybrid neural network model to output a derivative of the at least one output at the input time based on the received measurement, wherein the neural network model incorporates at least one neural network block and a first-principle block incorporating a dynamic model comprising an ordinary differential equation defining the rate of change over time of the at least one output as a function of the or each associated input; input the derivative to an ordinary differential equation solver to predict the at least one output at a subsequent time; and output the prediction of the at least one output at the subsequent time using the at least one measured input.

Inventors:
JAHANSHAHI ESMAEIL (NO)
REKSTAD ARNSTEIN (NO)
TORKELSEN VEGARD KLEPPE (NO)
Application Number:
PCT/EP2021/083621
Publication Date:
June 09, 2022
Filing Date:
November 30, 2021
Assignee:
SIEMENS ENERGY AS (NO)
International Classes:
G05B13/02; G05B13/04; G05B17/02; G06F30/20
Domestic Patent References:
WO2020214075A12020-10-22
Foreign References:
EP3193218A22017-07-19
Other References:
LJUNG: "System Identification: Theory for the User", 1999, PTR PRENTICE HALL, UPPER SADDLE RIVER, NJ
BILLINGS ET AL.: "Identification of nonlinear systems - a survey", IEE PROCEEDINGS D - CONTROL THEORY AND APPLICATIONS, vol. 126, no. 6, 1980, pages 272 - 285
SCHMIDHUBER: "Deep learning in neural networks: An overview", NEURAL NETWORKS, vol. 61, 2015, pages 85 - 117, XP055283630, DOI: 10.1016/j.neunet.2014.09.003
GIRI ET AL.: "Lecture notes in control and information sciences", vol. 404, 2010, SPRINGER, article "Block-oriented nonlinear system identification"
BILLINGS ET AL.: "Identification of systems containing linear dynamic and static nonlinear elements", AUTOMATICA, vol. 18, no. 1, 1980, pages 15 - 26
WILLS ET AL.: "Identification of Hammerstein-Wiener models", AUTOMATICA, vol. 49, no. 1, 2013, pages 70 - 81
HE ET AL.: "Deep Residual Learning for Image Recognition", COMPUTER VISION AND PATTERN RECOGNITION, 2016
CHANG ET AL.: "Multi-Level Residual Networks from Dynamical Systems View", SIXTH INTERNATIONAL CONFERENCE ON LEARNING REPRESENTATIONS, 2018
CHEN ET AL.: "Neural Ordinary Differential Equations", 32ND CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS, 2018
KARLSSON; SVANSTROM: "Master's Thesis", 2019, CHALMERS UNIVERSITY OF TECHNOLOGY, article "Modelling Dynamical Systems Using Neural Ordinary Differential Equations: Learning ordinary differential equations from data using neural networks"
MORK ET AL.: "Bachelor Thesis", 2012, article "Studies in Autotuning"
KINGMA ET AL.: "Adam: A Method for Stochastic Optimization", 3RD INTERNATIONAL CONFERENCE FOR LEARNING REPRESENTATIONS, 2015
IVANESCU: "Mechanical Engineer's Handbook", 2001, article "Control"
JAZAR: "Theory of Applied Robotics: Kinematics, Dynamics and Control", 2010
Attorney, Agent or Firm:
ROTH, Thomas (DE)
Download PDF:
Claims:
CLAIMS

1. A computer-implemented method for modelling an industrial process, wherein the industrial process is a closed loop process comprising a controller, the method comprising: measuring, using at least one sensor, at least one input for the industrial process at an input time; and predicting at least one output of the industrial process at a subsequent time using the at least one measured input; wherein predicting the at least one output comprises using a hybrid neural network model to output a derivative of the at least one output at the input time, wherein the hybrid neural network model incorporates at least one neural network model and a first-principle model incorporating a dynamic model comprising at least one ordinary differential equation defining the rate of change over time of the at least one output as a function of the or each associated input; and inputting the derivative to an ordinary differential equation solver to predict at least one output at the subsequent time; wherein the at least one neural network model comprises a first memoryless nonlinear block which is parametrized by a first vector and a second memoryless nonlinear block which is parametrized by a second vector; wherein the first-principle model is parametrized by a third vector; wherein a coefficient vector comprising the first, second and third vectors parametrizes the hybrid neural network model whereby the hybrid neural network model is fitted to training input and output observations from the industrial process, wherein the hybrid neural network model further incorporates a control block for incorporating parameters of the controller into the hybrid neural network model; and wherein using the hybrid neural network model comprises receiving, at at least one of the first and second memoryless nonlinear blocks, a controller output from the control block; receiving, at the first memoryless nonlinear block, the at least one measured input; producing, by the first memoryless nonlinear block, a first intermediate output; receiving, at the second memoryless nonlinear block, an output of the industrial process at the input time; producing, by the second memoryless nonlinear block, a second intermediate output; receiving, at the first principle model, the first and second intermediate outputs; and producing, by the first principle model, the derivative of the at least one output at the input time.

2. The method of claim 1, further comprising inputting, to the control block, an output of the industrial process at the input time; and outputting, from the control block to at least one of the first memoryless nonlinear block and the second memoryless nonlinear block, a controller output.

3. The method of claim 1 or claim 2, wherein measuring at least one input comprises measuring at least one of a setpoint and a disturbance.

4. The method of claim 3, further comprising inputting, to the control block, the at least one measured setpoint; and inputting, to at least one of the first memoryless nonlinear block and the second memoryless nonlinear block, the at least one measured disturbance.

5. The method of any one of the preceding claims, wherein the hybrid neural network model is defined as:

u_t = c(y_t, r_t)
w_t = f_H(u_t, d_t, α)
z_t = f_W^{-1}(y_t, u_t, d_t, β)
dy_t/dt = g(w_t, z_t, γ)
θ = [α, β, γ]

where w_t is the first intermediate output, z_t is the second intermediate output, u_t is a controller output from the control block, y_t is an output of the industrial process at the input time t, d_t is the at least one measured input known as an input disturbance, r_t is an input setpoint, c(.) represents the control block with fixed known parameters, f_H(.,α) and f_W^{-1}(.,β) are the first and second memoryless nonlinear blocks parametrized by the first and second vectors α and β, respectively, g(.,γ) is the dynamic model for the first principle model which is parametrized by the third vector γ, and θ is the coefficient vector that parametrizes the hybrid neural network model to fit a set of training observations.

6. The method of any one of the preceding claims, wherein the first memoryless nonlinear block is the first nonlinear block from the Hammerstein-Wiener model structure and the second memoryless nonlinear block is the inverse of the second nonlinear block from the Hammerstein-Wiener model structure.

7. The method of any one of the preceding claims, wherein the output of the industrial process which is input to the second memoryless nonlinear block is selected from a measured output and a predicted output.

8. The method of any one of claims 1 to 6, further comprising repeating the measuring and predicting steps for multiple iterations, wherein in an initial iteration, the measuring step comprises measuring an initial value of the at least one output at the input time and the predicting step comprises using the at least one measured input and the measured initial value of the at least one output to predict the at least one output of the industrial process at a subsequent time; and in subsequent iterations, the predicting step uses the at least one measured input and the predicted value of the at least one output which was predicted in a previous iteration.

9. The method of any one of the preceding claims, further comprising inputting the derivative to an ordinary differential equation solver having an adaptive time step.

10. The method of any one of the preceding claims, wherein the dynamic model is a linear model which is selected from a first order dynamic model and a second order dynamic model.

11. The method of any one of the preceding claims, wherein the industrial process is a gravity separation process for separating oil, water and gas and measuring the at least one output comprises measuring oil level, water level and gas pressure.

12. The method of claim 11, wherein the dynamic model comprises six inputs including three disturbances and three setpoints; one for each of the inflow rates of oil, water and gas.

13. The method of any one of claims 1 to 10, wherein the industrial process is controlling a robotic arm having two connected links and measuring the at least one output comprises measuring the angles of the two links.

14. A non-transitory computer readable medium carrying processor control code which when implemented in a system causes the system to carry out the method of any one of claims 1 to 13.

15. A data processing system for modelling an industrial process, wherein the industrial process is a closed loop process, the system comprising at least one sensor for measuring at least one input for the industrial process; a controller; and a processor which is configured to receive a measurement of the at least one input from the at least one sensor at an input time; implement a hybrid neural network model to output a derivative of the at least one output at the input time based on the received measurement, wherein the hybrid neural network model incorporates at least one neural network model and a first-principle model incorporating a dynamic model comprising an ordinary differential equation defining the rate of change over time of the at least one output as a function of the or each associated input; input the derivative to an ordinary differential equation solver to predict the at least one output at a subsequent time; and output the prediction of at least one output at the subsequent time using the at least one measured input; wherein the at least one neural network model comprises a first memoryless nonlinear block which is parametrized by a first vector and a second memoryless nonlinear block which is parametrized by a second vector; wherein the first-principle model is parametrized by a third vector; wherein a coefficient vector comprising the first, second and third vectors parametrizes the hybrid neural network model whereby the hybrid neural network model is fitted to training input and output observations from the industrial process, wherein the hybrid neural network model further incorporates a control block for incorporating parameters of the controller into the hybrid neural network model; and wherein using the hybrid neural network model comprises receiving, at at least one of the first and second memoryless nonlinear blocks, a controller output from the control block; receiving, at the first memoryless nonlinear block, the at least one measured input; producing, by the first memoryless nonlinear block, a first intermediate output; receiving, at the second memoryless nonlinear block, an output of the industrial process at the input time; producing, by the second memoryless nonlinear block, a second intermediate output; receiving, at the first principle model, the first and second intermediate outputs; and producing, by the first principle model, the derivative of the at least one output at the input time.

Description:
METHOD AND SYSTEM FOR MODELLING INDUSTRIAL PROCESSES

FIELD OF INVENTION

The present invention relates to a method and system for modelling industrial processes, for example by producing dynamic digital twins for industrial processes and/or identifying dynamic models of control loops for PID tuning.

BACKGROUND OF INVENTION

Economic optimization, Advanced Process Control (APC), improved process control performance, automatic fault detection, and estimation of unmeasured process variables have numerous benefits and incentives in different processing industries. These methodologies rely heavily on mathematical models of the processing plants (systems) and the ability to update the models. The need for accurate mathematical models has been one of the main challenges, e.g., in Nonlinear Model-Predictive Control (NMPC).

There has been great interest in data-driven modelling techniques in academia and industry over the last two decades. The main drivers for data-driven approaches are the emergence of new technologies in sensing instrumentation, improved and secure data acquisition protocols, the ability to store large volumes of data in time-series databases, and advances in machine learning techniques. Data-driven modelling is also referred to as "system identification". Classic system identification methods have been used since the 1970s (for example as described in System Identification: Theory for the User by Ljung, 1999, published by PTR Prentice Hall, Upper Saddle River, NJ). They focus on identifying a linear dynamic model for the process and use the linear model for controller design.

As described in "Identification of nonlinear systems - a survey" by Billings et al published in IEE Proceedings D - Control Theory and Applications 126(6): 272-285, 1980, real processes typically have a nonlinear nature. Conventional feedforward Deep Neural Networks, also known as Multi-Layer Perceptrons (MLP), find a static nonlinear function of the process, that is, y = f(x). Here f is a static (i.e., memoryless) model and does not include any dynamic information. These static models can only describe steady-state behaviour and cannot represent transient (i.e., time-dependent) responses. On the other hand, nonlinear dynamic models can represent both the transient and steady-state behaviour of the process.

There have been recent developments to add dynamics to Deep Neural Networks. LSTM (Long Short-Term Memory) networks and ResNets (Residual Networks) are the two most well-known examples of such networks. LSTM and ResNet have many drawbacks (such as high memory usage) and are not able to capture the dynamics efficiently because of their structural limitations.

WO2020/214075 describes a method and corresponding systems and computer programs for evaluating and/or adapting one or more technical models related to an industrial and/or technical process. The method comprises obtaining (S1) a fully or partially acausal modular parameterized model of an industrial and/or technical process comprising at least one physical sub-model and at least one neural network sub-model, including one or more parameters of the parameterized model. The method further comprises generating (S2) a system of differential equations based on the parameterized model, and simulating (S3) the dynamics of one or more states of the industrial and/or technical process over time based on the system of differential equations. The method also comprises applying (S4) reverse-mode automatic differentiation with respect to the system of differential equations when simulating the industrial and/or technical process in order to generate an estimate representing an evaluation of the model of the industrial and/or technical process.

An improved system and method are thus desirable.

SUMMARY OF INVENTION

To address these problems, the present invention provides a computer-implemented method for modelling an industrial process, wherein the industrial process is a closed loop process comprising a controller. The method comprises: measuring, using at least one sensor, at least one input for the industrial process at an input time; and predicting at least one output of the industrial process at a subsequent time using the at least one measured input. Predicting the at least one output comprises using a hybrid neural network model to output a derivative of the at least one output at the input time, wherein the hybrid neural network model incorporates at least one neural network model and a first-principle model incorporating a dynamic model comprising at least one ordinary differential equation defining the rate of change over time of the at least one output as a function of the or each associated input; and inputting the derivative to an ordinary differential equation solver to predict at least one output at the subsequent time. The at least one neural network model comprises a first memoryless nonlinear block which is parametrized by a first vector and a second memoryless nonlinear block which is parametrized by a second vector. The first-principle model is parametrized by a third vector. A coefficient vector comprising the first, second and third vectors parametrizes the hybrid neural network model whereby the hybrid neural network model is fitted to training input and output observations from the industrial process. The hybrid neural network model further incorporates a control block for incorporating parameters of the controller into the hybrid neural network model. Using the hybrid neural network model comprises: receiving, at at least one (or both) of the first and second memoryless nonlinear blocks, a controller output from the control block; receiving, at the first memoryless nonlinear block, the at least one measured input; producing, by the first memoryless nonlinear block, a first intermediate output; receiving, at the second memoryless nonlinear block, an output of the industrial process at the input time; producing, by the second memoryless nonlinear block, a second intermediate output; receiving, at the first principle model, the first and second intermediate outputs; and producing, by the first principle model, the derivative of the at least one output at the input time.

According to another aspect of the invention, there is provided a data processing system for modelling an industrial process wherein the industrial process is a closed loop process. The system comprises at least one sensor for measuring at least one input for the industrial process; a controller; and a processor. The processor is configured to receive a measurement of the at least one input from the at least one sensor at an input time; implement a hybrid neural network model to output a derivative of the at least one output at the input time based on the received measurement, wherein the hybrid neural network model incorporates at least one neural network model and a first-principle model incorporating a dynamic model comprising an ordinary differential equation defining the rate of change over time of the at least one output as a function of the or each associated input; input the derivative to an ordinary differential equation solver to predict the at least one output at a subsequent time; and output the prediction of at least one output at the subsequent time using the at least one measured input. The at least one neural network model comprises a first memoryless nonlinear block which is parametrized by a first vector and a second memoryless nonlinear block which is parametrized by a second vector. The first-principle model is parametrized by a third vector. A coefficient vector comprising the first, second and third vectors parametrizes the hybrid neural network model whereby the hybrid neural network model is fitted to training input and output observations from the industrial process. The hybrid neural network model further incorporates a control block for incorporating parameters of the controller into the hybrid neural network model. Using the hybrid neural network model comprises: receiving, at at least one (or both) of the first and second memoryless nonlinear blocks, a controller output from the control block; receiving, at the first memoryless nonlinear block, the at least one measured input; producing, by the first memoryless nonlinear block, a first intermediate output; receiving, at the second memoryless nonlinear block, an output of the industrial process at the input time; producing, by the second memoryless nonlinear block, a second intermediate output, receiving, at the first principle model, the first and second intermediate outputs; and producing, by the first principle model, the derivative of the at least one output at the input time.

We also describe a computer-implemented method for modelling an industrial process, the method comprising: measuring, using at least one sensor, at least one input for the industrial process at an input time; and predicting at least one output of the industrial process at a subsequent time using the at least one measured input; wherein predicting the at least one output comprises using a hybrid neural network model to output a derivative of the at least one output at the input time, wherein the hybrid neural network model incorporates at least one neural network block and a first-principle block using a dynamic model comprising at least one ordinary differential equation defining the rate of change over time of the at least one output as a function of the or each associated input; and inputting the derivative to an ordinary differential equation solver to predict at least one output at the subsequent time. We also describe a data processing system for modelling an industrial process. The data processing system for modelling an industrial process may comprise at least one sensor for measuring at least one input for the industrial process; and a processor which is configured to receive a measurement of the at least one input from the at least one sensor at an input time; implement a neural network model to output a derivative of the at least one output at the input time based on the received measurement, wherein the neural network model incorporates at least one neural network block and a first-principle block incorporating a dynamic model comprising an ordinary differential equation defining the rate of change over time of the at least one output as a function of the or each associated input; input the derivative to an ordinary differential equation solver to predict the at least one output at a subsequent time; and output the prediction of the at least one output at the subsequent time using the at least one measured input.

The following features apply to both the computer-implemented method and the data processing system described above.

The measuring and predicting steps may be repeated for multiple iterations. In an initial iteration, the measuring step may include measuring an initial value of the at least one output at the initial step (i.e. time t=0). The predicting step may include using the at least one measured input and the measured initial value of the at least one output to predict the at least one output of the industrial process at a subsequent time. In subsequent iterations, the predicting step may use the at least one measured input and the predicted value of the at least one output which was predicted in a previous iteration. Alternatively, in subsequent iterations, the predicting step may use the at least one measured input and a measured value of the at least one output. In other words, the output of the industrial process which is input to the second memoryless nonlinear block may be selected from a measured output and a predicted output. Similarly, the output of the industrial process which is input to the control block may be selected from a measured output and a predicted output.
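As an illustration of this iterative use of the model, the following Python sketch rolls a one-step predictor forward over several iterations, feeding each prediction back in as the next iteration's output value. Here `predict_next` is a hypothetical stand-in for the hybrid model plus ODE solver described above, not the patented implementation.

```python
# Minimal sketch: multi-step rollout where subsequent iterations reuse the
# previous prediction as the "output at the input time".
def rollout(predict_next, inputs, y0):
    """inputs: list of measured inputs d_t; y0: measured initial output at t=0."""
    y = y0
    trajectory = [y0]
    for d in inputs:
        y = predict_next(d, y)   # feed the previous prediction back in
        trajectory.append(y)
    return trajectory

# toy usage with a dummy one-step predictor
if __name__ == "__main__":
    dummy = lambda d, y: 0.9 * y + 0.1 * d
    print(rollout(dummy, [1.0] * 5, y0=0.0))
```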

The dynamic model may be a linear or non-linear model. The first principle model may be termed a first principle block (and the terms may be used interchangeably). Similarly, the at least one neural network model may be termed a neural network block (and the terms may be used interchangeably). The first principle model receives an input from the at least one neural network model.

The at least one neural network model may comprise a first memoryless nonlinear block parametrized by a first vector and a second memoryless nonlinear block parametrized by a second vector. The first memoryless nonlinear block may receive the at least one measured input and may thus be termed an input neural block (and the terms may be used interchangeably). The first memoryless nonlinear block may output a first intermediate output. The second memoryless nonlinear block may receive the at least one output and may thus be termed an output neural block (and the terms may be used interchangeably). The second memoryless nonlinear block may output a second intermediate output. In the initial iteration, the output neural block may receive an initial value of the at least one output. In subsequent iterations, the output neural block may receive a value of the at least one output which was predicted in the previous iteration. In other words, the input neural block may implement a first memoryless nonlinear block parametrized by a first vector and the output neural block may implement a second memoryless nonlinear block parametrized by a second vector. Both the first and second intermediate outputs may be input to the first principle block. The hybrid neural network model may be expressed as:

w_t = f_H(u_t, α)
z_t = f_W^{-1}(y_t, β)
dy_t/dt = g(w_t, z_t, γ)
θ = [α, β, γ]

where w_t is the intermediate output from the first nonlinear block, z_t is the intermediate output from the second nonlinear block, u_t is the at least one measured input, y_t is the output of the industrial process at the input time t, f_H(.,α) and f_W^{-1}(.,β) are memoryless nonlinear blocks parametrized by the vectors α and β, respectively, g(.,γ) is the dynamic first-principle block which is parametrized by the vector γ, and θ is the coefficient vector that parametrizes the model to fit the input and output observations from the industrial process. The input and output observations (U_N and Y_N) are the measurements which are used to train the hybrid neural network model and may be termed a set of training observations.
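Purely as a hedged illustration of this block structure (not the patented implementation), the sketch below composes two small tanh layers standing in for f_H and f_W^{-1} with a first-order linear stand-in for g; the layer sizes, the exact form of g and all parameter values are assumptions.

```python
import numpy as np

def f_H(u_t, alpha):
    W, b = alpha
    return np.tanh(W @ u_t + b)          # first memoryless nonlinear block (input neural block)

def f_W_inv(y_t, beta):
    W, b = beta
    return np.tanh(W @ y_t + b)          # second memoryless nonlinear block (output neural block)

def g(w_t, z_t, gamma):
    Kp, tau = gamma
    return (Kp * w_t - z_t) / tau        # first-principle block: returns dy/dt at the input time

def hybrid_derivative(u_t, y_t, theta):
    alpha, beta, gamma = theta           # coefficient vector theta = [alpha, beta, gamma]
    return g(f_H(u_t, alpha), f_W_inv(y_t, beta), gamma)

# toy call with scalar-sized blocks
theta = ((np.ones((1, 1)), np.zeros(1)), (np.ones((1, 1)), np.zeros(1)), (2.0, 5.0))
print(hybrid_derivative(np.array([0.5]), np.array([0.1]), theta))
```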

The nonlinear block f_H may be considered similar to the first nonlinear block from the Hammerstein-Wiener model structure (which may also be termed the Hammerstein nonlinear block) and f_W^{-1} may be considered to be the inverse of f_W, the second nonlinear block from the Hammerstein-Wiener model structure (which may also be termed the Wiener nonlinear block). The combined Hammerstein-Wiener model thus comprises a dynamic linear element g(.,γ) sandwiched between two static nonlinear elements f_H and f_W. In the arrangement described above, both nonlinear blocks occur before the dynamic block g(.,γ) and thus the nonlinear block from the Wiener model f_W(.,β) may be considered to be inverted relative to its order in the Hammerstein-Wiener model; it is thus represented in the expressions above as f_W^{-1}(.,β).

The industrial process is a closed loop process comprising at least one controller, for example a PID (proportional-integral-derivative) controller. The hybrid neural network model further incorporates a control block (which may be another nonlinear neural network block). Alternatively, the controller parameters may be known and incorporated in the hybrid neural network model. In the closed loop process, measuring at least one input may comprise measuring at least one of a setpoint and a disturbance. A setpoint may be defined as the desired value of the measurement and may alternatively be termed a reference value. The controller may regulate the measurement to the setpoint by changing the input. A disturbance may be defined as any independent (i.e. not controlled) variable that affects the process operation. The effect of the disturbances may be undesirable and may be counteracted by control (regulation). The disturbance may be measurable or unmeasured. The hybrid neural network model for a closed loop process may be defined as:

u_t = c(y_t, r_t)
w_t = f_H(u_t, d_t, α)
z_t = f_W^{-1}(y_t, u_t, d_t, β)
dy_t/dt = g(w_t, z_t, γ)
θ = [α, β, γ]

where w_t is the first intermediate output, z_t is the second intermediate output, u_t is a controller output from the control block, y_t is an output of the industrial process at the input time, d_t is the input disturbance and the at least one measured input, r_t is the input setpoint and the at least one measured input, c(.) is the control block with fixed known parameters, f_H(.,α) and f_W^{-1}(.,β) are the first and second memoryless nonlinear blocks parametrized by the first and second vectors α and β, respectively, g(.,γ) is the first-principle model which is parametrized by the third vector γ, and θ is the coefficient vector that parametrizes the hybrid neural network model to fit the training data (i.e. at least one predicted output to at least one measurement from the actual process).
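A minimal numerical sketch of the closed-loop variant is given below, assuming a simple proportional-integral law for the control block c(.) and toy stand-ins for the other blocks; none of the functions, shapes or gains are taken from the patent.

```python
import numpy as np

# Hypothetical stand-ins for the three blocks; illustrative only.
f_H     = lambda u, d, a: np.tanh(a[0] * u + a[1] * d)                 # first nonlinear block
f_W_inv = lambda y, u, d, b: np.tanh(b[0] * y + b[1] * u + b[2] * d)   # second nonlinear block
g       = lambda w, z, c: (c[0] * w - z) / c[1]                        # first-principle ODE right-hand side

def closed_loop_derivative(r_t, d_t, y_t, integral, alpha, beta, gamma, Kc=1.0, Ki=0.1):
    e = r_t - y_t                       # control error
    integral = integral + e             # integral term of the PI controller c(.)
    u_t = Kc * e + Ki * integral        # controller output fed to both nonlinear blocks
    w_t = f_H(u_t, d_t, alpha)          # first intermediate output
    z_t = f_W_inv(y_t, u_t, d_t, beta)  # second intermediate output
    return g(w_t, z_t, gamma), integral, u_t

dy, I, u = closed_loop_derivative(1.0, 0.0, 0.2, 0.0,
                                  alpha=[1.0, 0.5], beta=[1.0, 0.3, 0.2], gamma=[2.0, 5.0])
print(dy, u)
```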

The dynamic model in the first-principle block may be a first order or a second order dynamic model defined by at least one ordinary differential equation. For example, a first order dynamic model may be of the form:

τ dz(t)/dt = -z(t) + K_p w(t - θ_p)

where w_t is the input to the model, z_t is the output from the model, K_p is the steady-state gain, τ is the time constant, and θ_p is the time delay. The dynamic model may thus include steady-state process gains and time constants. Where there are multiple outputs from the industrial process, additional terms may be added to represent interaction between outputs from the industrial process.
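For illustration only, a first-order-plus-time-delay derivative of this kind could be coded as follows, with the delay approximated by indexing into a stored history of the block input w; the function name and all parameter values are illustrative assumptions.

```python
# Minimal sketch of a first-order-plus-time-delay (FOPTD) dynamic model.
def foptd_derivative(z_t, w_history, t_index, Kp, tau, delay_steps):
    """dz/dt = (Kp * w(t - theta_p) - z(t)) / tau, with theta_p = delay_steps * dt."""
    w_delayed = w_history[max(t_index - delay_steps, 0)]
    return (Kp * w_delayed - z_t) / tau

# toy usage: constant input history, one-sample delay
print(foptd_derivative(0.0, [1.0, 1.0, 1.0], t_index=2, Kp=2.0, tau=5.0, delay_steps=1))
```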

The hybrid neural network model may be trained before predicting the at least one output. The system may further comprise a memory for storing training data for training the hybrid neural network model. The storage may be any suitable memory, e.g. non-volatile or volatile memory. The storage may be local to the processor, e.g. located within the same system, or may be remote from the processor, e.g. at a different location such as the cloud.

The hybrid neural network model outputs a derivative of the at least one output at the input time and the derivative is input to an ordinary differential equation solver to predict at least one output. The underlying ODE of the dynamic model may be considered to be directly encoded in the hybrid neural network model during training. After training, the hybrid neural network model may take any point y(t) as input to predict the next point y(t+Δt) after time Δt by taking steps in the ODE solver, based on the information about the derivative encoded in the neural network model. The prediction may be done by outputting a derivative dy/dt(t_j) at the input point y(t_j) and inputting this derivative to the ODE solver so that the final output is y(t_{j+1}). In other words, a previous prediction for y(t) at an input time can be used to predict the next time point y(t+Δt). The initial condition of y(t) at t=0 may be measured or otherwise input. The output of the industrial process which is input to the second memoryless nonlinear block and/or the control block may be selected from a measured output and a predicted output.
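The stepping idea can be sketched as follows, assuming a fixed-step forward Euler scheme purely for illustration (the embodiment may use any suitable solver); the derivative function here is an arbitrary toy stand-in for the trained model.

```python
# Sketch: step a previous prediction y(t) forward to y(t + dt) using the
# derivative produced by the model at the input time.
def euler_step(derivative_fn, y_t, u_t, dt):
    return y_t + dt * derivative_fn(y_t, u_t)

# toy usage: dy/dt = -(y - u), stepping from a measured initial condition y(0) = 0
y = 0.0
for _ in range(10):
    y = euler_step(lambda y_, u_: -(y_ - u_), y, 1.0, dt=0.1)
print(y)
```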

The ODE solver may have an adaptive step size, whereby it is possible to handle time-series data with irregular time stamps. For example, the adaptive time step may be t_j and the output from the neural network model may be expressed as:

As an example, the industrial process may be a gravity separation process for separating oil, water and gas. Measuring the at least one output may comprise measuring the outputs: oil level, water level and gas pressure. The dynamic model may comprise six inputs including three disturbances (oil inflow, water inflow and gas inflow) and three setpoints (setpoints for oil level, water level and gas pressure). The dynamic model may include steady-state process gains and time constants together with the effects of the disturbances and additional terms to represent interaction between the different outputs. For example, the dynamic model may be defined by the following ordinary differential equations: where K_po, K_pg and K_pw are the steady-state process gains with respect to the control inputs u_oil, u_gas and u_water; τ_oil, τ_gas and τ_water are the time constants; K_do, K_dg and K_dw represent the effect of the input disturbances d_oil, d_gas and d_water; K_wo and K_go represent the effects of the water and gas outputs y_water and y_gas when modelling the derivative for the oil output y_oil; K_ow and K_gw represent the effects of the oil and gas outputs y_oil and y_gas when modelling the derivative for the water output y_water; and K_og and K_wg represent the effects of the oil and water outputs y_oil and y_water when modelling the derivative for the gas output y_gas.
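As a hedged sketch only, a coupled three-output linear model of the general kind described here might be written as dy/dt = (-y + K_p u + K_d d + K_x y) / τ; the grouping of terms, sign conventions and all numbers below are assumptions for illustration rather than the equations of the patent figures.

```python
import numpy as np

# One plausible reading of a coupled separator-style model, not the patented equations.
def separator_derivative(y, u, d, K_p, K_d, K_x, tau):
    """y, u, d: [oil, water, gas] outputs, control inputs and disturbances;
    K_p, K_d: per-channel gains; K_x: 3x3 interaction matrix (zero diagonal); tau: time constants."""
    return (-y + K_p * u + K_d * d + K_x @ y) / tau

y = np.array([1.0, 0.5, 20.0])          # oil level, water level, gas pressure (toy values)
u = np.array([0.4, 0.4, 0.6])
d = np.array([10.0, 8.0, 2.0])
print(separator_derivative(y, u, d,
                           K_p=np.array([0.5, 0.5, 1.0]),
                           K_d=np.array([0.1, 0.1, 0.2]),
                           K_x=np.array([[0, .05, .02], [.04, 0, .03], [.01, .02, 0]]),
                           tau=np.array([30.0, 30.0, 10.0])))
```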

As another example, the industrial process may be controlling a robotic arm having two connected links. Measuring the at least one output may comprise measuring the angles of the two links and/or the velocities of the two links. The angle of the first link may be measured as the angle between a long axis of the first link and a reference line (e.g. a line parallel to a work surface on which the arm is situated). The angle of the second link may be measured as the angle between a long axis of the second link and the long axis of the first link. The dynamic model of the first-principle block may comprise two inputs: two torques, one for each of the links. The dynamic model may include steady-state process gains and time constants together with additional terms to represent interaction between the different outputs. For example, the dynamic model may be defined by the following ordinary differential equations: where q_1 and q_2 are the angles of the first and second links respectively, ω_1 and ω_2 are the velocities of the first and second links respectively, k_p1 and k_p2 are the steady-state process gains with respect to the control inputs (torques), τ_s1 and τ_s2 are the second order time constants and ζ_1 and ζ_2 are the damping factors. There are additional terms to account for the interactions between the two links, namely c_12, c_21, k_12 and k_21.

According to another aspect of the invention, there is provided a (non-transitory) computer readable medium carrying processor control code which when implemented in a system causes the system to carry out the method described above.

The foregoing has outlined rather broadly the features and technical advantages of the present disclosure so that those skilled in the art may better understand the detailed description that follows. Additional features and advantages of the disclosure will be described hereinafter that form the subject of the claims. Those skilled in the art will appreciate that they may readily use the conception and the specific embodiment disclosed as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. Those skilled in the art will also realize that such equivalent constructions do not depart from the spirit and scope of the disclosure in its broadest form.

Before undertaking the DETAILED DESCRIPTION below, it may be advantageous to set forth definitions of certain words or phrases used throughout this patent document: the terms "include" and "comprise," as well as derivatives thereof, mean inclusion without limitation; the term "or" is inclusive, meaning and/or; the phrases "associated with" and "associated therewith," as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like; and the term "controller" means any device, system or part thereof that controls at least one operation, whether such a device is implemented in hardware, firmware, software or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Definitions for certain words and phrases are provided throughout this patent document, and those of ordinary skill in the art will understand that such definitions apply in many, if not most, instances to prior as well as future uses of such defined words and phrases. While some terms may include a wide variety of embodiments, the appended claims may expressly limit these terms to specific embodiments.

BRIEF DESCRIPTION OF THE DRAWINGS

The above-mentioned attributes and other features and advantages of this invention and the manner of attaining them will become more apparent and the invention itself will be better understood by reference to the following description of embodiments of the invention taken in conjunction with the accompanying drawings, wherein like numbers designate like objects, and in which

FIG. 1 shows a schematic block diagram of a system in which an embodiment can be implemented;

FIG. 2 shows a schematic block diagram of elements of the system of FIG. 1;

FIG. 3 shows a schematic block diagram of an alternative arrangement of elements of the system of FIG. 1;

FIG. 4a shows a schematic block diagram representation of the Hammerstein-Wiener model which is adapted for the system of FIG. 1;

FIG. 4b compares a schematic block diagram of a ResNet and the neural ordinary differential equation (NODE) which is adapted for the system of FIG. 1;

FIG. 5 is a schematic diagram of a separator which can be modelled using the system of FIG. 1;

FIG. 6 is a schematic diagram of the separator of FIG. 5 as modelled by a standard model;

FIGs. 7a, 7b and 7c are graphs plotting the variation in oil level (m), control signal and oil inflow (kg/s) with time for the training data for the oil level controller in FIG. 5 or 6;

FIGs. 8a, 8b and 8c are graphs plotting the variation in water level (m), control signal and water inflow (kg/s) with time for the training data for the water level controller in FIG. 5 or 6;

FIGs. 9a, 9b and 9c are graphs plotting the variation in gas pressure (bar), control signal and gas inflow (kg/s) with time for the training data for the gas level controller in FIG. 5 or 6;

FIGs. 10a, 10b and 10c are graphs plotting the variation in oil level (m), control signal and oil inflow (kg/s) with time for the validation data for the oil level controller in FIG. 5 or 6;

FIGs. 11a, 11b and 11c are graphs plotting the variation in water level (m), control signal and water inflow (kg/s) with time for the validation data for the water level controller in FIG. 5 or 6;

FIGs. 12a, 12b and 12c are graphs plotting the variation in gas pressure (bar), control signal and gas inflow (kg/s) with time for the validation data for the gas level controller in FIG. 5 or 6;

FIG. 13 is a flowchart of a method which may be implemented in the system of FIG. 1 ;

FIG. 14 is a schematic illustration showing a two degrees-of-freedom robotic arm;

FIG. 15 is a schematic illustration of the robotic arm of FIG. 14 simulated in MATLAB;

FIGs. 16a, 16b and 16c are graphs plotting the variation in angle (rad), velocity (Rad/s) and torque (N.m) with time for the training data for the control of a first link in the robotic arm in FIG. 14 or 15;

FIGs. 17a, 17b and 17c are graphs plotting the variation in angle (rad), velocity (Rad/s) and torque (N.m) with time for the training data for the control of a second link in the robotic arm in FIG. 14 or 15;

FIGs. 18a, 18b and 18c are graphs plotting the variation in angle (rad), velocity (rad/s) and torque (N.m) with time for the validation data for the control of a first link in the robotic arm in FIG. 14 or 15;

FIGs. 19a, 19b and 19c are graphs plotting the variation in angle (rad), velocity (Rad/s) and torque (N.m) with time for the validation data for the control of a second link in the robotic arm in FIG. 14 or 15, and

FIG. 20 shows the trajectories of the robotic arm of FIG. 14 as predicted by the various models.

DETAILED DESCRIPTION OF INVENTION

FIGURES 1 through 19c, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged device. The numerous innovative teachings of the present application will be described with reference to exemplary non-limiting embodiments.

Figure 1 is a schematic block diagram illustrating the components of a system implementing an industrial process. The system comprises a computing device 10 which may perform the methods described below. One or more sensors 50 capture one or more measurements which are sent to the computing device 10 for use in the method as described below. The outputs may be input to a controller such as a PID (proportional-integral-derivative) controller 40 or to a user 70 via any suitable user interface 72, e.g. a screen on a computer or other electronic device. PID controllers are typically used in industrial processes having a control loop feedback mechanism. The computing device 10 may also be connected to a database 80, which stores for example the training data 82 and the hybrid neural network model(s) 84 which define the industrial processes and which are implemented on the neural network engine. As explained below, the hybrid neural network model combines a model of the physics of each industrial process (e.g. one or more ordinary differential equations) into an adaptation of the Hammerstein-Wiener model.

The computing device 10 may be formed from one or more servers and the steps (or tasks) in the method described below may be split across the one or more servers or the cloud. The computing device 10 may include one or more processors 12, one or more memory devices (generically referred to herein as memory 14), one or more input/output ("I/O") interface(s) 16, one or more data ports 18, and data storage 20. The computing device 10 may further include one or more buses 32 that functionally couple various components of the computing device 10.

The data storage 20 may store one or more operating systems (O/S) 22; and one or more program modules, applications, engines, computer-executable code, scripts, or the like such as, for example, a neural network engine incorporating a physical model as described below to form a physics-informed neural network 24 and an ordinary differential equation (ODE) solver 26. The neural network 24 together with the ODE solver may be considered to be a neural ODE formulation.

Any of the components depicted as being stored in data storage 20 may include any combination of software, firmware, and/or hardware. The software and/or firmware may include computer-executable code, instructions, or the like that may be loaded into the memory 14 for execution by one or more of the processor(s) 12 to perform any of the operations described below in connection with correspondingly named engines. For example, the processor(s) 12 may be configured to execute computer-executable instructions of the various program modules, applications, engines, or the like of the system to cause or facilitate various operations described below.

The processor(s) 12 may include any suitable processing unit capable of accepting data as input, processing the input data in accordance with stored computer-executable instructions, and generating output data. The processor(s) 12 may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth. Further, the processor(s) 12 may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like.

Referring to other illustrative components of the computing device, the memory 14 of the computing device 10 may include volatile memory (memory that maintains its state when supplied with power) such as random access memory (RAM) and/or non-volatile memory (memory that maintains its state even when not supplied with power) such as read-only memory (ROM), flash memory, ferroelectric RAM (FRAM), and so forth. In various implementations, the memory 14 may include multiple different types of memory such as various types of static random access memory (SRAM), various types of dynamic random access memory (DRAM), various types of unalterable ROM, and/or writeable variants of ROM such as electrically erasable programmable read-only memory (EEPROM), flash memory, and so forth. The memory 14 may include main memory as well as various forms of cache memory.

The input/output (I/O) interface(s) 16 may facilitate the receipt of input information by the computing device 10 from one or more I/O devices (e.g. the sensor(s)) as well as the output of information from the computing device 10 to the one or more I/O devices (e.g. the PID controller(s)). The computing device 10 may communicate with any of the processing modules or the database 80 via the one or more data ports 18. The bus(es) 32 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit the exchange of information (e.g., data (including computer-executable code), signalling, etc.) between various components of the computing device 10. The bus(es) 32 may be associated with any suitable bus architecture.

The O/S 22 may include a set of computer-executable instructions for managing hardware resources of the system and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the O/S 22 may control execution of one or more of the program modules depicted as being stored in the data storage 20. The data storage 20 and/or the database 80 may include removable storage and/or non-removable storage. The data storage 20 may store computer-executable code, instructions, or the like that may be loadable into the memory 14 and executable by the processor(s) 12 to cause the processor(s) 12 to perform or initiate various operations. The data storage 20 may additionally store data that may be copied to memory 14 for use by the processor(s) 12 during the execution of the computer-executable instructions. Moreover, output data generated as a result of execution of the computer-executable instructions by the processor(s) 12 may be stored initially in memory 14, and may ultimately be copied to data storage 20 or database 80.

Those of ordinary skill in the art will appreciate that the hardware depicted in Figure 1 may vary for particular implementations. The depicted example is provided for the purpose of explanation only and is not meant to imply architectural limitations with respect to the present disclosure.

Figure 2 is a more detailed block diagram of one variant of a new hybrid artificial neural network (ANN) approach which may be used to model an industrial process. The ANN comprises a hybrid neural network model 200 implemented on a plurality of neural network models, a first-principle model 206 and an ordinary differential equation (ODE) solver block 220. The new ANN may be termed a hybrid NODE (neural ordinary differential equation) approach because it comprises a plurality of neural models and a first principle model. Specifically, the neural ODE network is used to identify a Hammerstein-Wiener model. The ODE solver block may be any suitable block, including for example a block which uses Euler methods or Runge-Kutta methods. The skilled person will appreciate that the terms "model" and "block" may be used interchangeably. Therefore, "the plurality of neural network models" may be termed "the plurality of neural network blocks", and the "first-principle model" may be termed the "first-principle block", etc.

The hybrid neural network model blocks include a first nonlinear block 202, a second nonlinear block 204 and a first principle model 206. In this arrangement, the industrial process being modelled is being operated in closed loop conditions with at least one PID (proportional integral derivative) controller or similar controller. Accordingly, the data driven model must account for the controllers in the industrial process. The hybrid neural network model blocks thus include a PID control block 208. In this example, the controller parameters may be known and incorporated in the hybrid neural network model by use of the PID control block 208.

The model identification may generally be defined as the problem of using N-point data measurements of inputs U_N = {u_1, u_2, ..., u_N} and outputs Y_N = {y_1, y_2, ..., y_N} to estimate a coefficient vector θ that parametrizes the hybrid neural network model to fit the U_N and Y_N observations (i.e. to fit the set of training observations). Once the hybrid neural network model has been trained to fit the observations, the hybrid neural network model may then be used to predict future behaviour of the industrial process and in particular the output u from the controller.
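The identification problem can be sketched as follows; a one-parameter-per-block toy model and plain finite-difference gradient descent are used only to keep the example self-contained and are not the patented training scheme.

```python
import numpy as np

# Choose theta so that simulated outputs match the N-point observations (U_N, Y_N).
def simulate(theta, U, y0, dt=1.0):
    a, b, Kp, tau = theta
    y, Y_hat = y0, []
    for u in U:
        w = np.tanh(a * u)                 # stand-in for f_H
        z = np.tanh(b * y)                 # stand-in for f_W^-1
        y = y + dt * (Kp * w - z) / tau    # Euler step on the first-principle block g
        Y_hat.append(y)
    return np.array(Y_hat)

def loss(theta, U, Y, y0):
    return np.mean((simulate(theta, U, y0) - Y) ** 2)

U = np.sin(np.linspace(0.0, 10.0, 50))                 # toy input record U_N
Y = simulate(np.array([1.0, 1.0, 2.0, 5.0]), U, 0.0)   # toy output record Y_N
theta = np.array([0.5, 0.5, 1.0, 3.0])                 # initial guess for the coefficient vector
print("initial loss:", loss(theta, U, Y, 0.0))
for _ in range(200):                                   # plain finite-difference gradient descent
    grad = np.array([(loss(theta + 1e-4 * e, U, Y, 0.0) - loss(theta, U, Y, 0.0)) / 1e-4
                     for e in np.eye(4)])
    theta -= 0.05 * grad
print("final loss:  ", loss(theta, U, Y, 0.0))
```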

In this arrangement, the controller sets the inputs u_t and thus, as illustrated, the inputs to the system which may be measured are the setpoints r_t and disturbances d_t together with the measurements of the output at the input time, y_t. As shown, the PID control block 208 receives the inputs r_t and y_t. A PID controller is a well-known type of controller which is defined by the following equations:

e(t) = r(t) - y(t)
u(t) = K_p e(t) + K_i ∫ e(t) dt + K_d de(t)/dt

where e(t) is the control error, r(t) is the setpoint, y(t) is the measured output and K_p, K_i and K_d are the proportional, integral and derivative gains.
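For illustration, a textbook discrete-time implementation of such a control law might look like the following sketch; the gains and sample time are illustrative values only, not parameters from the patent.

```python
# Minimal discrete-time PID step matching the equations above.
class PID:
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, r_t, y_t):
        e = r_t - y_t                                   # error between setpoint and output
        self.integral += e * self.dt                    # integral of the error
        derivative = (e - self.prev_error) / self.dt    # derivative of the error
        self.prev_error = e
        return self.Kp * e + self.Ki * self.integral + self.Kd * derivative

pid = PID(Kp=2.0, Ki=0.5, Kd=0.1, dt=0.1)
print(pid.step(r_t=1.0, y_t=0.2))
```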

The output u_t from the PID control block 208 is input to both the first nonlinear block 202 and the second nonlinear block 204. The first nonlinear block 202 also receives d_t as an input and thus may be termed an input neural block because it receives the measured input. The second nonlinear block 204 receives y_t and d_t as inputs and thus may be termed an output neural block because it receives the output at the input time. Thus, the terms first and second nonlinear blocks and input and output neural blocks may be used interchangeably. The first nonlinear block 202 and second nonlinear block 204 are nonlinear blocks which may be modelled by feedforward dense neural networks. Feedforward neural networks are the simplest form of networks where connections between the nodes do not form a cycle (for example as described in "Deep learning in neural networks: An overview" by Schmidhuber published in Neural Networks 61: 85-117, 2015). This class of neural networks is also referred to as a 'vanilla' neural network. The first nonlinear block and the second nonlinear block may also be modelled using other types of neural networks such as LSTM and convolutional networks. The outputs from the first nonlinear block 202 and the second nonlinear block 204 are w_t and z_t respectively. These outputs (which may also be termed intermediate outputs of the neural network) are input to the first principle model 206 which may represent a linear or non-linear model which is defined by at least one ordinary differential equation (ODE). The overall model structure may be defined as follows:

u_t = c(y_t, r_t)
w_t = f_H(u_t, d_t, α)
z_t = f_W^{-1}(y_t, u_t, d_t, β)
dy_t/dt = g(w_t, z_t, γ)
θ = [α, β, γ]

where w_t is the first intermediate output, z_t is the second intermediate output, u_t is the output from the controller, y_t is the output, r_t is the input setpoint and d_t is the input disturbance and the at least one measured input, c(.) is the control block with fixed known parameters, f_H(.,α) and f_W^{-1}(.,β) are memoryless nonlinear blocks parametrized by the vectors α and β, respectively, g(.,γ) is the first-principle model which is parametrized by the vector γ and derived as explained below, and θ is the coefficient vector that parametrizes the model to fit the U_N and Y_N observations. The first nonlinear block f_H may be considered similar to the nonlinear block from the Hammerstein-Wiener model structure and may be termed a Hammerstein Neural Network NN_H. The second nonlinear block f_W^{-1} may be considered to be the inverse of the f_W block from the Hammerstein-Wiener model structure and may be termed a Wiener Neural Network NN_W.

The neural network model outputs the derivative of the output at the input time to the ODE solver 220. As explained with reference to Figure 4b, the ODE solver 220 takes steps from an input y_j at time t_j to an output y_{j+1} at time t_{j+1}, with the derivative at each point fed to the ODE solver 220. The ANN block comprising the first nonlinear block, the second nonlinear block and the first principle model learns the local derivative dy/dt(t_j) at the input y(t_j) which is necessary for the ODE solver 220 to step to the output y(t_{j+1}). This allows the hybrid NODE approach shown in Figure 2 to be used for modelling dynamical systems.
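As an illustration of handing the learned derivative to an off-the-shelf adaptive-step solver, the sketch below uses SciPy's RK45 integrator with a toy stand-in for the trained hybrid block; the derivative function, input value and time stamps are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-in for the trained hybrid block: returns dy/dt given y (and a fixed input u).
def learned_derivative(t, y, u=1.0, Kp=2.0, tau=5.0):
    return (Kp * np.tanh(u) - y) / tau

t_j, t_next, y_j = 0.0, 0.7, np.array([0.0])         # irregular time stamps are handled naturally
sol = solve_ivp(learned_derivative, (t_j, t_next), y_j, method="RK45")
print(sol.y[:, -1])                                   # prediction y(t_{j+1})
```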

Figure 3 is a detailed block diagram of an alternative variant of a new artificial neural network (ANN), also termed a hybrid NODE approach, which may be used to model an industrial process. The same reference numbers are used for the same blocks as in Figure 2. The ANN comprises a neural network 300 including a first nonlinear block 202, a second nonlinear block 204 and a first principle model 206. The output from the neural network 300 is input to an ODE solver block 220. In this arrangement, the industrial process being modelled is not operated in closed loop conditions and thus, in contrast to the arrangement in Figure 2, there is no PID control block 208. As with Figure 2, the first and second nonlinear blocks may be termed the input neural block and the output neural block and the terms may be used interchangeably. The terms model and block may also be used interchangeably.

The dynamic model structure is simpler than the one in Figure 2 and may be expressed as an ordinary differential equation:

dy_t/dt = f(y_t, u_t)

where y_t is the output, u_t is the input and f is a function modelling the system which fits the measured inputs and outputs. In this arrangement, there is no controller fixing the inputs u_t and thus the hybrid neural network model structure is simpler than the one shown above. For example, the hybrid neural network model may be expressed in more detail as:

w_t = f_H(u_t, α)
z_t = f_W^{-1}(y_t, β)
dy_t/dt = g(w_t, z_t, γ)
θ = [α, β, γ]

where w_t is the first intermediate output, z_t is the second intermediate output, u_t is an input from the controller or a manual input given by a human (operator) or another system, y_t is the output of the industrial process at the input time t, f_H(.,α) and f_W^{-1}(.,β) are the first and second memoryless nonlinear blocks parametrized by the vectors α and β, respectively, g(.,γ) is the model in the first principle block which is parametrized by the vector γ and derived as explained below, and θ is the coefficient vector that parametrizes the model to fit the U_N and Y_N observations. The f_H(.,α) and f_W^{-1}(.,β) blocks are the same as the blocks in Figure 2.

In both Figures 2 and 3, the new model structure is a Hammerstein-Wiener-like model in which the inverse of the function block f_W is used and the linear block g is replaced with a general first principle block. The new hybrid neural network model structure accounts for the interactions between the inputs and outputs which is particularly useful when known disturbances are included as additional inputs.

Figure 4a is a block diagram representation of a Hammerstein-Wiener model which is adapted and then incorporated in the arrangements shown in Figures 2 and 3. Block-oriented models are widely used for nonlinear systems. The Hammerstein model consists of a static nonlinear element followed by a linear dynamic part. The Wiener model is the reverse of this combination so that the linear element occurs before the static nonlinear characteristic. A combination of the two models, namely the Hammerstein-Wiener model, consists of a dynamic linear element sandwiched between two static nonlinear elements as described for example in "Lecture notes in control and information sciences: vol 404: Block-oriented nonlinear system identification" by Giri et al published by Springer in 2010.

The identification of the Hammerstein-Wiener model has been an active topic in academia since the 1980s (for example as described in "Identification of systems containing linear dynamic and static nonlinear elements" by Billings et al published in Automatica 18(1): 15-26, 1980 and "Identification of Hammerstein-Wiener models" by Wills et al published in Automatica 49(1): 70-81, 2013). The Hammerstein-Wiener model may be expressed as:

w_t = f_H(u_t, α)
z_t = g(w_t, γ)
y_t = f_W(z_t, β)

where u_t is the input, y_t the output, f_H and f_W are memoryless nonlinear blocks, g is a linear block and these blocks are respectively parametrized by the vectors α, β and γ. Note that the modelling errors and measurement noise in the model structure have been ignored for simplicity.
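For comparison, the classical Hammerstein-Wiener structure can be sketched as follows, with illustrative choices for the two static nonlinearities and a first-order linear dynamic element; all functions and parameters are assumptions made only for this example.

```python
import numpy as np

# Sketch of the classical Hammerstein-Wiener chain: f_H -> g (linear dynamics) -> f_W.
def hammerstein_wiener(U, alpha=1.5, Kp=2.0, tau=5.0, beta=0.8, dt=0.1):
    z, Y = 0.0, []
    for u in U:
        w = np.tanh(alpha * u)             # f_H: static input nonlinearity
        z += dt * (Kp * w - z) / tau       # g: first-order linear dynamic element (Euler step)
        Y.append(np.tanh(beta * z))        # f_W: static output nonlinearity
    return np.array(Y)

print(hammerstein_wiener(np.ones(50))[-1])
```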

In the Hammerstein model, the static nonlinear element f_H(·,α) is followed by a linear dynamic part g(·,γ). In the Wiener model, the linear dynamic part g(·,γ) occurs before the static nonlinear element f_W(·,β). The combined Hammerstein-Wiener model thus comprises a dynamic linear element g(·,γ) sandwiched between two static nonlinear elements. In the arrangement of Figures 2 and 3, both the nonlinear blocks occur before the dynamic block g(·,γ) and thus the nonlinear block from the Wiener model f_W(·,β) may be considered to be inverted relative to the order in the Hammerstein-Wiener model and is thus represented in the expressions above as f_W^{-1}(·,β).

In the arrangements of Figures 2 and 3, the linear block g(·,γ) in the Hammerstein-Wiener model structure is replaced with a more general first-principle model which may be a linear or nonlinear model expressed as an ordinary differential equation (ODE). In the case of using a linear first-principle model, the block g(·,γ) is modelled using a linear time-invariant (LTI) system. For instance, a first-order or a second-order dynamic model can be used. The first-order plus time delay (FOPTD) model is the simplest form of the dynamic model and is widely used in process engineering. The FOPTD model is defined by the following ODE:

τ_P dz(t)/dt = -z(t) + K_P w(t - θ_P)

where K_P is the steady-state gain, τ_P is the time constant, and θ_P is the time delay. In the Laplace domain, the first-order system is a transfer function:

G(s) = K_P e^{-θ_P s} / (τ_P s + 1)
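Merely as a non-limiting illustration, the following minimal sketch simulates the step response of the FOPTD model above using forward Euler integration. The parameter values and the zero input before the delay are assumptions made purely for illustration and are not taken from the description.

import numpy as np

# Hypothetical FOPTD parameters (illustration only)
K_p, tau_p, theta_p = 2.0, 5.0, 1.0   # gain, time constant, time delay
dt, T = 0.01, 30.0                     # Euler step size and simulation horizon

n = int(T / dt)
delay = int(theta_p / dt)
w = np.ones(n)         # unit step input w(t) = 1 for t >= 0
z = np.zeros(n)

# Forward-Euler integration of  tau_p * dz/dt = -z + K_p * w(t - theta_p)
for k in range(n - 1):
    w_delayed = w[k - delay] if k >= delay else 0.0
    dzdt = (-z[k] + K_p * w_delayed) / tau_p
    z[k + 1] = z[k] + dt * dzdt

print(z[-1])   # approaches the steady-state gain K_p for a unit step

For a unit step input, the output settles towards K_P after a few time constants, which is the expected FOPTD behaviour.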

The second-order system, which is a common description of many dynamic processes, is defined as follows:

τ_S² d²z(t)/dt² + 2ζτ_S dz(t)/dt + z(t) = K_P w(t - θ_P)

This second-order differential equation has output z(t) and four unknown parameters. The four parameters are the gain K_P, the damping factor ζ, the second-order time constant τ_S, and the dead time θ_P. The transfer function for the second-order model is of the form:

G(s) = K_P e^{-θ_P s} / (τ_S² s² + 2ζτ_S s + 1)

The second-order differential equation can be split into two first-order differential equations, which is referred to as the state-space form:

dx_1(t)/dt = x_2(t)
dx_2(t)/dt = (K_P w(t - θ_P) - x_1(t) - 2ζτ_S x_2(t)) / τ_S²

where x_1 is the measured output z(t) and x_2 (i.e. the derivative of z(t)) is a helper state variable.
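Merely as a non-limiting illustration, the state-space form can be integrated with an off-the-shelf ODE solver. The sketch below uses SciPy's solve_ivp with hypothetical parameter values and omits the dead time for simplicity; these choices are assumptions for illustration only.

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical second-order parameters (dead time omitted for simplicity)
K_p, tau_s, zeta = 1.5, 2.0, 0.7

def rhs(t, x, w):
    # x[0] = z(t), x[1] = dz/dt (helper state variable)
    x1, x2 = x
    dx1 = x2
    dx2 = (K_p * w - x1 - 2.0 * zeta * tau_s * x2) / tau_s**2
    return [dx1, dx2]

sol = solve_ivp(rhs, t_span=(0.0, 20.0), y0=[0.0, 0.0], args=(1.0,), max_step=0.05)
print(sol.y[0, -1])   # settles near K_p for a constant unit input w = 1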

Figure 4b compares a schematic representation of the NODE approach used in Figure 2 or Figure 3 with a known residual network (ResNet) (for example as described in "Deep Residual Learning for Image Recognition" by He et al published in Computer Vision and Pattern Recognition in 2016). A residual network is a deep neural network formed from stacking simple residual blocks which contain identity skip connections that bypass the residual layers. A residual block 400, as shown on the left of Figure 4b, can be written as:

y_{j+1} = y_j + G(y_j, θ_j)   for j = 0, ..., N - 1

where y_j is the feature map at the jth layer, θ_j represents the jth layer's network parameters, and G is referred to as a residual module which in this example consists of two convolution layers 402. Without losing generality, a parameter h can be added so that the residual module can be rewritten as G = hF (for example, as described in "Multi-Level Residual Networks from Dynamical Systems View" by Change et al published in the Sixth International Conference on Learning Representations in 2018). The residual block becomes:

y_{j+1} = y_j + hF(y_j, θ_j)

which can be rewritten as:

(y_{j+1} - y_j) / h = F(y_j, θ_j)

For a sufficiently small h, the above equation may be regarded as a forward Euler discretization of the initial value ODE:

dy(t)/dt = F(y(t), θ(t)),   y(0) = y_0,   for 0 ≤ t ≤ T

where time t corresponds to the direction from input to output, y(0) is the input feature map and y(T) is the output feature map. Thus, the problem of learning the network parameters θ is equivalent to solving a parameter estimation problem or optimal control problem involving the ODE in the equation above.

The new parameter h is called the step size of the discretization. In the original formulation of the equation for ResNet, h does not exist and is implicitly absorbed by the residual module G; h may therefore be called the implicit step size. ResNets equally discretize [0, T] using time points T_0, T_1, ..., T_j, ..., T_d, where T_0 = 0, T_d = T and d is the number of blocks. Thus, each time step is h = T/d.
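Merely as a non-limiting illustration, the following sketch shows that a forward pass through d stacked residual blocks with shared parameters traces a forward Euler trajectory of the ODE above. The residual module F (a single tanh layer) and its parameters are assumptions made purely for illustration.

import numpy as np

def F(y, theta):
    # Hypothetical residual module: a single dense layer with tanh activation
    return np.tanh(theta @ y)

T, d = 1.0, 100              # time horizon and number of residual blocks
h = T / d                    # implicit step size of the discretization
rng = np.random.default_rng(0)
theta = rng.normal(scale=0.1, size=(4, 4))   # shared parameters for simplicity

y = np.ones(4)               # input feature map y(0)
for j in range(d):           # forward pass through the d residual blocks
    y = y + h * F(y, theta)  # y_{j+1} = y_j + h * F(y_j, theta_j)
print(y)                     # output feature map approximating y(T)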

For accuracy of the ResNet network, the step size h should be a small value. Therefore, to use a ResNet network to model a dynamic system over a long time horizon [0, T], the number of required ResNet blocks d becomes impractically large.

The Neural Ordinary Differential Equations (NODE) approach which forms the basis of the hybrid NODE model used in Figure 2 or Figure 3 is a new family of artificial neural networks which is described for example in "Neural Ordinary Differential Equations" by Chen et al published in the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018). This new family of ANNs is based on the notion that the skip connections in ResNet can be seen as a realization of Euler's method for numerically solving ODEs. As shown in the right hand side of Figure 4b, instead of letting the network learn the residuals between fixed points as in ResNet, NODE parameterizes the local derivative of the input data with a neural network block. The ODE solver of Figures 2 or 3 takes the steps from an input y_j at time t_j to an output y_{j+1} at time t_{j+1}, with the derivative at each point fed to the ODE solver.

One of the advantages of the NODE approach used in the present approach compared to ResNet lies in the choice of the ODE solver. By building NODE with an ODE solver with an adaptive step size, it is possible to handle time-series data with irregular time stamps. It is also possible to trade off accuracy for speed by taking larger time steps.

This adaptive NODE approach is illustrated in the right hand side of Figure 4b, which shows that the inputs are y(t_j) and t_j, with an adaptive time step. The output from the ANN is:

dy/dt(t_j) = f_ANN(y(t_j), t_j, θ)

In other words, the ANN block learns the local derivative at the input y(t_j) which is necessary for the ODE solver to step to the output y(t_{j+1}). The underlying ODE of the data is directly encoded in the ANN block during training. After training, NODE can take any point y(t) as input to predict the next point y(t+Δt) after time Δt by taking steps in the ODE solver, based on the information about the derivative encoded in the ANN block. This means that NODE is predicting the dynamics of the unknown system (for example in line with "Modelling Dynamical Systems Using Neural Ordinary Differential Equations: Learning ordinary differential equations from data using neural networks", a Masters Thesis by Karlsson and Svanstrom published at Chalmers University of Technology in 2019).
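Merely as a non-limiting illustration, the prediction step can be sketched with an adaptive Runge-Kutta solver where the derivative function stands in for a trained neural block. The linear vector field used as the learned derivative is a hypothetical placeholder for illustration only.

import numpy as np
from scipy.integrate import solve_ivp

def learned_derivative(t, y):
    # Stand-in for a trained neural block returning dy/dt at the point y(t);
    # a hypothetical linear vector field is used purely for illustration.
    A = np.array([[0.0, 1.0], [-1.0, -0.2]])
    return A @ y

def predict_next(y_t, t, dt):
    # Adaptive-step integration from t to t + dt, as in the NODE approach
    sol = solve_ivp(learned_derivative, (t, t + dt), y_t, method='RK45')
    return sol.y[:, -1]

y0 = np.array([1.0, 0.0])
y_next = predict_next(y0, 0.0, 10.0)   # large or irregular time steps are handled by the solver
print(y_next)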

Merely as an example, the hybrid NODE approach of Figure 2 is implemented to model an industrial process in one of the standard processing units at oil production facilities: the gravity separator.

Figure 5 illustrates a separator 100 which comprises a large horizontal tank 140 having an inlet 146 which receives a mixture of oil, gas and water. The separator 100 is connected to a gas control system 110 for drawing off gas, an oil control system 120 for drawing out oil and a water control system 130 for drawing out water. The tank 140 comprises a gas outlet 112 to the gas control system 110, an oil outlet 122 to the oil control system 120 and a water outlet 132 to the water control system 130. The abbreviations used in Figure 5 are CV: control valve, PC: pressure controller, LC: level controller, FT: flow transmitter (flow-rate measuring instrument), LT: level transmitter and PT: pressure transmitter.

Flow of separated gas through the gas outlet 112 is controlled by a valve CV3 which is opened and closed by a pressure controller PC3. Gas flows through the gas control system to a gas outlet FT3. Similarly, flow of separated oil through the oil outlet 122 is controlled by a valve CV1 which is opened and closed by a level controller LC1. Oil flows through the oil control system to an oil outlet FT1. Flow of separated water through the water outlet 132 is controlled by a valve CV2 which is opened and closed by a level controller LC2. Water flows through the water control system to a water outlet FT2.

The basis for the operation of the gravity separator is that oil and water have different densities. With gravity and passing time, heavier water 142 will gather at the bottom of the tank 140, and lighter oil 144 will collect above. Water 142 is separated from the oil outlet by a partition weir 146. At the bottom of the separator, the water will be blocked from the oil outlet 122 while oil flows over the partition weir 146. Bubbles of gas in the crude will rise to the surface over time and accumulate at the top to be drawn out through the gas outlet 112.

The control of water level, oil level, and gas pressure is needed to make the separation work optimally (for example as described in "Studies in Autotuning" by Mork et al published in 2012 as a Bachelor Thesis, Høgskolen i Sør-Trøndelag). Each of the control systems thus typically comprises a sensor, for example a gas pressure sensor PT3 in the gas control system, an oil level sensor LT1 in the oil control system and a water level sensor LT2 in the water control system. 'Oil In Water' (OIW) sensors may also be used to monitor the separation performance. Excessive oil in the water (or, vice versa, water in oil) can lead to process shutdown. If the water level is allowed to rise above the dividing weir, water might flow over and out of the oil outlet. A gas blowout is the most common cause of a shutdown; it is caused by a low oil level which allows gas to flow out through the oil outlet. Fewer shutdowns mean that less oil production is lost, and greater oil production means greater revenue.

The control performance (i.e., tracking setpoints and smooth operation) is essential to operate the separator in an economically optimal way. Optimal tuning of the controllers' PID parameters will ensure that the process variables are stable, contributing to a smoother and safer operation with fewer shutdowns. A relatively accurate dynamic model is needed for control design and optimal tuning to improve safety and control performance.

Figure 6 illustrates a simulation of the three-phase separator and its controllers. The simulation is generated using the OLGA simulator which is the industry-leading dynamic multiphase flow simulator owned by Schlumberger. The simulated separator is horizontal with 3.7 m diameter and 20 m length. The nominal values of the simulated separator process are given in the table below.

The data from the separator process are obtained when the process is in operation in closed-loop conditions. That is, three PID regulators, with known parameters, control the separator. The three PI controllers in the OLGA simulator were initially tuned by trial and error to get a stable response for generating training data and validating the models. These parameters are given in the table below.

Similarly, the industrial process illustrated in Figures 5 and 6, and in particular the three PID regulators, may be modelled using the hybrid NODE model described above. The process is a Multi-Input Multi-Output (MIMO) system. The oil and water levels and the gas pressure represent the three outputs y_t. The six inputs are the inflow rates of oil, water, and gas, which are considered known disturbances d_t, and the three setpoints r_t. The inputs may also be known as the Degrees of Freedom (DOF). The closed-loop separator process thus has six DOF: three setpoints and three inflow disturbances.

The physics associated with the closed-loop separator process may be modelled using first-order mechanistic models for the measured outputs (one state per measurement), so there are three state variables. In addition, the six inputs (DOF) are augmented into the state vector and there are three state variables for the three PI controllers (integral of control error). Thus, the augmented Neural ODE system will have 12 state variables in total. The model for the first-principle block of the Neural ODE model is defined by the following ordinary differential equations:

τ_oil dy_oil/dt = -y_oil + K_po u_oil + K_do d_oil + K_wo y_water + K_go y_gas
τ_water dy_water/dt = -y_water + K_pw u_water + K_dw d_water + K_ow y_oil + K_gw y_gas
τ_gas dy_gas/dt = -y_gas + K_pg u_gas + K_dg d_gas + K_og y_oil + K_wg y_water

First-order models are used with K_po, K_pg and K_pw being the steady-state process gains with respect to the control inputs u_oil, u_gas and u_water; τ_oil, τ_gas and τ_water are the time constants, and K_do, K_dg and K_dw represent the effect of the input disturbances d_oil, d_gas and d_water. There are also linear terms for interactions between different phases. For example, K_wo and K_go represent the effects of the water and gas outputs y_water and y_gas when modelling the derivative for the oil output y_oil. Similarly, K_ow and K_gw represent the effects of the oil and gas outputs y_oil and y_gas when modelling the derivative for the water output y_water. Finally, K_og and K_wg represent the effects of the oil and water outputs y_oil and y_water when modelling the derivative for the gas output y_gas.
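Merely as a non-limiting illustration, the first-principle block can be coded as a function returning the three derivatives, following the first-order form given above. All numerical parameter values below are hypothetical placeholders; in practice these coefficients are trained jointly with the neural blocks.

import numpy as np

# Hypothetical parameter values (illustration only)
K_po, K_pw, K_pg = 1.0, 1.0, 1.0            # process gains w.r.t. the control inputs
K_do, K_dw, K_dg = 0.5, 0.5, 0.5            # disturbance gains
tau_oil, tau_water, tau_gas = 60.0, 60.0, 30.0
K_wo, K_go = 0.1, 0.05                       # interaction terms for the oil equation
K_ow, K_gw = 0.1, 0.05                       # interaction terms for the water equation
K_og, K_wg = 0.05, 0.05                      # interaction terms for the gas equation

def first_principle_block(y, u, d):
    # y = [y_oil, y_water, y_gas]; u = control inputs; d = inflow disturbances
    y_oil, y_water, y_gas = y
    dy_oil = (-y_oil + K_po * u[0] + K_do * d[0] + K_wo * y_water + K_go * y_gas) / tau_oil
    dy_water = (-y_water + K_pw * u[1] + K_dw * d[1] + K_ow * y_oil + K_gw * y_gas) / tau_water
    dy_gas = (-y_gas + K_pg * u[2] + K_dg * d[2] + K_og * y_oil + K_wg * y_water) / tau_gas
    return np.array([dy_oil, dy_water, dy_gas])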

The neural blocks in the hybrid Neural ODE model may be modelled using feedforward sequential neural networks. Any general feed-forward neural network may be used and merely as a non-limiting example, the Keras module of the TensorFlow library may be used to build the nonlinear blocks, as given below.

# Feed-forward neural block built with the Keras module of the TensorFlow library
import tensorflow as tf
from tensorflow import keras

F = tf.keras.Sequential()
F.add(keras.layers.Dense(24, input_shape=(9,), activation='tanh'))
F.add(keras.layers.Dense(64, activation='relu'))
F.add(keras.layers.Dense(64, activation='tanh'))
F.add(keras.layers.Dense(64, activation='relu'))
F.add(keras.layers.Dense(64, activation='tanh'))
F.add(keras.layers.Dense(3, activation='tanh', kernel_initializer='zeros'))
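Merely as a non-limiting illustration of how such a block is evaluated, the network F above maps a 9-element feature vector to three values. The assumed ordering of the nine inputs (three measured outputs, three setpoints and three disturbances) is an illustrative choice and not specified by the listing itself.

import numpy as np

x = np.zeros((1, 9), dtype=np.float32)   # assumed ordering: 3 outputs, 3 setpoints, 3 disturbances
nn_out = F(x).numpy()                     # shape (1, 3): the neural block's contribution for the three outputs
print(nn_out.shape)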

For comparison, the proposed Hybrid NODE model outlined above and the original OLGA simulation are compared with a purely linear ODE model which uses only the three differential equations above (i.e., without the nonlinear blocks), and with a Neural ODE model such as that shown in Figure 4b which consists only of a Multi-Layer Perceptron (MLP) network and does not integrate the Hammerstein-Wiener model.

Figures 7a to 9c plot simulation results which are used as a training dataset together with various model outputs. To generate the training data, step changes to the DOF of the OLGA model are applied. The simulation time was 100 hours, but the 10,000 sec initial transients were discarded. The training dataset contains 35,000 data points for each measurement with a sampling interval of 10 sec, which is about 97.22 hours long. The six target features to train the models are the oil and water levels, the gas pressure, and the three controller signals. Since the process operates in closed-loop conditions, the three measured variables always track the given setpoints, and thus do not provide an adequate error measure for the model fitting. Therefore, it is necessary to include the control signals in the cost. The Mean-Square Error (MSE) between the measurements and the model output may be used as the cost function. At each training epoch, a batch of 1000 data points with a random starting point is used. Therefore, the cost function to minimize is formulated as follows:

J(θ) = (1/N) Σ_{t=1}^{N} [ (y_t - ŷ_t)² + (u_t - û_t)² ]

where N is the batch size, ŷ and û are the model outputs for the measurements and the controller signals, respectively, and y_t and u_t are the corresponding measured values. The model training may be performed using any suitable optimizer such as the Adam optimizer (for example as described in "A Method for Stochastic Optimization" by Kingma et al published in the 3rd International Conference for Learning Representations in 2015) from the TensorFlow library (Abadi, Agarwal et al. 2015).
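Merely as a non-limiting illustration, one training step of this scheme may be sketched as follows using the Adam optimizer from TensorFlow. The function model(y0, n_steps), which rolls the hybrid NODE forward and returns predicted measurements and controller signals, is a hypothetical placeholder; only the batching and the MSE cost mirror the description above.

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def train_epoch(model, y_meas, u_meas, batch_size=1000):
    # Pick a batch of 1000 consecutive points with a random starting index
    start = tf.random.uniform([], 0, y_meas.shape[0] - batch_size, dtype=tf.int32)
    y_batch = y_meas[start:start + batch_size]
    u_batch = u_meas[start:start + batch_size]
    with tf.GradientTape() as tape:
        # model(y0, n_steps) is a hypothetical roll-out of the hybrid NODE returning
        # predicted measurements y_hat and controller signals u_hat over the batch
        y_hat, u_hat = model(y_batch[0], batch_size)
        # Mean-square error over both the measurements and the controller signals
        loss = tf.reduce_mean(tf.square(y_batch - y_hat)) \
             + tf.reduce_mean(tf.square(u_batch - u_hat))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss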

Figures 7a and 7b show that the results of the various models have significant overlap when modelling the oil level change and the control signal for the oil PID controller over time, respectively. Figure 7c shows the change in oil flow over time. Similarly, Figures 8a and 8b show that the results of the various models have significant overlap when modelling the water level change and the control signal for the water PID controller over time. Figure 8c shows the change in water flow over time. Similarly, Figures 9a and 9b show that the results of the various models have significant overlap when modelling the gas pressure change and the control signal for the gas PID controller over time. Figure 9c shows the change in gas flow over time. The greatest variation is shown by the linear ODE model, which cannot describe the nonlinear behaviour. The inflow rates shown in Figures 7c, 8c and 9c are independent variables (i.e. disturbances) which are not generated by the control system.

Figures 10a to 12c plot simulation results from the various models using a new dataset with setpoint and disturbance values different from the training dataset to validate the models. As with the results based on the training data, the simulation results in Figures 10a to 12c compare the simulation results of the three different models: linear ODE, MLP node and hybrid neural ODE model (labelled hybrid node) with the reference process in OLGA.

Figures 10a and 10b plot the variation in oil level and the control signal for the oil PID controller over time, respectively. Figure 10c shows the change in oil flow as it varies over time. Figures 11a and 11b plot the variation in the water level and the control signal for the water PID controller over time. Figure 11c shows the change in water flow as it varies over time. Figures 12a and 12b plot the variation in the gas pressure change and the control signal for the gas PID controller over time. Figure 12c shows the change in gas flow as it varies over time. As above, the inflow rates shown in Figures 10c, 11c and 12c are independent variables (i.e. disturbances) which are not generated by the control system.

As described above, the hybrid neural ODE model includes both linear and nonlinear blocks and, as shown in Figures 10a to 12c, predicts the behaviour of the process more accurately than the two other models. In validating the model, it is noted that the measured outputs (levels and pressure) are regulated on the same given setpoints. Therefore, the measured outputs look very similar for all models. However, the control signals resulting from different models can be examined for a visual comparison. The MLP NODE does not have a good prediction capability for new conditions (setpoints and inflow rates). This problem is due to the overfitting issues of the MLP models.

Due to nonlinearity, the process gain varies as the operating point changes. This effect leads to the linear model's poor performance when operating on different setpoints than those in the training dataset. However, the Hybrid NODE model can capture the nonlinear behaviour with fair accuracy. The Hybrid NODE approach can also represent the interaction between the oil, water, and gas phases better than the linear ODE model.

The Mean-Square Error (MSE) of the three models for both the training and the validation datasets is presented in the table below:

Both the hybrid NODE and MLP NODE models give similar results for the training MSE. However, the linear ODE model cannot describe the nonlinear behaviour and thus gives a noticeably different training MSE. The linear ODE model has a lower MSE on the validation dataset than the MLP NODE, despite having a higher MSE in training. Unlike MLP models, simple mechanistic models with few parameters do not suffer from the overfitting problem.

Figure 13 is a flowchart of the hybrid NODE approach of the present techniques. The hybrid neural network model used in the approach defines the model for the industrial process which has been trained as described below. In a first step S100, a generic model defining the first-principle model (also known as the first principle block) is specified. The first principle model may be a linear or nonlinear model which describes the dynamics (i.e. the physics) of the industrial process. For example, the first principle model may include one or more ordinary differential equations which may be first or second order. The first principle model may be obtained from a library or other source. Alternatively, the first principle model may be defined based on a rough understanding of the underlying physics of the process, the inputs (degrees of freedom) and the outputs. For example, for the case study above, the oil level, the water level and the gas pressure which are measured by the appropriate sensors are the three outputs (within the vector y_t). The inflow rates of oil, water and gas are considered known disturbances and there are also three setpoints. The disturbances and the setpoints are the inputs. An ordinary differential equation may be defined for each of the measured outputs. The equation may include steady-state process gains and time constants together with the effects of the disturbances and additional terms to represent interaction between the different phases.

The next step is S102 and the first and second nonlinear blocks (also known as the input and output neural blocks) f_H and f_W^{-1} are defined using neural network layers. As explained above, the first-principle model is connected to the first and second nonlinear blocks whereby the overall hybrid neural network model is defined as:

w_t = f_H(u_t, α) (open loop process) or w_t = f_H(u_t, d_t, α) (closed loop process)
z_t = f_W^{-1}(y_t, β)
dy_t/dt = g(w_t, z_t, γ)

where u_t is an output from the controller (control block) in a closed loop process, u_t = c(y_t, r_t), or one of the measured inputs for an open loop process, w_t is the first intermediate output, z_t is the second intermediate output, y_t is the output and d_t is the input disturbance (used and measured in the closed loop process), f_H(·,α) and f_W^{-1}(·,β) are the first and second memoryless nonlinear blocks parametrized by the first and second vectors α and β, respectively, g(·,γ) is a model for the first principle block which is parametrized by the third vector γ and derived as explained below, and θ is the coefficient vector that parametrizes the model to fit the U_N and Y_N observations (i.e. the set of training observations).
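Merely as a non-limiting illustration of step S102, the sketch below wires two small neural blocks and a first-principle block into a single derivative function following the structure above. The layer sizes, the input dimensions and, in particular, the simple first-order choice for g are assumptions made purely for illustration; they are not the parametrization used in the case study.

import numpy as np
import tensorflow as tf
from tensorflow import keras

n_in, n_out = 6, 3   # assumed sizes: inputs (setpoints + disturbances) and measured outputs

def make_neural_block(n_inputs, n_outputs):
    # Small feed-forward block, analogous to the Keras network shown earlier
    return keras.Sequential([
        keras.layers.Dense(32, activation='tanh', input_shape=(n_inputs,)),
        keras.layers.Dense(n_outputs, activation='tanh', kernel_initializer='zeros'),
    ])

f_H = make_neural_block(n_in, n_out)        # input neural block:  w_t = f_H(u_t, d_t)
f_W_inv = make_neural_block(n_out, n_out)   # output neural block: z_t = f_W^{-1}(y_t)

def g(w, z, tau=60.0):
    # Hypothetical linear first-principle block: first-order dynamics dy/dt = (-z + w) / tau
    return (-z + w) / tau

def hybrid_derivative(y, u_and_d):
    # y: measured outputs (length 3); u_and_d: inputs and disturbances (length 6)
    w = f_H(u_and_d[None, :]).numpy()[0]
    z = f_W_inv(y[None, :]).numpy()[0]
    return g(w, z)   # derivative passed to the ODE solver

With this illustrative choice of g, the steady state satisfies f_H(u, d) = f_W^{-1}(y), which is consistent with the static Hammerstein-Wiener behaviour described above.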

The next step S104 is to obtain data to train the overall hybrid neural network model. An example of how the training data may be obtained is described above in the case study. The training data contains measurements of inputs and the associated outputs. It will be appreciated that although steps S100 to S104 are shown sequentially, they may be done in parallel or in a different order.

The next step S106 is to train the hybrid neural network model using the training data. This step estimates the coefficient vector θ which fits the inputs and outputs within the hybrid neural network model prediction to the measurements of the inputs and outputs within the training data. In the open loop process the inputs are represented by u_t, and in the closed loop mode these inputs u_t are set by the controller and additional inputs such as the disturbances and the setpoints are also included. There may be a determination at step S108 to ensure that the model has been adequately trained. For example, validation data may be used to confirm that the model is accurate.

Once the hybrid neural network model has been trained to fit the observations, additional measurements of the input(s) (and output(s) where measured outputs are used) at fixed or input time(s) may be obtained at step S110 and the hybrid neural network model may then be used at step S112 to predict the future behaviour of the industrial process at subsequent time steps, i.e. the output at a subsequent point in time. The subsequent point in time may be any suitable length of time away, e.g. minutes or hours. The predicted output may be output, e.g. to a user interface or to a controller (when used in the closed loop process), and may be used in any appropriate manner, e.g. to determine adjustments to improve the operation of the industrial process and/or for condition monitoring, where the system may issue alerts if the measured process output deviates from the predicted (expected) output. The predicted output may also be used as a "soft" sensor if a physical sensor fails. This may then be iteratively repeated for subsequent time steps. As explained in detail above, the prediction may be done by outputting a derivative dy/dt(t_j) at the input point y(t_j) and inputting this derivative to the ODE solver so that the final output is y(t_{j+1}). In other words, any measurement y(t) taken at an input time can be used to predict the next time point y(t+Δt). When using the hybrid neural network model for prediction, only the initial condition of the outputs y(0) is specified. Then the predicted output at each step y(t+Δt) is used as the y(t) input for the next step. It will also be appreciated that other machine learning techniques may be used to predict control signals, but in this case the controller signal is calculated by the controller with known parameters.
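Merely as a non-limiting illustration of steps S110 and S112, the iterative prediction can be sketched as a roll-out in which each predicted output is fed back as the next input. The function derivative_fn is a placeholder for the trained hybrid model, and a simple Euler step is used for clarity; an adaptive ODE solver may replace it as discussed above.

import numpy as np

def predict_horizon(y0, inputs, derivative_fn, dt=10.0):
    # y0: measured output at the input time; inputs: known inputs/setpoints, one per step.
    # derivative_fn(y, u) is assumed to return dy/dt from the trained hybrid model.
    y = np.asarray(y0, dtype=float)
    trajectory = [y]
    for u in inputs:
        dydt = derivative_fn(y, u)   # hybrid model output at the current point
        y = y + dt * dydt            # step to y(t + dt); an adaptive solver may be used instead
        trajectory.append(y)         # the prediction becomes the next step's input
    return np.array(trajectory)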

The hybrid NODE approach of the present techniques thus formulates the Hammerstein-Wiener model in a neural ODE form. As described above, this is achieved by significant changes, namely using a first-principle block and using the inverse of one of the functional blocks in the Hammerstein-Wiener model. The proposed hybrid neural network model structure allows the combination of simple first-principle models with neural network models. This allows an accurate dynamic model for an industrial process, e.g. the three-phase separator process of the example above, to be developed. The model accuracy on different validation datasets confirms the potential of using the proposed hybrid model structure as a general-purpose model for plants with control loops. An advantage of the hybrid neural ODE model is that the model weights for both the first principle and neural blocks may be simultaneously trained using efficient optimisation tools such as those in the TensorFlow library. Also, an advantage of the neural ODE compared to an LSTM or ResNet is that the feedback control may be incorporated into the model directly. This makes the hybrid NODE approach suitable for application in the process industry, especially in control systems design.

One of the dynamic modelling challenges is the nonlinearity of the processes, because the process gain changes for different operating conditions. The proposed approach handles nonlinearity with reasonable accuracy and does not suffer from overfitting because a physical representation is included in the model structure. The approach can also model the interaction between different process variables.

Figure 14 illustrates a second industrial process which can be predicted using the techniques described above. Figure 14 shows a robotic arm having two degrees of freedom, which is also known as a two-link planar manipulator. This is a well-known example in mechanical engineering and robotics literature and is described for example in Chapter 9, Control, by Ivanescu in the Mechanical Engineer's Handbook published in 2001 or in "Theory of Applied Robotics: Kinematics, Dynamics and Control" (2nd edition) by Jazar published in 2010.

Two controllers (220, 222) are used to drive the robot on planned trajectories. The controllers may be PID controllers or any other suitable controller. The inputs to the system are the torques applied to the joints to move the links, and the controlled outputs are the angles (q1 and q2) of the two links (l1 and l2). It is also possible to measure the angular velocities of the two links (w1 and w2). The position (x, y) of the end tip may be calculated from the angles and the lengths of the two links using a "forward kinematics" transformation.
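Merely as a non-limiting illustration, the forward kinematics of such a planar two-link arm may be sketched as follows. The unit link lengths and the convention that q2 is measured relative to the first link are assumptions made for illustration.

import numpy as np

def forward_kinematics(q1, q2, l1=1.0, l2=1.0):
    # Planar two-link arm: q2 is assumed to be measured relative to the first link
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return x, y

print(forward_kinematics(np.pi / 4, np.pi / 6))   # end-tip position for example joint angles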

Figure 15 is a schematic illustration of a simulation of the robotic arm in Figure 14. In this example, the simulation is performed in the Simulink environment of Matlab using the Simscape Multibody™ library but it will be appreciated that this is just an illustrative approach. Using the simulation, it is possible to model multibody systems using blocks representing bodies, joints, constraints, force elements and sensors. The simulation formulates and solves the equations of motion for the complete mechanical system. The first joint is connected to the origin location known as “the world frame”. Two PI controllers have been added to the Simulink model to control the angles by manipulating the two torque inputs. Both PI controllers are set with a proportional gain of 200 and an integral time of 10 seconds. The parameters of the simulated model are shown in the table below:

Rigorous first-principle models for robotic arms with multiple degrees of freedom have been studied in the literature, for example in "Theory of Applied Robotics: Kinematics, Dynamics and Control" (2nd edition) by Jazar published in 2010. In the methodology described above, we apply a different approach in which a generic first-principle model, similar to the first and second order differential equations described above in relation to Figures 2 and 3, is used. As in the separator process example described above, the hybrid nature of the model structure of Figure 2 allows the use of generic first principle models combined with neural networks. The two angles q1 and q2 and the two velocities w1 and w2 are the outputs y_t, and the two setpoints for the angles r_t are the two degrees of freedom or inputs to the model. No disturbance is considered for this example.

It is well known that velocity is the rate of change (time derivative) of the position. Accordingly, in this example, we use second-order mechanistic models for the measured outputs to relate the links' angles q and their velocities w. There are thus four state variables (angle and velocity of each link) for the first principle block. The two inputs (degrees of freedom) are augmented into the state vector and there are two state variables for the two controllers (integral of control error). Accordingly, the closed loop neural ODE model will have eight state variables in total. The differential equations for the first-principle block may be expressed as:

dq1/dt = w1
τ_S1² dw1/dt = -q1 - 2ζ1 τ_S1 w1 + K_P1 u1 + k12 q2 + c12 w2
dq2/dt = w2
τ_S2² dw2/dt = -q2 - 2ζ2 τ_S2 w2 + K_P2 u2 + k21 q1 + c21 w1

The parameters K_P1 and K_P2 are the steady-state process gains with respect to the control inputs (torques). As in the generic second-order equation above, τ_S1 and τ_S2 are the second-order time constants and ζ1 and ζ2 are the damping factors. There are additional terms to account for the interactions between the two links, namely c12, c21, k12 and k21. The neural blocks which are used are the same as for the separator model. The closed-loop system has two degrees of freedom: two setpoints for the link angles. Training data is needed to train the model. Sinusoidal signals with variable amplitude are applied to the setpoints of the Matlab model described above to generate the training data. The simulation time was 20,250 seconds but the 250 second initial transients were discarded. Due to the fast dynamics of the system, the Matlab model is relatively stiff. Therefore, a variable step-size solver (ode45) is used to simulate the robot model. The training dataset contains 202,786 data points for each measurement.

Figures 16a to 16c show respectively the variation in angle, velocity and torque with time for the first link for the first 1000 seconds of the 200,000 second training database. Similarly, Figures 17a to 17c show respectively the variation in angle, velocity and torque with time for the second link for the first 1000 seconds of the 200,000 second training database.

As in the previous example, the values predicted by the proposed Hybrid NODE model outlined above are compared with a purely linear ODE model which uses only the differential equations above (i.e., without the nonlinear blocks) and an MLP NODE model that only consists of a Multi-Layer Perceptron (MLP) network and does not integrate the Hammerstein-Wiener model. The variation in the setpoint and the measured values is also shown. As in the previous example, the mean-square error (MSE) between the measurements and the model output is minimised as a cost function to train the various models. At each training epoch, a batch of 1000 data points is used with a random starting point. The same cost function as that outlined above is used. As shown in Figures 16a to 17c, the hybrid and MLP node models give similar results for the training data but the linear ODE model is not able to accurately predict the torque variation over time.

Figures 18a to 19c plot simulation results from the various models using a new dataset with setpoint values different from the training dataset to validate the models. In this example, the robotic arm is given a circular trajectory to follow. As with the results based on the training data, the simulation results in Figures 18a to 19c compare the simulation results of the three different models: linear ODE, MLP node and hybrid neural ODE model (labelled hybrid node) with the reference model in Matlab. Figures 18a to 18c show respectively the variation in angle, velocity and torque with time for the first link. Similarly, Figures 19a to 19c show respectively the variation in angle, velocity and torque with time for the second link.

The nonlinearity of the system means that the process gain varies as the operating point changes. The validation results show that the hybrid neural ODE model, which includes both linear and nonlinear blocks as described above, can predict the robot's torque and velocity more accurately than the two other models. By contrast, the linear model has poor performance when predicting the torque but gives perfect results when predicting the velocity because it includes the first-principle relationship between the position and velocity (i.e. dq/dt = w). The measured outputs for the angle are regulated on the same given setpoints and are thus aligned for all the models. For a visual comparison, the graphs of torque and velocity thus show how well the newly proposed model fits the measurements from the reference model.

The mean square errors for the training and validation exercises are shown in the table below:

As is shown in the table above, the hybrid NODE model has a significantly lower MSE for both the training data and the validation data.

Figure 20 compares the trajectory prediction of the different models in the x-y coordinates. As will be appreciated, controllers do not necessarily provide perfect control with zero error and thus the real trajectory is not a perfect circle for any of the models or the measured trajectory. The hybrid NODE model proposed by this invention is the closest fit to the robot trajectory.

It should be appreciated that the engines and the program modules depicted in the Figures are merely illustrative and not exhaustive and that processing described as being supported by any particular engine or module may alternatively be distributed across multiple engines, modules, or the like, or performed by a different engine, module, or the like. In addition, various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code hosted locally on the system and/or hosted on other computing device(s) accessible via one or more of the network(s), may be provided to support the provided functionality, and/or additional or alternate functionality. Further, functionality may be modularized differently such that processing described as being supported collectively by the collection of engines or the collection of program modules may be performed by a fewer or greater number of engines or program modules, or functionality described as being supported by any particular engine or module may be supported, at least in part, by another engine or program module. In addition, engines or program modules that support the functionality described herein may form part of one or more applications executable across any number of devices of the system in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth. In addition, any of the functionality described as being supported by any of the engines or program modules may be implemented, at least partially, in hardware and/or firmware across any number of devices.

The operations described and depicted in the illustrative methods may be carried out or performed in any suitable order as desired in various example embodiments of the disclosure. Additionally, in certain example embodiments, at least a portion of the operations may be carried out in parallel. Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular system, system component, device, or device component may be performed by any other system, device, or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure.

Computer-executable program instructions may be loaded onto a special-purpose computer or other particular machine, a processor, or other programmable data processing apparatus to produce a particular machine, such that execution of the instructions on the computer, processor, or other programmable data processing apparatus causes one or more functions or operations specified in the flow diagrams to be performed. These computer program instructions may also be stored in a computer-readable storage medium (CRSM) that upon execution may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means that implement one or more functions or operations specified in the flow diagrams. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational elements or steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process.

Additional types of CRSM that may be present in any of the devices described herein may include, but are not limited to, programmable random access memory (PRAM), SRAM, DRAM, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the information and which can be accessed. Combinations of any of the above are also included within the scope of CRSM. Alternatively, computer-readable communication media (CRCM) may include computer-readable instructions, program modules, or other data transmitted within a data signal, such as a carrier wave, or other transmission. However, as used herein, CRSM does not include CRCM.

We also describe, in an example, a computer-implemented method for modelling an industrial process, the method comprising: measuring, using at least one sensor, at least one input for the industrial process at an input time; and predicting at least one output of the industrial process at a subsequent time using the at least one measured input; wherein predicting the at least one output comprises using a hybrid neural network model to output a derivative of the at least one output at the input time, wherein the neural network model incorporates at least one neural network block and a first-principle block incorporating a dynamic model comprising at least one ordinary differential equation defining the rate of change over time of the at least one output as a function of the or each associated input; and inputting the derivative to an ordinary differential equation solver to predict at least one output at the subsequent time.

In the example method above, the dynamic model may be a linear model. The first principle block may receive an input from the at least one neural network block. The at least one neural network block may comprise an input neural block and an output neural block. The input neural block may implement a first memoryless nonlinear block parametrized by a first vector and the output neural block may implement a second memoryless nonlinear block parametrized by a second vector.

In the example method above, the industrial process may be a closed loop process comprising at least one controller and the neural network model further incorporates a control block. Measuring at least one input may comprise measuring at least one of a setpoint and a disturbance. The neural network model may be defined as:

w_t = f_H(u_t, d_t, α)
z_t = f_W^{-1}(y_t, β)
dy_t/dt = g(w_t, z_t, γ)

where u_t is the output from the controller, y_t is the output and d_t is the input disturbance, f_H(·,α) and f_W^{-1}(·,β) are memoryless nonlinear blocks parametrized by the vectors α and β, respectively, g(·,γ) is a dynamic model for the first principle block which is parametrized by the vector γ, and θ is the coefficient vector that parametrizes the model to fit the U_N and Y_N observations.

In the example method above, the method may comprise inputting the derivative to an ordinary differential equation solver having an adaptive time step. The dynamic model may be a first order dynamic model defined by at least one ordinary differential equation of the form:

τ_P dz_t/dt = -z_t + K_P w_{t-θ_P}

where w_t is the input to the model, z_t is the output from the model, K_P is the steady-state gain, τ_P is the time constant, and θ_P is the time delay.

In the example method above, the industrial process may be a gravity separation process for separating oil, water and gas, and measuring the at least one output comprises measuring oil level, water level and gas pressure. In the example method above, the dynamic model may comprise six inputs including three disturbances (one for each of the inflow rates of oil, water and gas) and three setpoints.

In the example method above, the industrial process may be controlling a robotic arm having two connected links and measuring the at least one output comprises measuring the angles of the two links. We also describe in an example, a data processing system for modelling an industrial process, the system comprising at least one sensor for measuring at least one input for the industrial process; and a processor which is configured to receive a measurement of the at least one input from the at least one sensor at an input time; implement a hybrid neural network model to output a derivative of the at least one output at the input time based on the received measurement, wherein the hybrid neural network model incorporates at least one neural network block and a first-principle block incorporating a dynamic model comprising an ordinary differential equation defining the rate of change over time of the at least one output as a function of the or each associated input; input the derivative to an ordinary differential equation solver to predict the at least one output at a subsequent time; and output the prediction of at least one output at the subsequent time using the at least one measured input and output.