


Title:
OPTIMIZATION OF GAS LIFT WELL INJECTION VALVE USING VIRTUAL FLOW METER ON EDGE BOX
Document Type and Number:
WIPO Patent Application WO/2023/066547
Kind Code:
A1
Abstract:
A method of estimating flow values from a well for a mixed fluid flow using a data driven virtual flow meter is provided. The method includes receiving gas lift performance data from a well, the gas lift performance data including at least a gas injection valve setting and a choke valve setting, determining an optimal valve setting for each of the gas injection valve and the choke valve using an AI trained optimization model based on the gas lift performance data, predicting operating data using the AI trained optimization model based on the determined optimal valve setting for each of the gas injection valve and the choke valve, and calculating a reconstruction error using an AI-based trained reconstruction error model and the predicted operating data. The predicted operating data is then input into a virtual flow meter model to predict a first flow value of a first fluid and a second flow value of a second fluid using an AI-based trained flow model and the predicted operating data, the first fluid and the second fluid being mixed together as part of the mixed fluid flow, the reconstruction error being indicative of the accuracy of the predicted flow values.

Inventors:
MITTAL AKASH (IN)
WIMMER HELMUT (AT)
SCHNABL HELMUT (AT)
Application Number:
PCT/EP2022/073407
Publication Date:
April 27, 2023
Filing Date:
August 23, 2022
Assignee:
SIEMENS ENERGY GLOBAL GMBH & CO KG (DE)
International Classes:
E21B43/12; E21B47/10
Foreign References:
US20210089905A12021-03-25
Other References:
AL SELAITI IMAN ET AL: "Robust Data Driven Well Performance Optimization Assisted by Machine Learning Techniques for Natural Flowing and Gas-Lift Wells in Abu Dhabi", 21 October 2020 (2020-10-21), pages 1 - 27, XP055981772, Retrieved from the Internet [retrieved on 20221115]
Claims:
CLAIMS

What is claimed is:

1. A method of estimating flow values from a well for a mixed fluid flow using a data driven virtual flow meter, the method comprising:

receiving gas lift performance data from a well, the gas lift performance data including at least a gas injection valve setting and a choke valve setting;

determining an optimal valve setting for each of the gas injection valve and the choke valve using an AI trained optimization model based on the gas lift performance data;

predicting operating data using the AI trained optimization model based on the determined optimal valve setting for each of the gas injection valve and the choke valve;

receiving input data from the well, the input data indicative of current operating parameters of the well;

receiving predicted operating data from the AI trained optimization model;

calculating a reconstruction error using an AI-based trained reconstruction error model and the predicted operating data from the AI trained optimization model; and

predicting a first flow value of a first fluid and a second flow value of a second fluid using an AI-based trained flow model and the predicted operating data, the first fluid and the second fluid being mixed together as part of the mixed fluid flow, the reconstruction error being indicative of the accuracy of the predicted flow value.

2. The method of claim 1, wherein the gas lift performance data includes well data obtained from sensors in the well and indicative of current operating parameters of the well.

3. The method of claim 2, wherein the sensors are selected from the group consisting of temperature sensors, pressure sensors, and valve setting sensors, and wherein the operating parameters are selected from temperature, pressure, and valve setting, respectively.


RECTIFIED SHEET (RULE 91) ISA/EP

4. The method of claim 1, wherein the determining step uses mathematical regression.

5. The method of claim 1, wherein the gas injection valve setting controls a gas injection rate into the well.

6. The method of claim 1, wherein the choke valve setting controls a rate of the mixed fluid flow from the well.

7. The method of claim 1, further comprising operating the well by a controller utilizing the optimal valve setting.

8. The method of claim 1, further comprising requesting additional operating data from the well for retraining the trained reconstruction error model and the trained flow model.

9. The method of claim 1, further comprising pre-processing the operating data to remove outlier data.

10. The method of claim 9, further comprising using a K-nearest neighbors algorithm in an iterative process to remove a portion of the outlier data.

11. The method of claim 1, wherein the well is an oil well and the predicting step includes separately predicting a flow rate of water, a flow rate of oil, and a flow rate of gas.

12. The method of claim 1, wherein each of the trained flow model, the trained reconstruction error model, and the AI trained optimization model employs an artificial neural network.


Description:
OPTIMIZATION OF GAS LIFT WELL INJECTION VALVE USING VIRTUAL FLOW

METER ON EDGE BOX

BACKGROUND

[0001] The present disclosure generally relates to the field of resource extraction, and more particularly relates to a method to calculate an optimal valve setting for a valve regulating a mixed fluid flow in a well utilizing a data driven optimization model. The optimal valve setting may then be utilized for monitoring a mixed fluid flow from a well such as an oil well in an industrial environment.

[0002] In the oil and gas industry, multiphase flowrate measurements of an oil well play an important role in production optimization and reservoir management. The flow of the oil is measured with parameters such as flow rate, gas to oil ratio, water cut and the like. Flowmeters are used to measure these parameters and indicate the flow of the oil through various parts of a production system or well. Conventionally, hardware multiphase flow meters are used for measuring these parameters.

BRIEF SUMMARY

[0003] In one construction, a method of estimating flow values from a well for a mixed fluid flow using a data driven virtual flow meter is provided. The method includes receiving gas lift performance data from a well, the gas lift performance data including at least a gas injection valve setting and a choke valve setting, determining an optimal valve setting for each of the gas injection valve and the choke valve using an AI trained optimization model based on the gas lift performance data, predicting operating data using the AI trained optimization model based on the determined optimal valve setting for each of the gas injection valve and the choke valve, receiving input data from the well, the input data indicative of current operating parameters of the well, receiving predicted operating data from the AI trained optimization model, calculating a reconstruction error using an AI-based trained reconstruction error model and the predicted operating data from the AI trained optimization model, and predicting a first flow value of a first fluid and a second flow value of a second fluid using an AI-based trained flow model and the predicted operating data, the first fluid and the second fluid being mixed together as part of the mixed fluid flow, the reconstruction error being indicative of the accuracy of the predicted flow value.

[0004] The foregoing has outlined the technical features of the present disclosure so that those skilled in the art may better understand the detailed description that follows. Additional features and advantages of the disclosure will be described hereinafter that form the subject of the claims. Those skilled in the art will appreciate that they may readily use the conception and the specific embodiments disclosed as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. Those skilled in the art will also realize that such equivalent constructions do not depart from the spirit and scope of the disclosure in its broadest form.

[0005] Also, before undertaking the Detailed Description below, it should be understood that various definitions for certain words and phrases are provided throughout this patent document, and those of ordinary skill in the art will understand that such definitions apply in many, if not most, instances to prior as well as future uses of such defined words and phrases. While some terms may include a wide variety of embodiments, the appended claims may expressly limit these terms to specific embodiments.

[0006] Variously disclosed embodiments include a method to calculate an optimal valve setting for a valve regulating a mixed fluid flow in a well using a data driven optimization model. The method includes receiving gas lift performance data from a well, the gas lift performance data including at least a valve setting for a valve disposed within the well, measuring a fluid flow rate at the valve setting, determining an optimal valve setting for the valve using an AI trained optimization model based on the gas lift performance data, and predicting operating data using the AI trained optimization model based on the determined optimal valve setting.

[0007] In another embodiment, a method of estimating flow values from a well for a mixed fluid flow using a data driven virtual flow meter is provided. The method includes receiving gas lift performance data from a well, the gas lift performance data including at least a valve setting for a valve disposed within the well, measuring a fluid flow rate at the valve setting, determining an optimal valve setting for the valve using an AI trained optimization model based on the gas lift performance data, predicting operating data using the AI trained optimization model based on the determined optimal valve setting, receiving operating data from the well, the operating data indicative of current operating parameters of the well, receiving predicted operating data from the AI trained optimization model, calculating a reconstruction error using an AI-based trained reconstruction error model and the predicted operating data from the AI trained optimization model, and predicting a flow value using an AI-based trained flow model and the predicted operating data in response to a comparison between the reconstruction error and a predefined allowable error indicating that the reconstruction error is acceptable. In response to the comparison between the reconstruction error and the predefined allowable error indicating that the reconstruction error is not acceptable, the method includes retraining the trained flow model using the operating data from the well to create a retrained flow model, retraining the trained reconstruction error model using the operating data from the well to create a retrained reconstruction error model, replacing the trained flow model with the retrained flow model, and replacing the trained reconstruction error model with the retrained reconstruction error model.
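The predict-or-retrain decision described in this embodiment can be sketched as follows. The toy model classes, the mean-deviation error measure, and the threshold value are illustrative stand-ins, not the trained neural network models of the disclosure:

```python
# Illustrative sketch of the retraining decision: predict a flow value when
# the reconstruction error is acceptable, otherwise retrain both models and
# replace them. The toy models below are placeholders for trained networks.

class ReconModel:
    def __init__(self, baseline):
        self.baseline = baseline          # learned "normal" sensor values

    def reconstruction_error(self, data):
        # Mean absolute deviation from the learned normal representation.
        return sum(abs(x - b) for x, b in zip(data, self.baseline)) / len(data)

    def retrain(self, data):
        return ReconModel(list(data))     # refit on the new operating data

class FlowModel:
    def __init__(self, weights):
        self.weights = weights

    def predict(self, data):
        return sum(w * x for w, x in zip(self.weights, data))

    def retrain(self, data):
        return FlowModel(self.weights)    # placeholder refit

def step(flow_model, recon_model, operating_data, allowable_error):
    """Return (prediction, flow_model, recon_model) after one decision."""
    error = recon_model.reconstruction_error(operating_data)
    if error <= allowable_error:
        # Reconstruction error acceptable: the flow prediction can be trusted.
        return flow_model.predict(operating_data), flow_model, recon_model
    # Otherwise retrain both models and replace them with the retrained ones.
    return (None,
            flow_model.retrain(operating_data),
            recon_model.retrain(operating_data))
```

When operating data drifts away from the trained baseline, the error exceeds the allowable threshold and the retrained models are swapped in before any further predictions are issued.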

[0008] The foregoing has outlined rather broadly the technical features of the present disclosure so that those skilled in the art may better understand the detailed description that follows. Additional features and advantages of the disclosure will be described hereinafter that form the subject of the claims. Those skilled in the art will appreciate that they may readily use the conception and the specific embodiments disclosed as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. Those skilled in the art will also realize that such equivalent constructions do not depart from the spirit and scope of the disclosure in its broadest form.

[0009] Also, before undertaking the Detailed Description below, it should be understood that various definitions for certain words and phrases are provided throughout this patent document, and those of ordinary skill in the art will understand that such definitions apply in many, if not most, instances to prior as well as future uses of such defined words and phrases. While some terms may include a wide variety of embodiments, the appended claims may expressly limit these terms to specific embodiments.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] To identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.

[0011] FIG. 1 is a schematic illustration of a resource field including a number of wells.

[0012] FIG. 2 is a schematic illustration of a portion of a process for collecting a resource from a reservoir.

[0013] FIG. 3 is a flow chart illustrating the data flow from a well to a data driven flow meter.

[0014] FIG. 4 is a flow chart illustrating a portion of an autoencoder training process.

[0015] FIG. 5 is a chart illustrating the process of determining the optimum iteration to end outlier data removal.

[0016] FIG. 6 is a flow chart illustrating the operation of the virtual flow meter using operational well data.

[0017] FIG. 7 is a flow chart illustrating the operation of the virtual flow meter including the retraining decision process and the retraining process.

[0018] FIG. 8 is a flow chart illustrating the steps of the autoencoder training process.

[0019] FIG. 9A graphically illustrates raw training data.

[0020] FIG. 9B graphically illustrates transient and steady state training data with outlier data removed.

[0021] FIG. 9C graphically illustrates training data with outlier data and transient data removed.

[0022] FIG. 10 illustrates a process for retraining and for making a retraining decision.

[0023] FIG. 11 illustrates a functional block diagram of an example computer system.

[0024] FIG. 12 illustrates a block diagram of a data processing system 1200 in which an embodiment of the virtual flow meter may be implemented.

[0025] FIG. 13 is a flow chart illustrating the steps of an optimal valve setting calculation process.

DETAILED DESCRIPTION

[0026] Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in this description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.

[0027] Various technologies that pertain to systems and methods will now be described with reference to the drawings, where like reference numerals represent like elements throughout. The drawings discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged apparatus. It is to be understood that functionality that is described as being carried out by certain system elements may be performed by multiple elements. Similarly, for instance, an element may be configured to perform functionality that is described as being carried out by multiple elements. The numerous innovative teachings of the present application will be described with reference to exemplary non-limiting embodiments.

[0028] Also, it should be understood that the words or phrases used herein should be construed broadly, unless expressly limited in some examples. For example, the terms “including,” “having,” and “comprising,” as well as derivatives thereof, mean inclusion without limitation. The singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Further, the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The term “or” is inclusive, meaning and/or, unless the context clearly indicates otherwise. The phrases “associated with” and “associated therewith,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like. Furthermore, while multiple embodiments or constructions may be described herein, any features, methods, steps, components, etc. described with regard to one embodiment are equally applicable to other embodiments absent a specific statement to the contrary.

[0029] Also, although the terms "first", "second", "third" and so forth may be used herein to refer to various elements, information, functions, or acts, these elements, information, functions, or acts should not be limited by these terms. Rather these numeral adjectives are used to distinguish different elements, information, functions or acts from each other. For example, a first element, information, function, or act may be termed a second element, information, function, or act, and, similarly, a second element, information, function, or act may be termed a first element, information, function, or act, without departing from the scope of the present disclosure.

[0030] In addition, the term "adjacent to" may mean that an element is relatively near to but not in contact with a further element or that the element is in contact with the further portion, unless the context clearly indicates otherwise. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Terms “about” or “substantially” or like terms are intended to cover variations in a value that are within normal industry manufacturing tolerances for that dimension. If no industry standard is available, a variation of twenty percent would fall within the meaning of these terms unless otherwise stated.

[0031] FIG. 1 schematically illustrates a resource field 100 such as an oil field that is arranged to extract a resource 104 (e.g., oil, natural gas) that is trapped beneath a surface 102. Typically, large resource fields 100 include multiple wells 120, each arranged to extract a portion of the resource 104. As used herein, the term "well" is not geographically limited to only the location of a wellhead and a pipe or hole. Rather, "well" includes all the components that may be required or optionally included to deliver the resource 104 to a pipeline or storage facility. Each of the wells 120 may be arranged to extract the resource 104 from a different depth or the same depth as may be required. Typical resource fields 100 include multiple wells 120; however, a single well 120 may be employed in some situations.

[0032] Some wells 120 include a separator 302 (shown in FIG. 2 and FIG. 3) and a choke valve 118 that operates to control the flow of fluid through the well 120 and in particular through the well head 110. The separator 302 operates to separate the incoming flow into separate constituent components and may be located near the well head 110 or may be remotely located. Thus, in a well 120 where the resource 104 emerges as a mix of oil, water, and natural gas, the separator 302 is arranged to separate that flow into three distinct flows, one predominately water, one predominately oil, and one predominately natural gas. It should also be noted that a single separator 302 could be employed to handle the flow from a single well head 110 or multiple well heads 110 as may be desired.

[0033] Each well 120 of the illustrated construction includes a well head 110 that sits on the surface 102 and connects to a well bore 108 that extends downward from the surface 102 to a desired depth. The surface equipment may include a pump (e.g., sucker rod pump), valves, measuring equipment, and the like. A "sucker rod pump" (SRP), sometimes referred to as a pumpjack, is an above-ground drive for a reciprocating piston pump in an oil well. It is used to mechanically lift liquid out of the well when there is not enough bottom hole pressure for the liquid to flow all the way to the surface. The arrangement is commonly used for onshore wells producing little oil. In some wells 120, an artificial lift device 116 is needed to aid in extracting the resource 104. In FIG. 1, an artificial lift device 116 in the form of an electrical submerged pump 106 is positioned at or near the bottom of each of the well bores 108 and operates to pump the resource 104 upward through the well bore 108 when the well pressure is insufficient to push the resource 104 upward naturally and at a desired rate. An "electrical submerged pump" (ESP) is a device which has a hermetically sealed motor close-coupled to the pump body. The whole assembly is submerged in the fluid to be pumped. Of course, where natural flow due to underground pressure is insufficient, other artificial lift devices 116 such as other pumps or other modes of forcing the resource 104 upward (e.g., progressive cavity pump (PCP), gas lift, etc.) may be employed. Thus, the term "artificial lift device" should be understood to encompass any system or device that enhances the lifting capacity beyond that provided by the natural existing pressure within the well 120.

[0034] Many resources 104, when extracted, exit the ground as a multi-phase flow (i.e., a mixture of liquid, gas, and/or solid) and/or contain two or more different substances having different, and in many cases vastly different, densities (e.g., oil, water, and natural gas). It is desirable to accurately measure the flow of each of these substances from each of the wells 120. However, the aforementioned factors make such measurements difficult.

[0035] In general, flow measuring systems fall into one of three types of flow meters that may be employed to measure the complicated flow of a resource 104. These flow meters include hardware flow meters, physics-based virtual flow meters, and data-driven virtual flow meters. Hardware flow meters directly measure the flow rate of the fluid. However, these flow meters are very expensive and have high operating costs, making them undesirable. Physics-based virtual flow meters rely on known relationships between measurable properties of the resource 104 and its flowrate and therefore require exact modeling (e.g., well geometry, fluid properties, and reservoir conditions). These flow meters require sensors not typically used in wells 120, and these and other components require continuous manual calibration, are only accurate in a narrow operating range, and therefore require expertise to use accurately.

[0036] FIG. 1 illustrates a virtual flow meter 114 associated with one of the wells 120 to allow for the determination of a flow rate from that well 120. Each well 120 may include a similar virtual flow meter 114, and the virtual flow meter 114 or some of the components of the virtual flow meter 114 may be spaced apart from the well 120 as may be desired.

[0037] The data driven virtual flow meter 114 is generally computer-based and uses the data available from existing sensors in the well 120 that it is monitoring. The virtual flow meter 114 may be included in a single computer or device or may have different portions housed in different computing or intelligent devices. For example, the virtual flow meter 114 may be housed in an edge computing device that is positioned near the well 120 or may be housed in a remote computer some distance away. The computer includes a processor, memory, data storage (volatile and non-volatile), communication devices, and typically user input and output devices. The communication devices allow for the transmission of sensor data from the well 120 to the virtual flow meter 114 and specifically, the computer or computer device that houses the virtual flow meter 114.

[0038] FIG. 2 schematically illustrates an arrangement that may be employed to extract the resource 104 from a reservoir 202. While FIG. 2 illustrates a single reservoir 202, multiple reservoirs 202 could be employed in extracting the resource. As such, the single reservoir 202 simply represents the source of the resource 104. Multiple wells 204a-204d are employed to extract the resource 104, with more or fewer wells being possible. Each well 204a-204d experiences different input conditions based on the condition of the reservoir 202 and the position of the well 204a-204d. In addition, each well 204a-204d is operable to discharge the extracted resource to one or more manifolds 206a-206b. In the illustrated construction, two manifolds 206a, 206b are shown, although a single manifold or more than two manifolds could be employed if desired. Each manifold 206a, 206b collects some or all of the resource 104 from some or all of the wells 204a-204d. During typical operation, each of the wells 204a-204d delivers the resource to one of the manifolds 206a, 206b.

[0039] The manifolds 206a, 206b collect the resource 104 from each of the wells 204a-204d and then direct that resource to one or more separators 208a, 208b. While two separators 208a, 208b are illustrated, a single separator 208a or more than two separators could be employed if desired. Preferably, a single separator 208a or 208b is employed at any given time to facilitate measuring total flows of the constituents of the resource 104 if desired. Ideally, flow is measured for only one well (e.g., well 204a) and not the total flow of all wells 204a-204d. Thus, the constituents of that well 204a are measured. In different arrangements, the total flow of several wells 204a-204d may be measured depending on the separator design. Each separator 208a, 208b is arranged to separate the specific constituents found in the resource 104 being extracted. In one common application, the resource 104 includes water, natural gas, and oil. In this application, each separator 208a, 208b discharges three streams that are collected in one of a first fluid reservoir 210, a second fluid reservoir 212, or a third fluid reservoir 214. The first fluid reservoir 210 collects water, the second fluid reservoir 212 collects natural gas, and the third fluid reservoir 214 collects oil. It should be noted that the term "reservoir" in this context simply means a common collection location. Each reservoir could be a pipeline that directs the material for further processing or collection.

[0040] FIG. 3 illustrates a data pipeline 300 that describes the data collection and analysis process for the fully data driven virtual flow meter 114. The virtual flow meter 114 predicts the flow or production rates of the well 120 by modeling the well 120 (and more accurately by correlating data values from the well 120 to production rates) with all its multi-variate conditions. For example, for a well 120 that includes the electrical submerged pump 106, input pressure, motor current, motor power, motor temperature and other relevant physical parameters may be used to develop a trained flow model 316. The actual field data from available measurements is used for model building and training. The trained flow model 316 may then be used as part of the virtual flow meter 114 to provide usable flow rates based on the exact operating conditions of the electrical submerged pump 106 and the well 120. It should be noted that the term “model” as used herein simply means that some aspect of the well 120 is represented mathematically. The representation may include equations, tables, or other means that describe correlations between datasets and actual operation of the well 120.
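The idea of correlating measured well data to production rates can be illustrated with a deliberately simple stand-in: a one-feature least-squares fit of flow rate against motor current. The data values and units here are hypothetical, and the disclosure's flow model is a trained neural network over many sensor channels, not a linear fit:

```python
# Toy stand-in for a data-driven flow model: learn a correlation between one
# sensor channel (ESP motor current) and the measured flow rate from
# historical well test data, then use it to predict flow at new readings.

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx         # (slope, intercept)

# Hypothetical historical data: motor current (A) vs measured flow (bbl/d).
current = [40.0, 45.0, 50.0, 55.0]
flow_rate = [800.0, 900.0, 1000.0, 1100.0]
slope, intercept = fit_linear(current, flow_rate)

def predict_flow(motor_current):
    # The learned "model": a mapping from a sensor reading to a flow rate.
    return slope * motor_current + intercept
```

The real trained flow model 316 plays the same role as `predict_flow` here, but is fitted over the full multi-variate operating data rather than a single channel.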

[0041] The virtual flow meter 114 also includes a reconstruction error model 322 that is used to assess the quality of the predictions made by the trained flow model 316. Both the trained flow model 316 and the reconstruction error model 322 utilize artificial intelligence, and specifically artificial neural networks, to operate with the desired accuracy and speed. An "artificial neural network" (ANN) is a piece of a computing system designed to simulate the way the human brain analyzes and processes information. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, may transmit a signal to other neurons. An artificial neuron that receives a signal then processes it and may signal neurons connected to it. The "signal" at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. Of course, other artificial intelligence techniques, such as a generic ANN for the flow model, an autoencoder for reconstruction, an LSTM for dynamic lag, a Kohonen RF for clustering, XGBOOST, etc., could be employed in conjunction with or in place of an artificial neural network.
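The neuron behavior described above, where weighted inputs are summed and passed through a non-linear function, can be sketched in a few lines. The sigmoid activation and the `layer` helper are illustrative choices, not details from the application:

```python
import math

# Minimal sketch of the artificial neuron described above: each input is
# scaled by its connection weight, a bias is added, and the weighted sum is
# passed through a non-linear activation to produce the neuron's output.

def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))     # sigmoid activation

def layer(inputs, weight_rows, biases):
    # A layer applies several neurons to the same inputs; stacking layers
    # gives the input -> hidden -> output signal flow described above.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]
```

Training amounts to adjusting the weights and biases so that the network's outputs match known flow measurements, which is the learning process the weights "adjust as learning proceeds" sentence refers to.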

[0042] As an overview, the data pipeline 300 includes a number of steps and processes that each fall into one of the categories of data cleaning, data pre-processing, model building and training, and model retraining. In preferred constructions, the data cleaning category includes an automated step for outlier removal 318 which removes outliers in the input data as will be discussed in greater detail.

[0043] The data pre-processing category may include autoencoders 306 and a pre-processor 308 that analyze incoming data to verify that it falls within the expected ranges for the particular data source and sensor and will be discussed in greater detail. Model building and model training 312 makes use of an artificial neural network (ANN) architecture to develop an untrained flow model 314 that is then trained to become a trained flow model 316 that is used to predict the flow rates of the well 120. A reconstruction error model 322 is created in a similar manner. The reconstruction error model 322 is used to detect changes in input conditions. The reconstruction error model 322 outputs an indicator (reconstruction error) indicative of how the input data differs from the known or trained data patterns. The reconstruction error model 322 provides a good measure of the prediction accuracy of the flow rates produced by the trained flow model 316 as it may provide a quality indication based on the input data.
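The role of the reconstruction error model can be illustrated with a toy reconstructor. Here the learned "normal" representation is simply the per-feature mean of the training rows, standing in for a trained autoencoder:

```python
# Toy sketch of the reconstruction-error idea: inputs that resemble the
# training data reconstruct well (low error); novel operating conditions
# reconstruct poorly (high error), flagging the flow prediction as suspect.
# The per-feature mean below is a stand-in for a trained encoder/decoder.

def train_reconstructor(training_rows):
    n, dims = len(training_rows), len(training_rows[0])
    mean = [sum(row[d] for row in training_rows) / n for d in range(dims)]

    def reconstruction_error(row):
        # Root-mean-square distance from the learned "normal" pattern.
        return (sum((x - m) ** 2 for x, m in zip(row, mean)) / dims) ** 0.5

    return reconstruction_error

# Hypothetical two-sensor training data (e.g., pressure, temperature).
error_of = train_reconstructor([[1.0, 10.0], [3.0, 12.0]])
```

A real autoencoder replaces the mean with an encode/decode pass, but the interpretation is the same: the larger the reconstruction error, the further the input lies from the trained data patterns.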

[0044] The model retraining category operates when the reconstruction error model 322 detects a change in the operating conditions of a predetermined magnitude. The retraining portion initiates a retraining process to retrain the trained flow model 316 using the new operating data.

[0045] With reference to FIG. 3, the data pipeline 300 uses operational data 304 collected from the well 120 and a separator 302, as well as any other components desired. The operational data 304 collected from the well 120 may include input pressure, output pressure, motor current, motor power consumption, pressure differentials, casing pressure, tube pressure, temperature of the resource 104, etc. Similarly, operational data 304 may be collected from the separator 302 indicative of its operation. This operational data 304 may include pressures, temperatures, differential pressures, and the like.

[0046] As illustrated in FIG. 4, the collected data, or operational data 304, is fed to an autoencoder 306 where the operational data 304 is processed for use in the reconstruction error model 322. The autoencoder 306 employs an unsupervised data science modeling technique which learns the normal data representation and the interactions between the input features without any explicit target data or outputs. "Autoencoder" refers to a type of artificial neural network used to learn efficient data codings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learned, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input. In the illustrated application, for a given set of operating conditions or operational data 304, the input data is expected to follow a normal representation, which may be learned by the autoencoders 306. The data follows an encoder and decoder layer flow, and with proper hyperparameter tuning, the autoencoder may learn the input data representation and correlations. In other constructions, the autoencoder may be replaced by, or used in conjunction with, other unsupervised learning techniques such as PCA, K-means clustering, or similar autoencoders.
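To make the encode/decode flow concrete, the following is a minimal, hypothetical sketch (not the disclosed model) of a linear autoencoder with a one-unit bottleneck learning the normal correlation between two synthetic sensor channels; the reconstruction error it produces is the kind of indicator used by the reconstruction error model 322. The data, network size, and learning parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "operational data": two strongly correlated sensor channels.
t = rng.normal(size=(500, 1))
X = np.hstack([t, 2.0 * t]) + 0.01 * rng.normal(size=(500, 2))

# One-unit linear autoencoder: encode z = x We, decode x_hat = z Wd.
We = rng.normal(scale=0.1, size=(2, 1))  # encoder weights
Wd = rng.normal(scale=0.1, size=(1, 2))  # decoder weights
lr, n = 0.05, len(X)
for _ in range(2000):
    Z = X @ We                    # encoding (reduced representation)
    E = X - Z @ Wd                # reconstruction residual
    Wd += lr * (Z.T @ E) / n      # gradient descent on ||E||^2
    We += lr * (X.T @ E @ Wd.T) / n

def reconstruction_error(x):
    """How far a sample departs from the learned normal representation."""
    x = np.asarray(x, dtype=float).reshape(1, -1)
    return float(np.linalg.norm(x - (x @ We) @ Wd))

normal_err = reconstruction_error([1.0, 2.0])    # follows the learned pattern
anomaly_err = reconstruction_error([1.0, -2.0])  # violates the correlation
print(normal_err < anomaly_err)
```

A sample that matches the trained correlation reconstructs almost perfectly, while one that breaks it cannot be recovered from the bottleneck, yielding a large error.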

[0047] The autoencoders 306 employ an artificial neural network to analyze the operational data 304 from the various operational sensors to learn the normal data distribution and to determine what data values may fall out of an expected range. The use of the autoencoders 306 and the artificial neural network allows for more accurate analysis by accounting for situations where the well 120 is operating in an expected manner but one or more data points may be out of an expected range. The output of the autoencoders 306 is used as part of the reconstruction error model 322 as will be discussed in greater detail. As with any artificial neural network, the autoencoders 306 must be trained. This process is described in more detail with regard to FIG. 8.

[0048] Returning to FIG. 3, the operational data 304 is also passed to the pre-processor 308 for pre-processing. The pre-processor 308 operates to remove outlier data to allow for more accurate calculations from the trained flow model 316 and to provide better data for training. The outlier removal 318 is performed autonomously and generally includes three different methodologies. The first methodology is a rule-based system for removing gross outliers. Gross outliers are outliers that are out of range from the general sensor readings. The most common are flow readings with values that are too high or too low, which may be caused by abrupt flow measurements (transients). Pressure and temperature sensors are susceptible to the same type of out-of-range data errors. These outliers are removed using general cut-off ranges that may be specific to each well 120 and each sensor and that are typically determined after an analysis of the expected data ranges.
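The rule-based gross-outlier step might be sketched as follows; the per-sensor cut-off ranges shown are illustrative assumptions, since in practice they would be determined per well 120 and per sensor after analyzing the expected data ranges.

```python
# Illustrative per-sensor cut-off ranges (assumed values, not from the disclosure).
CUTOFFS = {
    "flow": (0.0, 500.0),          # e.g., bbl/d
    "pressure": (10.0, 5000.0),    # e.g., psi
    "temperature": (-10.0, 150.0), # e.g., deg C
}

def remove_gross_outliers(samples, cutoffs=CUTOFFS):
    """Keep only samples whose every reading lies inside its cut-off range."""
    kept = []
    for sample in samples:
        in_range = all(
            lo <= sample[name] <= hi
            for name, (lo, hi) in cutoffs.items()
            if name in sample
        )
        if in_range:
            kept.append(sample)
    return kept

data = [
    {"flow": 120.0, "pressure": 900.0, "temperature": 60.0},
    {"flow": -5.0, "pressure": 900.0, "temperature": 60.0},   # failed sensor
    {"flow": 130.0, "pressure": 9999.0, "temperature": 60.0}, # pressure spike
]
print(len(remove_gross_outliers(data)))  # 1
```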

[0049] The second methodology is steady-state detection for narrow operating range data. For narrow operating ranges, the data is expected to be nearly constant, and therefore a gradient-based or rate-of-change-based method is applied to remove data that are outside the steady state. The gradients may be measured with respect to one or more of the speed or frequency of the electrical submerged pump 106, pump inlet pressure, water flow rate, oil flow rate, and/or other suitable parameters. A threshold is selected based on the rate of change, and all sample points that have gradients or rates of change below the selected value are retained, while those above the selected gradient or rate of change are discarded. Thus, only data corresponding to the steady state for these narrow range values are used for further analysis and the rest are discarded. In some constructions, a K-nearest neighbors algorithm is employed to determine if the gradients or rates of change exceed the predetermined values. "K-nearest neighbors algorithm" (k-NN) refers to a non-parametric method used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space. The output depends on whether k-NN is used for classification or regression. In k-NN classification, the output is a class membership. In k-NN regression, the output is the property value for the object; this value is the average of the values of the k nearest neighbors.
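A simple gradient-based steady-state filter of this kind might look like the following sketch, where successive differences serve as the rate-of-change measure; the threshold and the pump-frequency samples are illustrative assumptions.

```python
def steady_state_filter(values, threshold):
    """Retain samples whose rate of change relative to the previous sample is
    at or below the threshold; the first sample has no gradient and is kept."""
    kept = [values[0]]
    for prev, cur in zip(values, values[1:]):
        if abs(cur - prev) <= threshold:
            kept.append(cur)
    return kept

# One transient excursion to 55 Hz in an otherwise steady 50 Hz signal.
pump_frequency = [50.0, 50.1, 50.0, 55.0, 50.2, 50.1]
print(steady_state_filter(pump_frequency, 0.5))
```

Both the transient point itself and the first point after it are discarded, because each has a large gradient relative to its predecessor; only the near-constant samples survive.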

[0050] The third methodology employed for outlier removal is an adaptive outlier removal process 500 applicable to larger operating range data. For larger operating ranges, the data may have more outliers such as zero readings (i.e., a sensor out-of-range low value) as well as sudden spikes and dips (e.g., transients). Electrical noise between the sensor and the pre-processor 308 may cause some of these data outliers. To assure that only data in acceptable ranges is used, an adaptive method of outlier removal is employed. As illustrated in FIG. 5, an iterative process is employed. In each iteration, a set of points is removed based on an algorithmic decision score, and these decision scores are monitored. When the decision score reaches a local minimum 502, the loop terminates, and the final set of points is returned.

[0051] The algorithmic decision score is determined for every data point by first measuring the variance of the full data set (Vo) for the sensor or data in question. Using a suitable algorithm (autoencoder, KNN, Isolation Forest, etc.), an algorithmic decision score is obtained for each sample data point. In the illustrated construction, an algorithmic decision score of “0” represents inlier data and a score of “1” represents outlier data, although different values may be assigned to inliers and outliers as desired (e.g., the “0” and “1” may be reversed). In each iteration, a variance or other suitable statistical metric is computed from the decision scores of all data points, and a set percentage of the data points corresponding to the highest decision scores is removed from the analysis. The same algorithm is then applied to the remaining data points, and the decision scores and corresponding variance scores are computed again. Specifically, the variance of the sample data including only the inlier values (Vi) is determined; if the variance of the inlier values (Vi) is less than the variance of the full data set (Vo), the process is repeated, and when the variance of the inlier values (Vi) is greater than the variance of the full data set (Vo), the process terminates. This termination point corresponds to the local minimum 502 illustrated in FIG. 5.
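The iterative loop of paragraphs [0050] and [0051] can be sketched as follows. A distance-from-the-median score stands in for the autoencoder/KNN/Isolation Forest decision score, and a median-absolute-deviation cut-off stands in for the variance-based stopping criterion, so this is an illustrative approximation rather than the disclosed algorithm.

```python
from statistics import median

def iterative_outlier_removal(points, max_iter=10):
    """Iteratively drop points whose decision score (here, distance from the
    median, a stand-in for an autoencoder/KNN/Isolation Forest score) exceeds
    3x the median absolute deviation; stop when no point exceeds it."""
    current = list(points)
    for _ in range(max_iter):
        m = median(current)
        scores = [abs(p - m) for p in current]     # decision scores
        mad = median(scores) or 1e-9               # robust spread estimate
        survivors = [p for p, s in zip(current, scores) if s <= 3.0 * mad]
        if len(survivors) == len(current):
            break  # scores have levelled off: the loop's stopping point
        current = survivors
    return current

data = [10.0, 10.2, 9.9, 10.1, 55.0, 10.0, -40.0, 10.3]
print(iterative_outlier_removal(data))
```

The spike (55.0) and the dip (-40.0) are removed in the first pass; the second pass finds no remaining point far from the rest, so the loop terminates with only inliers.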

[0052] Returning to FIG. 3, with the outliers now removed from the data, the clean data is passed on for feature extraction 310, sometimes referred to as feature engineering. Feature extraction 310 selects features that may be used for training the untrained flow model 314. Most of the features may be classified as sensor data, production separator data, and/or well data. Sensor data is generally continuously measured at a predetermined sample rate (e.g., once per minute) and may include electrical submerged pump 106 properties such as current, leakage current, power, differential pressure, inlet pressure, outlet pressure, fluid temperature, winding temperature, vibration, and the like. In addition, other values related to the well 120 that may be measured include casing pressure, tubing pressure, motor frequency, and the like. Production separator data may include a gross production rate, a net production rate, a gas production rate, and separator pressures and temperatures. These values are either periodically sampled at a high sampling rate (e.g., between 20ms and 100ms) or event-based measurements (on-change). Well data is typically data that is static and only changes in response to well or equipment changes. Well data may include the pump design, pump depth, tubing diameter, perforation top, perforation bottom, nominal motor voltage, nominal motor performance (efficiency), and well path (e.g., inclination, azimuth), as well as other similar data.

[0053] With feature extraction 310 complete, the feature extracted data may be used to train the untrained flow model 314. In general, sensor data, well data, and separator pressure are used as inputs to the untrained flow model 314, and the production rates (e.g., gross, net, and gas production rates) are used as outputs. During model training 312, input data is provided to the untrained flow model 314 and the untrained model predicts production rates. The predicted production rates are compared to the known values from the data output from feature extraction 310. The model is adjusted, and this process is repeated until the predicted production rates converge on, and closely match, the known values.

[0054] Next, a cross-validation step 320 is performed to verify that the model is performing as desired. Data taken from feature extraction 310 or other data may be input into the model and the predicted production rates again compared to the known production rates to verify that the model is producing accurate results. In addition, data from similar wells 120 may be employed for the cross-validation step 320 if desired. Once the model passes the cross-validation step 320, it becomes the trained flow model 316 and may be used for operational predictions.

[0055] Data output from feature extraction 310 is also used by the reconstruction error model 322. Specifically, sensor data is used as both inputs and outputs for the reconstruction error model 322.

[0056] As illustrated in FIG. 6, the now trained flow model 316 and the reconstruction error model 322 may be used with actual well data 604 to predict the production rates for the well 120. The well data 604 may include motor current, motor power, pump pressure differential, pump pressure, casing pressure, tube pressure, fluid temperature, motor temperature, pump vibration, and the like. The well data 604 is passed through the pre-processor 308 to remove any outlier data and to assure that the well data 604 is suitable for use and will provide accurate results when presented to each of the reconstruction error model 322 and the trained flow model 316.

[0057] The reconstruction error model 322 and the trained flow model 316 are used in a real-time scenario for prediction. The reconstruction error model 322 is used to calculate a reconstruction error 602 to determine if, and by how much, the input data has changed when compared to the original data used for training. The trained flow model 316 predicts production, or flow, rates for each component of the mixed flow resource 104. In oil field applications, the resource 104 may be a mixed flow of oil, water, and natural gas, with the trained flow model 316 being able to separately predict production or flow rates for each of the oil, water, and natural gas.

[0058] During the process of predicting production rates, the new well data 604 is provided to both the reconstruction error model 322 and the trained flow model 316. The reconstruction error model 322 checks the quality of the data by comparing the data to the expected data for the particular well 120. The reconstruction error model 322 generates the reconstruction error 602 for the new data point and checks if that error exceeds predetermined limits to evaluate if the conditions of the well 120 have changed to the point where the predictions from the trained flow model 316 may be inaccurate.

[0059] At nearly the same time, the trained flow model 316 generates a flow model prediction 606 that includes estimated or predicted production or flow rates based on the new well data 604. Skilled users of the system may use the reconstruction error 602 and the flow model predictions 606 to determine the production rates for the well 120 and whether or not the conditions of the well 120 have changed to the point that the trained flow model 316 is no longer accurate.

[0060] FIG. 7 illustrates the operation of the trained flow model 316 and the reconstruction error model 322 and in particular illustrates the operation of a model retraining process 710 that is initiated based on a retraining decision 702. More specifically, and as was discussed with regard to FIG. 6, new well data 604 is received by the virtual flow meter 114 for processing. The new well data 604 is first analyzed by the reconstruction error model 322 to determine the reconstruction error 602. The reconstruction error 602 is indicative of the level of change that may have occurred in the well 120, in the reservoir, or in the operation of the well 120. The retraining decision 702 compares the reconstruction error 602 to a predefined threshold or allowable reconstruction error 602. If the reconstruction error 602 is less than the predefined threshold, the decision is a “Yes” and the new well data 604 is passed to the trained flow model 316, which then operates to output flow model predictions 606 as discussed above. If, however, the reconstruction error 602 is greater than the predefined threshold, the model retraining process 710 may be automatically initiated.
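The retraining decision 702 reduces to a threshold gate on the reconstruction error 602; the following sketch uses toy stand-ins for the trained models and an assumed threshold, so the specific functions and values are illustrative only.

```python
def process_well_data(sample, error_model, flow_model, max_error):
    """Return (flow prediction, retrain flag) per the retraining decision:
    trust the flow model when the reconstruction error is acceptable,
    otherwise return no prediction and request retraining."""
    error = error_model(sample)
    if error <= max_error:
        return flow_model(sample), False   # conditions unchanged: predict
    return None, True                      # conditions changed: retrain

# Toy stand-ins for the trained reconstruction error and flow models.
error_model = lambda s: abs(s["casing_pressure"] - 900.0) / 900.0
flow_model = lambda s: 0.1 * s["casing_pressure"]

pred, retrain = process_well_data({"casing_pressure": 910.0},
                                  error_model, flow_model, max_error=0.05)
print(pred, retrain)   # prediction returned, no retraining needed
pred, retrain = process_well_data({"casing_pressure": 1500.0},
                                  error_model, flow_model, max_error=0.05)
print(pred, retrain)   # no prediction; retraining process initiated
```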

[0061] The model retraining process 710 begins by requesting a new well test 704. The data from the new well test 704 will be similar to the original data used to train the trained flow model 316 and the reconstruction error model 322 and will be treated in a similar manner. The new data is used to retrain each of the trained flow model 316 and the reconstruction error model 322 to produce a retrained flow model 706 and a retrained reconstruction error model 708. These models then replace the trained flow model 316 and the reconstruction error model 322 that were in place prior to the retraining and are used for future predictions until an additional retraining is required.

[0062] The model retraining process 710 may be autonomous such that it occurs automatically without any user intervention to assure that the trained flow model 316 or the retrained flow model 706 produces suitably accurate results. However, other constructions may include semi-automatic initiation or supervised initiation.

[0063] FIG. 8 illustrates one possible training process 800 that may be employed to train the autoencoders 306 for use with the reconstruction error model 322. The training process 800 includes an outlier removal step 802, a clustering step 804, a sample data extraction step 806, autoencoder training 808, and autoencoder testing 810.

[0064] The outlier removal step 802 utilizes steady-state and/or K-Nearest neighbors algorithms to determine which data points may be outliers and removes those data points from the data set as was described with regard to FIG. 3.

[0065] Next, the clustering step 804 uses K-means clustering with a pre-assigned K to cluster the data into respective labels or categories. "K-means clustering" refers to a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster. For example, the clustering may be based on the pump frequency such that the data is clustered or grouped based on pump frequency values. As should be understood, other clustering techniques may also be employed, or the clustering step may be omitted.
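For illustration, a minimal one-dimensional K-means (clustering pump-frequency samples, as suggested above) might be sketched as follows; the pre-assigned K and the sample values are assumptions, and a production system could equally use a library implementation.

```python
def kmeans_1d(values, k, n_iter=20):
    """Tiny 1-D K-means: assign each value to its nearest center, then move
    each center to the mean of its cluster; repeat for a fixed iteration budget.
    The spread-out initialization below is simplistic but fine for a sketch."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Pump-frequency samples around three operating points (assumed values).
freqs = [49.8, 50.1, 50.0, 60.2, 59.9, 60.0, 70.1, 69.8, 70.0]
centers, clusters = kmeans_1d(freqs, k=3)
print(sorted(round(c, 1) for c in centers))  # cluster centers near 50/60/70 Hz
```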

[0066] With the clustering complete, a sample data extraction step 806 is performed. In one arrangement, an equal number of data is sampled from each cluster to remove bias towards clusters or data sets with a higher number of data points.

[0067] The extracted sampled data is then used for autoencoder training 808. As discussed above, the data is provided to the autoencoder 306 and the results of the autoencoder 306 are compared to known results. A reconstruction plot indicative of the error of the autoencoder 306 may then be produced.
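The equal-per-cluster sample extraction of paragraph [0066] might be sketched as follows; capping the per-cluster count at the size of the smallest cluster is one reasonable reading of "an equal number of data", not necessarily the disclosed behavior.

```python
import random

def balanced_sample(clusters, seed=0):
    """Draw the same number of points from every cluster so that large
    clusters do not bias the training data."""
    rng = random.Random(seed)
    n = min(len(c) for c in clusters)   # equal count, capped by smallest cluster
    sample = []
    for cluster in clusters:
        sample.extend(rng.sample(cluster, n))
    return sample

# Clusters of pump-frequency data of unequal size (assumed values).
clusters = [
    [50.0, 50.1, 49.9, 50.2, 50.0],   # many points at 50 Hz
    [60.0, 60.1],                     # few points at 60 Hz
    [70.0, 70.2, 69.9],
]
print(len(balanced_sample(clusters)))  # 6 -> two points from each cluster
```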

[0068] Finally, the autoencoder training process 800 is completed with autoencoder testing 810. The autoencoder 306 is tested using a blind dataset (i.e., a dataset with known results that are compared to the predicted results) or using the same training datasets to verify that the results produced by the autoencoder 306 are sufficiently close to the expected or known results to allow production use of the autoencoders 306.

[0069] Training data is an important aspect of the virtual flow meter. Without proper training data, the resulting model could be inaccurate. To assure proper training data, the raw data 902, illustrated in FIG. 9A, goes through a preprocessing step.

[0070] The raw data 902 includes both input data and output data for both transient and steady state operation. The virtual flow meter is mainly used to predict steady state operating conditions such that the use of steady state training data is preferred. While transient data could be used for additional training, it would not increase the accuracy of the virtual flow model at predicting steady state operating conditions and is therefore intentionally omitted in this example.

[0071] The raw data 902 illustrated in FIG. 9A may include outlier data and/or data that is obviously wrong (e.g., failed sensor or out of range values). This obviously incorrect data is removed in the first step of data preprocessing. This step in the process may be autonomous or could include some user intervention or interaction. In addition, the outlier removal process may be the same as or similar to the step of outlier removal 318 described with regard to removing inaccurate incoming data during operation.

[0072] The removal of the outlier data from the raw data 902 leads to a set of clean data 904 illustrated in FIG. 9B. The clean data 904 still includes transient data and steady state data, but much of the clearly inaccurate data has been removed. Next, the clean data 904 is processed to separate the transient data. In one example, a K-nearest neighbors algorithm or autoencoder is employed to facilitate the separation of the transient data. Again, the processes and systems employed may be the same as or similar to those used to preprocess the incoming operating data prior to it being submitted to the virtual flow meter.

[0073] With the separation of the transient data complete, the resulting preprocessed training data 906 as illustrated in FIG. 9C is ready to be used for training.

[0074] As noted above, the examples described herein do not utilize the transient data. However, other systems may employ the transient data if desired. For example, transient data could be used to calibrate time lag behavior between various data and/or to determine flow propagation behavior between various positions (e.g., between a well 120 and a separator 302). This data could then be used to train a refined model that carries dynamic time lags (within the data).

[0075] FIG. 10 illustrates in a flow chart format a routine 1000 that may be followed by the virtual flow meter 114. In block 1002, operating data is received from the well, the operating data being indicative of the current operating parameters of the well. In block 1004, routine 1000 calculates a reconstruction error using an AI-based trained reconstruction error model and the operating data from the well. In block 1006, routine 1000 predicts a flow value using an AI-based trained flow model and the operating data from the well in response to a comparison between the reconstruction error and a predefined allowable error indicating that the reconstruction error is acceptable. In block 1008, in response to the comparison between the reconstruction error and the predefined allowable error indicating that the reconstruction error is not acceptable, routine 1000 initiates a retraining process. In block 1010, routine 1000 retrains the trained flow model using the operating data from the well to create a retrained flow model. In block 1012, routine 1000 retrains the trained reconstruction error model using the operating data from the well to create a retrained reconstruction error model. In block 1014, routine 1000 replaces the trained flow model with the retrained flow model. In block 1016, routine 1000 replaces the trained reconstruction error model with the retrained reconstruction error model.

[0076] With reference to FIG. 11, an example computer system 1100 is described that may house a portion of, or the entire virtual flow meter 114. The computer system 1100 employs at least one data processing system 1102. A data processing system may comprise at least one processor 1116 (e.g., a microprocessor/CPU, GPU, and the like). The processor 1116 may be configured to carry out various processes and functions described herein by executing from a memory 1126, computer/processor executable instructions 1128 corresponding to one or more applications 1130 (e.g., software and/or firmware) or portions thereof that are programmed to cause the at least one processor to carry out the various processes and functions described herein.

[0077] The memory 1126 may correspond to an internal or external volatile or nonvolatile processor memory 1118 (e.g., main memory, RAM, and/or CPU cache), that is included in the processor and/or in operative connection with the processor. Such a memory may also correspond to non-transitory nonvolatile storage device 1120 (e.g., flash drive, SSD, hard drive, ROM, EPROMs, optical discs/drives, or other non-transitory computer readable media) in operative connection with the processor.

[0078] The described data processing system 1102 may optionally include one or more display devices 1112 and one or more input devices 1114 in operative connection with the processor. The display device, for example, may include an LCD or AMOLED display screen, monitor, VR headset, projector, or any other type of display device capable of displaying outputs from the processor. The input device, for example, may include a mouse, keyboard, touch screen, touch pad, trackball, buttons, keypad, game controller, gamepad, camera, microphone, motion sensing devices that capture motion gestures, operational sensors (e.g., pressure, temperature, flow, etc.), or other type of input device capable of providing user inputs or other information to the processor.

[0079] The data processing system 1102 may be configured to execute one or more applications 1130 that facilitate the features described herein.

[0080] For example, as illustrated in FIG. 11, the at least one processor 1116 may be configured via executable instructions 1128 (e.g., included in the one or more applications 1130) included in at least one memory 1126 to operate all or a portion of the virtual flow meter 114.

[0081] Referring now to FIG. 12, a methodology is illustrated that facilitates operation of a portion of, or all the virtual flow meter 114. While the methodology is described as being a series of acts that are performed in a sequence, it is to be understood that the methodology may not be limited by the order of the sequence. For instance, unless stated otherwise, some acts may occur in a different order than what is described herein. In addition, in some cases, an act may occur concurrently with another act. Furthermore, in some instances, not all acts may be required to implement a methodology described herein.

[0082] It should be appreciated that this described methodology may include additional acts and/or alternative acts corresponding to the features described previously with respect to the data processing computer system 1100.

[0083] It is also important to note that while the disclosure includes a description in the context of a fully functional system and/or a series of acts, those skilled in the art will appreciate that at least portions of the mechanism of the present disclosure and/or described acts may be capable of being distributed in the form of computer/processor executable instructions 1128 (e.g., software/firmware applications 1130) contained within a storage device 1120 that corresponds to a non-transitory machine-usable, computer-usable, or computer-readable medium in any of a variety of forms. The computer/processor executable instructions 1128 may include a routine, a sub-routine, programs, applications, modules, libraries, and/or the like. Further, it should be appreciated that computer/processor executable instructions may correspond to and/or may be generated from source code, byte code, runtime code, machine code, assembly language, Java, JavaScript, Python, Julia, C, C#, C++ or any other form of code that may be programmed/configured to cause at least one processor to carry out the acts and features described herein. Still further, results of the described/claimed processes or functions may be stored in a computer-readable medium, displayed on a display device, and/or the like.

[0084] It should be appreciated that acts associated with the above-described methodologies, features, and functions (other than any described manual acts) may be carried out by one or more data processing systems 1102 via operation of one or more of the processors 1116. Thus, it is to be understood that when referring to a data processing system, such a system may be implemented across several data processing systems organized in a distributed system in communication with each other directly or via a network.

[0085] As used herein a processor corresponds to any electronic device that is configured via hardware circuits, software, and/or firmware to process data. For example, processors described herein may correspond to one or more (or a combination) of a microprocessor, CPU, GPU, or any other integrated circuit (IC) or other type of circuit that is capable of processing data in a data processing system 1102. As discussed previously, the processor 1116 that is described or claimed as being configured to carry out a particular described/claimed process or function may correspond to a CPU that executes computer/processor executable instructions 1128 stored in a memory 1126 in the form of software to carry out such a described/claimed process or function. However, it should also be appreciated that such a processor may correspond to an IC that is hardwired with processing circuitry (e.g., an FPGA or ASIC IC) to carry out such a described/claimed process or function. Also, it should be understood, that reference to a processor may include multiple physical processors or cores that are configured to carry out the functions described herein. In addition, it should be appreciated that a data processing system and/or a processor may correspond to a controller that is operative to control at least one operation.

[0086] In addition, it should also be understood that a processor that is described or claimed as being configured to carry out a particular described/claimed process or function may correspond to the combination of the processor 1116 with the executable instructions 1128 (e.g., software/firmware applications 1130) loaded/installed into the described memory 1126 (volatile and/or non-volatile), which are currently being executed and/or are available to be executed by the processor to cause the processor to carry out the described/claimed process or function. Thus, a processor that is powered off or is executing other software, but has the described software loaded/stored in a storage device 1120 in operative connection therewith (such as on a hard drive or SSD) in a manner that is available to be executed by the processor (when started by a user, hardware and/or other software), may also correspond to the described/claimed processor that is configured to carry out the particular processes and functions described/claimed herein.

[0087] FIG. 12 illustrates a further example of a data processing system 1200 with which one or more embodiments of the data processing system 1102 described herein may be implemented. For example, in some embodiments, the at least one processor 1116 (e.g., a CPU or GPU) may be connected to one or more bridges/buses/controllers 1202 (e.g., a north bridge, a south bridge). One of the buses for example, may include one or more I/O buses such as a PCI Express bus. Also connected to various buses in the depicted example may be the processor memory 1118 (e.g., RAM) and a graphics controller 1204. The graphics controller 1204 may generate a video signal that drives the display device 1112. It should also be noted that the processor 1116 in the form of a CPU may include a memory therein such as a CPU cache memory. Further, in some embodiments one or more controllers (e.g., graphics, south bridge) may be integrated with the CPU (on the same chip or die). Examples of CPU architectures include IA-32, x86-64, and ARM processor architectures.

[0088] Other peripherals connected to one or more buses may include communication controllers 1214 (Ethernet controllers, WiFi controllers, cellular controllers) operative to connect to a network 1222 such as a local area network (LAN), Wide Area Network (WAN), the Internet, a cellular network, and/or any other wired or wireless networks or communication equipment. The data processing system 1200 may be operative to communicate with one or more servers 1224, and/or any other type of device or other data processing system that is connected to the network 1222. For example, in some embodiments, the data processing system 1200 may be operative to communicate with a database stored in the memory 1126. Examples of a database may include a relational or non-relational database (e.g., InfluxDB, Oracle, Microsoft SQL Server). Also, it should be appreciated that in some embodiments, such a database may be executed by the processor 1116.

[0089] Further components connected to various busses may include one or more I/O controllers 1212 such as USB controllers, Bluetooth controllers, and/or dedicated audio controllers (connected to speakers and/or microphones). It should also be appreciated that various peripherals may be connected to the I/O controller(s) (via various ports and connections) including the input devices 1114, and output devices 1206 (e.g., printers, speakers) or any other type of device that is operative to provide inputs to and/or receive outputs from the data processing system.

[0090] Also, it should be appreciated that many devices referred to as input devices 1114 or output devices 1206 may both provide inputs and receive outputs of communications with the data processing system 1200. For example, the processor 1116 may be integrated into a housing (such as a tablet) that includes a touch screen that serves as both an input and display device. Further, it should be appreciated that some devices (such as a laptop) may include a plurality of different types of input devices 1114 (e.g., touch screen, touch pad, and keyboard). Also, it should be appreciated that other hardware 1208 connected to the I/O controllers 1212 may include any type of device, machine, sensor, or component that is configured to communicate with a data processing system.

[0091] Additional components connected to various busses may include one or more storage controllers 1210 (e.g., SATA). A storage controller 1210 may be connected to a storage device 1120 such as one or more storage drives and/or any associated removable media. Also, in some examples, a storage device 1120 such as an NVMe M.2 SSD may be connected directly to a bus (e.g., bridges/buses/controllers 1202) such as a PCI Express bus.

[0092] It should be understood that the data processing system 1200 may directly or over the network 1222 be connected with one or more other data processing systems such as a server 1224 (which may in combination correspond to a larger data processing system). For example, a larger data processing system may correspond to a plurality of smaller data processing systems implemented as part of a distributed system in which processors associated with several smaller data processing systems may be in communication by way of one or more network connections and may collectively perform tasks described as being performed by a single larger data processing system.

[0093] A data processing system in accordance with an embodiment of the present disclosure may include an operating system 1216. Such an operating system may employ a command line interface (CLI) shell and/or a graphical user interface (GUI) shell. The GUI shell permits multiple display windows to be presented in the graphical user interface simultaneously, with each display window providing an interface to a different application or to a different instance of the same application. A cursor or pointer in the graphical user interface may be manipulated by a user through a pointing device such as a mouse or touch screen. The position of the cursor/pointer may be changed and/or an event, such as clicking a mouse button or touching a touch screen, may be generated to actuate a desired response. Examples of operating systems that may be used in a data processing system may include Microsoft Windows, Linux, UNIX, iOS, macOS, and Android operating systems.

[0094] As used herein, the processor memory 1118, storage device 1120, and memory 1126 may all correspond to the previously described memory 1126. Also, the previously described applications 1130, operating system 1216, and data 1220 may be stored in one or more of these memories or any other type of memory or data store. Thus, the processor 1116 may be configured to manage, retrieve, generate, use, revise, and/or store applications 1130, data 1220, and/or other information described herein from/in the processor memory 1118, storage device 1120, and/or memory 1126.

[0095] In addition, it should be appreciated that data processing systems may include virtual machines in a virtual machine architecture or cloud environment that execute the executable instructions. For example, the processor and associated components may correspond to the combination of one or more virtual machine processors of a virtual machine operating in one or more physical processors of a physical data processing system 1200. Examples of virtual machine architectures include VMware ESXi, Microsoft Hyper-V, Xen, and KVM. Further, the described executable instructions 1128 may be bundled as a container that is executable in a containerization environment such as Docker executed by the processor 1116.

[0096] Also, it should be noted that the processor 1116 described herein may correspond to a remote processor located in a data processing system such as a server that is remote from the display and input devices described herein. In such an example, the described display device and input device may be included in a client data processing system (which may have its own processor) that communicates with the server (which includes the remote processor) through a wired or wireless network (which may include the Internet). In some embodiments, such a client data processing system may, for example, execute a remote desktop application or may correspond to a portal device that carries out a remote desktop protocol with the server in order to send inputs from an input device to the server and receive visual information from the server to display through a display device. Examples of such remote desktop protocols include Teradici's PCoIP, Microsoft's RDP, and the RFB protocol. In another example, such a client data processing system may execute a web browser or thin client application. Inputs from the user may be transmitted from the web browser or thin client application to be evaluated on the server, rendered by the server, and an image (or series of images) sent back to the client data processing system to be displayed by the web browser or thin client application. Also, in some examples, the remote processor described herein may correspond to a combination of a virtual processor of a virtual machine executing in a physical processor of the server.

[0097] Those of ordinary skill in the art will appreciate that the hardware and software depicted for the data processing system may vary for particular implementations. The depicted examples are provided for the purpose of explanation only and are not meant to imply architectural limitations with respect to the present disclosure. Also, those skilled in the art will recognize that, for simplicity and clarity, the full structure and operation of all data processing systems suitable for use with the present disclosure are not depicted or described herein. Instead, only so much of a data processing system as is unique to the present disclosure or necessary for an understanding of the present disclosure is depicted and described. The remainder of the construction and operation of the data processing system 1200 may conform to any of the various current implementations and practices known in the art.

[0098] As mentioned previously, gas lifting methods may be employed to artificially lift the resource 104 upward. With gas lifting, a portion of the produced gas from the multiphase fluid is compressed and re-injected at the bottom of the well 120 via a specifically designed mandrel setup acting as a valve between the production tubing and the annulus. In this way, the gas-liquid ratio is shifted to lower the density of the fluid/gas mixture (gas bubbles in liquid), eventually creating and re-establishing a continuous flow of the multiphase mixture. The production of oil utilizing a gas lifting method is a function of the amount of gas injected, the rate of production, the depth of the mandrel valve installation, as well as the characteristics of the resource 104. The rate of production may be controlled by setting the opening of a choke valve 118.
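The density-lowering effect described above can be illustrated with a simple volume-weighted (homogeneous, no-slip) mixture model. This sketch is not from the patent; the densities, flow rates, and function names are invented for illustration only.

```python
# Hypothetical illustration of gas lift: injecting gas shifts the gas-liquid
# ratio and lowers the density of the produced mixture, easing upward flow.
def mixture_density(rho_liquid, rho_gas, q_liquid, q_gas):
    """Volume-weighted density of a gas/liquid mixture (homogeneous no-slip model)."""
    total = q_liquid + q_gas
    return (rho_liquid * q_liquid + rho_gas * q_gas) / total

# Example: a heavy liquid column with and without injected lift gas
# (densities in kg/m^3, flow rates in arbitrary volume units).
rho_no_lift = mixture_density(rho_liquid=900.0, rho_gas=50.0, q_liquid=100.0, q_gas=0.0)
rho_with_lift = mixture_density(rho_liquid=900.0, rho_gas=50.0, q_liquid=100.0, q_gas=100.0)
```

With equal gas and liquid volumes, the mixture density drops from 900 to the average of the two phase densities, which is the effect gas lifting exploits.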

[0099] Optimization of either the production of oil or economic parameters, i.e., the cost to run or operate the equipment, is therefore essential and has traditionally been done utilizing PID controllers to set the choke valve 118 opening. These PID controllers, however, do not take into account depletion of the resource 104, changes in compression of the injection gas, or changes in the properties of the resource 104.

[0100] FIG. 13 illustrates the operation of an AI trained optimization model 1302 combined with the trained flow model 316 in a flow chart format. The AI trained optimization model 1302 calculates optimal values for the choke valve 118 setting and other independent gas injection flow settings based on desired production fluid flows or economic parameters as described above. The optimal settings output by the AI trained optimization model 1302 may then be provided as input to the trained flow model 316, along with input resulting from operating data of the well 1308.
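The data flow of FIG. 13 can be sketched as a two-stage pipeline. The placeholder functions below stand in for the AI trained optimization model 1302 and the trained flow model 316; their internal logic, the setting names, and all numeric values are invented for illustration and do not reflect the actual trained models.

```python
# Minimal sketch of the FIG. 13 pipeline: optimizer output feeds the flow model.
def optimization_model(well_data):
    """Placeholder for model 1302: returns optimal valve settings (hypothetical logic)."""
    return {"choke_opening": 0.6, "gas_injection_rate": 1.2}

def flow_model(settings, well_data):
    """Placeholder for trained flow model 316: predicts a flow value (hypothetical logic)."""
    return settings["gas_injection_rate"] * well_data["pressure"] * settings["choke_opening"]

well_data = {"pressure": 10.0, "temperature": 350.0}   # measured operating data
optimal_settings = optimization_model(well_data)        # step 1: optimize valve settings
predicted_flow = flow_model(optimal_settings, well_data)  # step 2: predict resulting flow
```

The key structural point is that the optimizer's output becomes one of the flow model's inputs, alongside the measured well operating data.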

[0101] The AI trained optimization model 1302 takes as input gas injection setting data 1304, which may include setting data for the opening of the choke valve 118. Other variables may be input to the AI trained optimization model 1302, including measured well data 1308. Well data 604 may include data obtained from sensors such as temperature sensor data, pressure sensor data, choke valve 118 opening data, etc.

[0102] In order to train the AI trained optimization model 1302, the gas injection setting data 1304 may be utilized to train an untrained optimization model 1302. Oil production flow rates may be measured at different gas injection valve settings at given production pressures in order to produce gas lift performance curves depicting production rate as a function of gas injection rate at a specific production pressure. This data may then be stored for comparison with the production rates predicted by the virtual flow meter 114.
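A gas lift performance curve of the kind described above can be built from measured points with a simple regression. The sketch below uses a quadratic polynomial fit as one possible stand-in for the learned model; the data points are invented and only illustrate the typical concave shape (diminishing returns at high injection rates).

```python
import numpy as np

# Hypothetical measured points at one fixed production pressure:
# oil production rate versus gas injection rate (invented values).
injection_rates = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
oil_rates = np.array([10.0, 16.0, 20.0, 22.0, 22.5, 22.0])  # diminishing returns

# A quadratic fit captures the typical concave shape of a lift performance curve.
coeffs = np.polyfit(injection_rates, oil_rates, deg=2)
curve = np.poly1d(coeffs)
```

One such curve would be fitted per production pressure; the family of curves then forms the training data the optimizer searches over.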

[0103] The AI trained optimization model 1302 may then determine an optimal gas injection valve setting for an injection rate and production rate using a mathematical regression, or an optimization algorithm running on the mathematical regression. Subsequently, the AI trained optimization model 1302 uses the optimal valve setting to predict operating data 304, which may then be used as input data to the trained flow model 316. In a similar way, an optimal choke valve setting may be obtained by the AI trained optimization model 1302.
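The step of running an optimization algorithm over the regression can be sketched as a search for the injection rate that maximizes the regressed production rate. The performance curve below is an invented concave function standing in for whatever regression model 1302 learned; a simple grid search is used here, though a gradient-based optimizer would serve the same purpose.

```python
# Hypothetical regressed performance curve: rises, then falls past the optimum.
def predicted_oil_rate(gas_injection_rate):
    """Invented concave stand-in for the learned regression (peak at 1.5)."""
    return -2.0 * (gas_injection_rate - 1.5) ** 2 + 22.0

def find_optimal_injection(lo=0.0, hi=3.0, steps=301):
    """Grid search over the regression; a gradient-based optimizer could be used instead."""
    best_x, best_y = lo, predicted_oil_rate(lo)
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        y = predicted_oil_rate(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x

optimal_rate = find_optimal_injection()
```

The same search pattern applies to the choke valve setting, with the regression evaluated over choke openings instead of injection rates.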

[0104] The proposed method allows optimization against any measured and extrapolated gas lift performance curves. Current methods are typically limited to a relatively small number of distinct gas lift performance curves, which makes exact tuning difficult. In addition, the proposed method is fully data driven, i.e., no physical modeling is needed, and it can identify changes in the reservoir conditions. The optimization data for the gas injection and oil production flow can then be used as an input to the virtual flow meter 114, which predicts a fluid flow value. The predicted fluid flow value can then be compared to the measured oil production flow rate. When the difference between the measured oil production flow rate and the predicted fluid flow value is above a threshold value, retraining of the AI trained optimization model 1302 may be triggered automatically or manually.
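The retraining trigger described above amounts to a simple threshold test on the deviation between prediction and measurement. This sketch uses a relative-deviation criterion with an invented threshold; the actual criterion and threshold value would be application-specific.

```python
# Hypothetical retraining trigger: flag the model when the predicted flow
# drifts too far from the measured oil production rate. Threshold is invented.
def needs_retraining(predicted_flow, measured_flow, threshold=0.1):
    """Return True when the relative deviation exceeds the threshold."""
    if measured_flow == 0:
        return predicted_flow != 0
    return abs(predicted_flow - measured_flow) / abs(measured_flow) > threshold

flag_ok = needs_retraining(predicted_flow=102.0, measured_flow=100.0)     # 2% drift
flag_drift = needs_retraining(predicted_flow=120.0, measured_flow=100.0)  # 20% drift
```

In practice this check would run continuously on the edge device, so the trigger fires as soon as reservoir conditions drift away from the training data.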

[0105] Although an exemplary embodiment of the present disclosure has been described in detail, those skilled in the art will understand that various changes, substitutions, variations, and improvements disclosed herein may be made without departing from the spirit and scope of the disclosure in its broadest form.

[0106] None of the description in the present application should be read as implying that any particular element, step, act, or function is an essential element, which must be included in the claim scope: the scope of patented subject matter is defined only by the allowed claims. Moreover, none of these claims are intended to invoke a means plus function claim construction unless the exact words "means for" are followed by a participle.

[0107] The data driven optimization model may be combined with the real time data driven virtual flow meter to provide optimal valve settings, eliminating the need for exact physical and mathematical modeling. Estimating fluid flows for a well with a mixed fluid flow utilizing the combined model allows for real time optimization of valve settings, as the model is able to analyze its performance and retune itself when conditions change. As with the virtual flow meter, the optimization model 1302 may be housed on an edge computing device positioned near the well or on a remote computer some distance away.

[0108] Although an exemplary embodiment of the present disclosure has been described in detail, those skilled in the art will understand that various changes, substitutions, variations, and improvements disclosed herein may be made without departing from the spirit and scope of the disclosure in its broadest form.

[0109] None of the description in the present application should be read as implying that any particular element, step, act, or function is an essential element, which must be included in the claim scope: the scope of patented subject matter is defined only by the allowed claims. Moreover, none of these claims are intended to invoke a means plus function claim construction unless the exact words "means for" are followed by a participle.