

Title:
OPTIMIZATION OF A MULTI-PERIOD MODEL FOR VALUATION APPLIED TO FLOW CONTROL VALVES
Document Type and Number:
WIPO Patent Application WO/2013/059079
Kind Code:
A1
Abstract:
Apparatus and methods for controlling equipment to recover hydrocarbons from a reservoir including constructing a collection of reservoir models wherein each model represents a realization of the reservoir and comprises a subterranean formation measurement, estimating the measurement for the model collection, and controlling a device wherein the controlling comprises the measurement estimate wherein the constructing, estimating, and/or controlling includes a rolling flexible approach and/or a nearest neighbor approach.

Inventors:
PRANGE MICHAEL DAVID (US)
BAILEY WILLIAM J (US)
WANG DADI (US)
Application Number:
PCT/US2012/059899
Publication Date:
April 25, 2013
Filing Date:
October 12, 2012
Assignee:
SCHLUMBERGER CA LTD (CA)
SCHLUMBERGER SERVICES PETROL (FR)
SCHLUMBERGER HOLDINGS (GB)
SCHLUMBERGER TECHNOLOGY BV (NL)
PRAD RES & DEV LTD (GB)
SCHLUMBERGER TECHNOLOGY CORP (US)
International Classes:
E21B44/00; G01V9/00
Foreign References:
US20040065439A1, 2004-04-08
US20100163230A1, 2010-07-01
US20100071897A1, 2010-03-25
Other References:
KUCHUK, FIKRI, ET AL.: "Determination of In Situ Two-Phase Flow Properties Through Downhole Fluid Movement Monitoring," 2010 Society of Petroleum Engineers, pages 575-587
Attorney, Agent or Firm:
GREENE, Rachel et al. (IP Administration Center of Excellence, Room 472, Houston TX, US)
Claims:
In the Claims:

1. A method for controlling equipment to recover hydrocarbons from a reservoir, comprising:

constructing a collection of reservoir models wherein each model represents a realization of the reservoir and comprises a subterranean formation measurement;

estimating the measurement for the model collection; and

controlling a device wherein the controlling comprises the measurement estimate, wherein the constructing, estimating, and/or controlling comprise a rolling flexible approach.

2. The method of claim 1, wherein the estimating comprises a simulator.

3. The method of claim 1, wherein the controlling comprises an optimizer.

4. The method of claim 3, wherein the optimizer resets the measurements and operates in a rolling fashion.

5. The method of claim 1, wherein the controlling of flow rates comprises a decision resolution.

6. The method of claim 5, wherein the estimating comprises the decision resolution.

7. The method of claim 1, wherein the estimating comprises basis-function regression.

8. The method of claim 1, wherein the estimating comprises a k-neighbor approach.

9. The method of claim 1, wherein the geophysical measurements are surface sensors, downhole sensors, temporary sensors, permanent sensors, well logs, fluid production, well tests, electromagnetic surveys, gravity surveys, nuclear surveys, tiltmeter surveys, seismic surveys, water, oil, or gas flow measurements, and/or separated or combined flow measurements.

10. The method of claim 1, further comprising flooding with oil, gas, water, or carbon dioxide, EOR, static or controllable downhole valves, well placement, platform type and placement, drilling, heating the formation, or geosteering.

11. A method for controlling equipment to recover hydrocarbons from a reservoir, comprising:

constructing a collection of reservoir models wherein each model represents a realization of the reservoir and comprises a subterranean formation measurement;

estimating the measurement for the model collection; and

controlling a device wherein the controlling comprises the measurement estimate, wherein the constructing, estimating, and/or controlling comprise a nearest neighbor approach.

12. The method of claim 11, wherein the estimating comprises a simulator.

13. The method of claim 11, wherein the controlling comprises an optimizer.

14. The method of claim 13, wherein the optimizer resets the measurements and operates in a rolling fashion.

15. The method of claim 11, wherein the controlling of flow rates comprises a decision resolution.

16. The method of claim 15, wherein the estimating comprises the decision resolution.

17. The method of claim 11, wherein the estimating comprises basis-function regression.

18. The method of claim 11, wherein the estimating comprises a k-neighbor approach.

19. The method of claim 11, wherein the geophysical measurements are surface sensors, downhole sensors, temporary sensors, permanent sensors, well logs, fluid production, well tests, electromagnetic surveys, gravity surveys, nuclear surveys, tiltmeter surveys, seismic surveys, water, oil, or gas flow measurements, and/or separated or combined flow measurements.

20. The method of claim 11, further comprising flooding with oil, gas, water, or carbon dioxide, EOR, static or controllable downhole valves, well placement, platform type and placement, drilling, heating the formation, or geosteering.

Description:
Priority Claim

[0001] This application claims priority as a PCT application of United States Provisional Application Serial Number 61/549526 filed on October 20, 2011, which is incorporated by reference herein.

Field

[0002] This application relates to methods and apparatus to control and optimize flow control valves in the oil field services industry.

Background

[0003] In the operation of oil wells, a critical issue is how to control the flow rate of oil such that revenue from the well is maximized. In long horizontal or multi-lateral wells, it may be advantageous to separately control rates in different parts of the well, e.g., to delay water incursions. Such control can be achieved through downhole flow control valves (FCVs), which are installed underground in wells to regulate the flow of crude oil. A typical example is shown in Figures 1A and 1B (prior art), where there are three horizontal boreholes in a well, with an FCV at the head of each borehole. In practice, operators closely monitor the geophysical properties of the well and dynamically adjust the FCVs such that maximum revenue from the well can be attained. Here we are concerned with expected revenue because the evolution of the geophysical nature of the well is a stochastic process, with the grid, aquifer strength, and oil-water contact as the uncertainties in the system. For a risk-neutral operator, the difference between the expected revenue of the well without FCVs and the expected revenue of the well with FCVs installed and optimally controlled with future measurements gives the value of the FCVs themselves. Our task is to find the optimal control strategy and hence the value of the FCVs.
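Stated compactly (this formula is our paraphrase of the preceding sentence, with R denoting well revenue and expectations taken over the reservoir uncertainties):

$$V_{\mathrm{FCV}} \;=\; \mathbb{E}\!\left[R \mid \text{FCVs installed, optimally controlled}\right] \;-\; \mathbb{E}\!\left[R \mid \text{no FCVs}\right].$$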

[0004] There are two major obstacles before us: the curse of dimensionality and the non-Markovian property of the problem. To derive the optimal value, we model the downhole flow control problem as a dynamic programming problem and solve for the optimal control policy by backward induction. In order to derive the maximum production, the operator has to be forward-looking in decision-making. The decision made in the current period will affect the measurements observed in the future and hence the decisions made in the future. The operator must therefore take future measurements and decisions into account when setting the valves in the current period. In other words, the operator learns from future information. The fundamental theory and standard computational techniques for dynamic programming can be found in the literature. However, application of dynamic programming to real-world problems is often hindered by the so-called curse of dimensionality, which means that the state space grows exponentially with the number of state variables. To overcome this obstacle, various approximate methods have been proposed; a detailed and comprehensive description can be found in the literature.

[0005] Another obstacle we face is the non-Markovian property of the problem. The payoff of the well is associated with the operation of the valves and the successive measurements. It can be difficult, or simply wrong, to encode the payoff as a function of only current measurements when the payoff depends on the system trajectory or history rather than on the current state alone. In other words, the payoff depends not only on the current measurements and valve settings, but also on previous measurements and valve settings. This non-Markovian property poses a major difficulty in the valuation problem, since previous actions enter into the problem as states for later periods, exacerbating the curse of dimensionality. Theoretically, to exactly solve a non-Markovian problem, we need to enumerate all possible settings for the FCVs and, under each possible setting, generate the evolution of geophysical properties and revenue by simulation. After all possible results have been obtained, we can search for the optimal value using standard backward induction. While this method is straightforward and accurate, it is hardly feasible in reality due to its immense computational demand. Consider the case in Figure 1. Even for a simplified case where the FCVs are adjusted only three times throughout the life of the well, it took us a couple of months to generate all those simulations on a small Linux cluster. In fact, if we use eight Eclipse grids, three time periods, four settings for each valve, a fixed setting for one valve, two aquifer strength samples, and three oil-water contact samples, we need a total of 8 × 4 × (4²)³ × 2 × 3 = 786,432 Eclipse simulations. If each simulation takes an average of four minutes, it would require 2,184 days to complete all simulations on a single computer. A detailed discussion of the optimal policy in this three-period model is presented below.
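As a sanity check, the simulation counts quoted above can be reproduced with a few lines (the factor names below are ours):

```python
# Back-of-the-envelope check of the full-enumeration cost described above.
grids = 8                      # Eclipse grids
fixed_valve_settings = 4       # valve with a fixed setting: 4 choices, set once
free_valve_settings = 4 ** 2   # two freely adjustable valves, 4 settings each
periods = 3                    # number of adjustment periods
aquifer_samples = 2            # aquifer strength samples
owc_samples = 3                # oil-water contact samples

n_sims = (grids * fixed_valve_settings * free_valve_settings ** periods
          * aquifer_samples * owc_samples)
print(n_sims)                              # 786432

minutes_per_sim = 4
print(n_sims * minutes_per_sim / (60 * 24))  # 2184.0 days on one computer
```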

Figures

[0006] Figures 1A and 1B (PRIOR ART) provide an example of previous methods.

[0007] Figure 2 is a plot of data from a training set.

[0008] Figure 3 is an example of valuation of a single flow control valve. This example illustrates that using smaller bins does not necessarily earn the operator higher values. Figure 3 relates to Figure 9 below.

[0009] Figure 4 is a plot of payoff as a function of time.

[00010] Figures 5A and 5B illustrate the performance of the rolling-static policy under different bin sizes. We use hierarchical clustering to group measurements together. We use three measurements in the computation: FOPR, FWPR, and FGPR. When the bin size is 1, the optimal value is $423.18M while the value generated by the rolling-static policy is $421.56M.

[00011] Figures 6A, 6B, and 6C provide histograms of measurements at t = 1 under the rolling-static strategy.

[00012] Figures 7A, 7B, 7C, and 7D compare different measurements under the rolling-static strategy. Figure 7D shows the percentage of learning value captured under different measurements. The percentage of learning value captured is defined as (V^rs − V^s)/(V^o − V^s).

[00013] Figures 8A and 8B show the performance of the rolling-flexible policy under different bin sizes. We use hierarchical clustering to group measurements together. We use three measurements in the computation: FOPR, FWPR, and FGPR. When the bin size is 1, the optimal value is $423.18M while the value generated by the rolling-flexible policy is $422.39M.

[00014] Figures 9A and 9B compare flexible valuation, optimal valuation, and 1-neighbor approximate valuation. Here we plot the different valuation strategies for different bin sizes. Figure 9A shows the results with learning from FOPR. Figure 9B shows the results with learning from FGPR. In the 1-neighbor approximation approach, the set T requires 12,288 simulation scenarios. The scenarios are chosen such that s_22 = 3 − s_21, s_32 = 3 − s_31, and s_23 = 3 − s_22. The total number of simulations needed by the 1-neighbor policy is 49,152, including the T-set simulations. This is 93.8 percent fewer than in optimal valuation.

[00015] Figure 10 provides 1-neighbor approximation valuation with small bins. In this 1-neighbor approximation approach, the set T contains 12,288 simulation scenarios. The scenarios are constructed such that s_22 = 3 − s_21, s_32 = 3 − s_31, and s_23 = 3 − s_22. The total number of simulations needed by the 1-neighbor policy is 49,152, including the T-set simulations. This is 93.8 percent fewer than in optimal valuation.

Summary

[00016] Embodiments herein relate to apparatus and methods for controlling equipment to recover hydrocarbons from a reservoir including constructing a collection of reservoir models wherein each model represents a realization of the reservoir and comprises a subterranean formation measurement, estimating the measurement for the model collection, and controlling a device wherein the controlling comprises the measurement estimate wherein the constructing, estimating, and/or controlling includes a rolling flexible approach and/or a nearest neighbor approach.

[00017] Some embodiments use a simulator for estimating and/or an optimizer for controlling the device. In some embodiments, the optimizer resets the measurements and operates in a rolling fashion.

[00018] In some embodiments, the controlling of flow rates includes a decision resolution. In some embodiments, the estimating includes the decision resolution, a basis-function regression, and/or a k-neighbor approach.

[00019] In some embodiments, the geophysical measurements are surface sensors, downhole sensors, temporary sensors, permanent sensors, well logs, fluid production, well tests, electromagnetic surveys, gravity surveys, nuclear surveys, tiltmeter surveys, seismic surveys, water, oil, or gas flow measurements, and/or separated or combined flow measurements.

[00020] Some embodiments also include flooding with oil, gas, water, or carbon dioxide, EOR, static or controllable downhole valves, well placement, platform type and placement, drilling, heating the formation, or geosteering.

Detailed Description

[00021] The long-term expected oil production from a well can be optimized through real-time flow rate control. Ideally, operators can dynamically adjust the flow rates by setting downhole flow control valves conditional on information about geophysical properties of the well. The valuation of flow-control valves must take into account both the optimization problem and the future measurements that will be used to guide valve settings. The optimization of flow rate can be modeled as a dynamic programming problem. However, it is impractical to solve for the optimal policy in this model due to the long time horizon in reality and the exponentially growing state space. To tackle the problem, we use several approximate approaches and demonstrate the performance of these approaches in a three-period model. We present the standard dynamic programming approach to derive the optimal policy below. Approximate policies are also discussed below, where our focus is on two approximate approaches.

[00022] We test these policies under various situations and show there is significant value in adopting approximate approaches. Furthermore, we compare these approaches under different situations and show under which conditions approximate approaches can achieve near-optimal performance. Among all approaches discussed, the rolling-flexible approach and the nearest-neighbor approach stand out for their computational efficiency and performance.

[00023] The valuation of production optimization through real-time flow control can be formulated as a dynamic programming problem. However, the numerical solution of this problem is nearly always computationally intractable because of the huge computational complexity for problems of realistic size. In order to solve this problem, we studied a set of approximate optimization policies with application to an example FCV problem whose size allowed the optimal solution to be computed for comparison with the various approximations. Among these strategies, the rolling-flexible and the 1-neighbor approximation policies are most effective with respect to our example problem. The rolling-flexible policy achieves nearly optimal results for a broad range of bin sizes with a 92 percent reduction in required simulations over the optimal policy. The 1-neighbor policy has a 93.8 percent reduction in required simulations over the optimal policy, but demonstrated acceptable accuracy only when the bin size was very small.

[00024] Other findings are summarized as follows and are provided in more detail below.

• Using smaller bins (higher decision resolution) generally, but not always, leads to higher valuation.

• In the k-neighbor policy, setting k=l usually results in the best performance.

• The 1-neighbor policy with learning from a single measurement outperforms the fixed valuation for most scenarios.

• Using more measurements results in equal or higher values. This is true for both optimal valuation and approximate valuation. The valuation is highest when all three measurements FOPR/FWPR/FGPR are taken into account.

• The 1-neighbor approach provides a lower bound on the optimal value.

[00025] In order to resolve this predicament, we use several approaches to approximately derive the value of an FCV installation in an efficient and time-manageable manner. In terms of the usage of measurements, these approaches can be divided into two groups: those using measurements and those not using measurements. The first group of approaches does not involve any learning. Approaches in this group include the wide-open policy, the static policy, and the optimal non-learning policy. The second group of approaches involves learning from measurement information, and includes the rolling-static policy, the rolling-flexible policy, the nearest-neighbor policy, and the feature-based policy. The rolling-static and rolling-flexible policies are based on their non-learning counterparts, the static and optimal non-learning policies. The nearest-neighbor and feature-based approaches are more advanced methods. While these two approaches differ in implementation, they are driven by the same motivation: instead of searching for the optimal FCV settings, π, by enumerating all possible simulation scenarios in the set H, we generate a significantly smaller set of simulation scenarios, T. We search for the optimal FCV control strategy π̂ in this smaller set T of scenarios and apply this strategy to value the FCV installation. In other words, we estimate the optimal strategy using incomplete data. The two approaches vary in the structure of the estimator. The first approach is non-parametric and is based on the k-neighbor method. In the second approach, we approximate the optimal setting by a linear combination of basis functions.

[00026] In terms of the target of approximation, there are two streams of approximate dynamic programming methods: value function approximation and policy approximation. The value function approximation usually tries to decompose the value function as a linear combination of a few basis functions, thus overcoming the curse of dimensionality. Our approximate method employs the policy approximation instead of value function approximation. The reason is that the policy approximation method yields a lower bound for simulation-based valuation and facilitates comparison among different approaches. Furthermore, our tests of the value function approximation method show that its performance is not as promising.

[00027] Two of the approaches mentioned above merit closer attention: the rolling-flexible approach and the nearest-neighbor approach. In the application of the rolling-flexible approach, we first fix the FCV settings across different periods and run the optimizer to find the best setting. We apply the best setting we found for the current period and use the simulator to generate the measurement for the next period. Given the measurement, we reset the settings by running the optimizer again. In other words, we run the optimizer in a rolling fashion. This process continues until we reach the last period. This rolling-flexible approach features the following aspects.
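As a concrete illustration, the following minimal sketch shows the rolling loop just described; optimize and simulate_measurement are hypothetical stand-ins for the optimizer (returning a plan of settings for the remaining periods) and the reservoir simulator, not part of the implementation described here:

```python
# Sketch of the rolling-flexible loop: re-optimize a full flexible plan at
# each period conditional on everything observed so far, commit only the
# current-period setting, then observe the next simulated measurement.
def rolling_flexible(n_periods, optimize, simulate_measurement):
    history = []                     # alternating settings and measurements
    for t in range(n_periods):
        plan = optimize(history, periods_remaining=n_periods - t)
        setting = plan[0]            # commit only the current-period setting
        history.append(setting)
        if t < n_periods - 1:        # no measurement after the last setting
            history.append(simulate_measurement(history))
    return history
```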

[00028] First, instead of solving the dynamic programming problem in a backward fashion, it optimizes in a forward manner. While there are forward approaches to dynamic programming in the previous literature, those approaches assume that we are fully aware of the dynamics of the state. In our approach, by contrast, we do not have to use any information about these dynamics. In the numerical part, we replace the optimizer by using the optimized results from full enumeration that had previously been computed in order to evaluate the optimal dynamic programming policy. The most widely used distance in classifiers is the Euclidean distance. However, when there is a large number of data points, finding the nearest neighbors in terms of Euclidean distance can be extremely cumbersome. In this paper, we propose a novel way to find nearest neighbors based on a specific definition of distance.

[00029] The nearest-neighbor policy approximation is essentially a classifier-based approximation. Lagoudakis and Parr use a support vector machine (SVM) as the classifier for Markov decision processes. Langford and Zadrozny describe a classifier-based algorithm for general reinforcement learning problems. The basic idea is to leverage a modern classifier to predict the value/policy for unknown states. However, none of these references explore the nearest-neighbor approach. Moreover, our nearest-neighbor approach depends on a special way of defining the distance between states.

[00030] According to this method, we rank states by first coding them as multi-digit numbers and then applying the comparison rule of multi-digit numbers. The distance between two states is defined as the difference of their indices in the resulting table. This method does not involve complex calculations, and nearest neighbors can be found very easily. Numerical study indicates that this nearest-neighbor approach provides excellent approximation in some situations. Clearly, this nearest-neighbor approximate approach can be extended to other dynamic programming problems.

[00031] An important step in solving the problem is the binning of measurements. Although measurement variables are typically continuous, we need to bin them to evaluate expectations when making decisions. Valves are adjusted based on the measurements, but presumably one would not change settings based on an infinitesimal change in the measurements. The measurement change must be large enough to motivate the decision maker to adjust valve settings. This change threshold is called the decision resolution. It is impacted in part by the measurement resolution, but also by the "inertia" against making a change. This decision resolution determines the bin size to be used in our approach. Valve-control decisions are not directly based on the absolute measurements, but on the bins the measurements fall in. Smaller bins mean that we are more confident in making valve changes based on smaller changes in the measurements. We investigate how the bin size affects the valuation under different strategies.

[00032] The rest of the application is organized as follows. First, we illustrate the backward induction approach for valuation, which is only suitable when we can afford to simulate valves for all uncertainties and control states. The value thus derived is the optimal value and serves as the benchmark for our approximate approaches. Next, we describe several approaches that derive the value approximately, including one approach that is based on basis-function regression and another that utilizes the k-neighbor approach from machine learning. Finally, we test these methods, compare their performances, and summarize the results.

Backward Induction

[00033] This section describes the standard method used for valuation, which works not only for a complete enumeration of the simulation state space but for an incomplete set as well. For the complete set, the value derived is the true optimal value and will be used as the benchmark subsequently. Understanding how the simulation results are used under this methodology may also help us to design a better optimization.

We study an (N+1)-period model with w FCVs installed in a well. Each FCV has g possible settings. The FCVs are set at t = 0, 1, ..., N−1, measurements are taken at t = 1, ..., N−1, and the final payoff of the well is realized at t = N. Note that no information about uncertainty has been disclosed when the FCVs are set at time 0. We use a vector S_t = [s_1t, ..., s_wt]^T to denote the setting decisions of all w FCVs at t, and a vector M_t to denote all measurements collected at t. Further, let H_t = {(S_0, M_1, S_1, ..., M_t)} denote the set of historical settings and measurements up to t. Decision S_t is made conditional on a specific history h_t ∈ H_t. The optimal strategy π is a mapping, π: H_t → S_t. Let U denote the uncertainty factors in simulation, including oil-water contact, aquifer strength, and the simulation grids containing samples of porosity and permeability.

A dynamic programming algorithm is developed to maximize the expected value of the well using backward induction. Let V_t(h_t) denote the expected final payoff conditional on history h_t at time t. The algorithm follows.

• At time N−1, given history h_{N−1} ∈ H_{N−1}, we search for the optimal setting S*_{N−1} such that the expected value at N−1 is maximized,

V_{N−1}(h_{N−1}) = max_{S_{N−1}} E_U[V_N(h_N) | h_{N−1}],

where h_N = (h_{N−1}, S_{N−1}) and V_N(h_N) is the final value generated by simulation for the scenario h_N at N.

• At t, given h_t ∈ H_t, we search for the optimal setting S*_t such that the expected value at t is maximized,

V_t(h_t) = max_{S_t} E_U[V_{t+1}(h_{t+1}) | h_t],

where h_{t+1} = (h_t, S_t, M_{t+1}) and the function V_{t+1}(h_{t+1}) has already been obtained from the previous step of the induction.

• Finally, at t = 0, we search for the optimal setting S_0 such that the expected value is maximized,

V_0 = max_{S_0} E_U[V_1(h_1)],

where h_1 = (S_0, M_1).
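A minimal sketch of this backward induction over a fully enumerated scenario table; payoff, settings, and measurements below are hypothetical stand-ins for the simulated final values, the admissible settings at each period, and the binned measurement distribution:

```python
# Backward-induction sketch. `payoff` maps a complete history tuple
# (S0, M1, S1, ..., S_{N-1}) to its simulated final value; `settings(h, t)`
# lists candidate settings S_t given history h; `measurements(h)` yields
# (measurement, probability) pairs for the next period given history h.
def value(h, t, N, payoff, settings, measurements):
    if t == N:                       # final payoff, read from simulation
        return payoff[h]
    best = float("-inf")
    for s in settings(h, t):         # search over settings S_t
        h_s = h + (s,)
        if t == N - 1:               # last decision: payoff realized next
            v = value(h_s, N, N, payoff, settings, measurements)
        else:                        # expectation over the next measurement
            v = sum(p * value(h_s + (m,), t + 1, N,
                              payoff, settings, measurements)
                    for m, p in measurements(h_s))
        best = max(best, v)          # keep the maximizing setting's value
    return best
```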

[00034] As we can see, the above method is a general method. It can compute the optimal control strategy and optimal value for any data set. When the data set is complete, it yields the true optimal value; when the data set is incomplete, it yields the control strategy and value for the incomplete set, which may be suboptimal for the complete set.

Approximation Policies

[00035] Computing the exact optimal value is time-consuming because simulations under all possible settings are required. We first consider some basic approximate approaches; later, we consider two advanced approximate approaches. Consistently throughout this numerical study, our test case is based on a three-period model with eight Eclipse grids, three time periods, four settings for each valve, a fixed setting for one valve, two aquifer strength samples, and three oil-water contact samples. Thus, in order to derive the exact value, we need a total of 8 × 4 × (4²)³ × 2 × 3 = 786,432 Eclipse simulations to sample each state.

Policy 1: Wide Open Policy (No Learning)

[00036] We optimize under the condition that all FCVs are wide open throughout the life of the well. We need to run 48 simulations to obtain the wide-open value, V^wo.

Policy 2: Static Policy (No Learning)

[00037] We optimize the expected payoff under the condition S_0 = S_1 = S_2, i.e., the settings are static throughout time, but can differ between valves. To derive the static policy, we need to run 48 × 64 = 3,072 simulations to fully evaluate all relevant states, or an optimizer may be used to reduce the number of evaluations. Denote the resulting static value by V^s.

Policy 3: Flexible Policy (No Learning)

[00038] Different from the static policy, where the settings remain the same throughout time, the flexible policy allows the settings to change from period to period. But the setting is fixed within a single period. To derive the flexible policy, we need to run all 786,432 simulations, or an optimizer may be used to reduce the number of evaluations. Denote the resulting flexible value by V^f.

Policy 4: Rolling-Static Policy (Learning)

[00039] We dynamically adjust the static settings in order to account for learning from measurements. At t = 0, we solve the problem by searching for the optimal setting S_0 under the condition S_0 = S_1 = S_2. This is the same optimization as in the static (no learning) policy. At t = 1, conditional on the setting of S_0 and the measurements forecast at t = 1 by the simulator, we re-optimize for the remaining two periods under the static condition S_1 = S_2. Finally, at t = 2, conditional on previous settings and measurements forecast up to t = 2, we search for the optimal setting of S_2. The number of simulations required depends on how we bin the measurements. We can derive an upper bound on the number of required simulations under the condition of no binning as 48 × 64 + 48 × 16 + 48 × 16 = 4,608 simulations.

Denote the rolling-static valuation, V^rs, by

V_0^rs = max_{S_0} E_U[V(S_0, S_1, S_2) | S_0 = S_1 = S_2], denoting the optimal setting as a_0*;

V_1^rs = max_{S_1} E_U[V(a_0*, M_1, S_1, S_2) | M_1, S_1 = S_2], denoting the optimal setting as a_1*;

V_2^rs = max_{S_2} E_U[V(a_0*, M_1, a_1*, M_2, S_2) | M_1, M_2], denoting the optimal setting as a_2*.

Policy 5: Rolling-Flexible Policy (Learning)

[00040] Here, we dynamically update the flexible policy to account for learning from future measurements. At t = 0, we solve the problem by searching for the optimal setting S_0 as in the flexible policy. At t = 1, conditional on the setting S_0 and measurements forecast at t = 1 by the simulator, we re-optimize for the remaining two periods according to the flexible policy. Finally, at t = 2, conditional on previous settings and the measurements forecast up to t = 2, we search for the optimal setting of S_2. An upper bound on the required number of simulations is 786,432, equal to simulating all possibilities in the state space. In practice, an optimizer would be used to carry out each optimization and re-optimization step, thus reducing the number of required simulations at the expense of perhaps missing the globally optimal solution at each step. Denote the rolling-flexible valuation, V^rf, by

V_0^rf = max_{S_0, S_1, S_2} E_U[V(S_0, S_1, S_2)], denoting the optimal setting as a_0*;

V_1^rf = max_{S_1, S_2} E_U[V(a_0*, M_1, S_1, S_2) | M_1], denoting the optimal setting as a_1*;

V_2^rf = max_{S_2} E_U[V(a_0*, M_1, a_1*, M_2, S_2) | M_1, M_2], denoting the optimal setting as a_2*.

[00041] The optimal policy is based on backward induction, which provides the exact solution to the valuation problem. Unfortunately, this policy is computationally impractical (the "curse of dimensionality") because it requires an enumeration over the entire state space, resulting in the expensive reservoir simulator being run over every possible uncertainty case and valve setting. In our limited example, this required 786,432 simulations, and the limitations imposed by the need to make this a practical number of simulations made this example an impractical representation of the real-world decision problem at hand. Even a modest improvement allowing 10 decision periods and 10 valve settings enlarged the number of simulation cases to over 10^23, grossly impractical from a computational point of view. However, as a limiting case, this exact solution, denoted by V^o, can be used to denote the maximum value we aim to achieve in our approximation policies.

[00042] The above approximate policies, excluding the optimal policy, can be divided into two categories: those without re-optimization and those with re-optimization. The wide-open, static, and flexible policies are in the former category, and the rolling policies (which re-optimize in each period conditional on new information) are in the latter.

Lemma 1 We have the following relationships among the different values: V^wo ≤ V^s ≤ V^f ≤ V^rf ≤ V^o and V^s ≤ V^rs ≤ V^o.

Lemma 2 The expected payoff generated by the rolling-static policy never decreases, i.e., the expected final payoff conditional on the second (resp. third) period information is no less than the payoff conditional on the first (resp. second) period information.

Proof. The proofs of Lemmas 1 and 2 follow directly from the definitions of these policies.

Two Advanced Approximation Methods

[00043] In this section, we describe two advanced approximation approaches based on the notion that, in order to estimate the FCV control strategy for all simulations, one can derive a strategy from a small set of states (simulations) and then apply this strategy to the full set of states. Let H denote the set of simulations under all possible states and let π denote the optimal strategy for adjusting the valve settings. Let T ⊆ H denote the set of simulations we have already obtained and that will be used to estimate future decisions. We derive a strategy π̂ from the set T and then approximate π by π̂. If ‖T‖ ≪ ‖H‖, we will be able to significantly reduce the number of required simulations.

[00044] Specifically, suppose we have obtained the set T of m scenarios by simulation. What we need to do is to find some strategy π̂ from the above scenarios, perhaps by using backward induction, and then use it to approximate π, from which we can approximate the optimal solution using backward induction. Assume a new (m+1)-th scenario h_{N−1,m+1} = (S_{0,m+1}, ..., M_{N−1,m+1}) has been generated, and our objective is to find the optimal setting S*_{N−1,m+1}(h_{N−1,m+1}) from our approximate strategy π̂. There are many possible settings for S_{N−1,m+1} that would need to be considered, and the conventional backward induction method requires that we find the optimal setting by enumerating all of them. In the approximate approach, we instead choose the well setting according to

S_{N−1,m+1} = f(h_{N−1,m+1}; T, π̂),

where f is the estimator function that estimates the optimal control S_{N−1,m+1} based on T and π̂. There are various ways to design the estimator f. Here we propose two different estimators, a feature-based approach and a non-parametric approach.

Feature-Based Approach

[00045] Several references provide a detailed description of the feature-based approach. In a typical dynamic programming problem, the size of the state space normally grows exponentially with the number of state variables. Known as the curse of dimensionality, this phenomenon renders dynamic programming intractable in the face of problems of practical scale. One approach to dealing with this difficulty is to generate an approximation within a parameterized class of functions or features, in a spirit similar to that of statistical regression. In particular, to approximate a function V* mapping the state space to the reals, one could design a parameterized class of functions Ṽ and then compute a parameter vector r to fit the cost-to-go function, so that Ṽ(·, r) ≈ V*(·).

[00046] The method described above is the conventional approach. Different from the conventional approach, where the cost-to-go function is approximated by a linear combination of basis functions, we approximate the decision instead. The reason is that value should be obtained from simulation rather than approximation. In other words, linear approximation is employed and f can be written as

f(·) = Σ_i r_i φ_i(·),

where each φ_i is a "basis function" and the parameters r_1, ..., r_K represent basis-function weights. Given the linear approximation scheme, we just need to simulate for certain decisions and derive the weights r_i through the least-squares method. Then, the decisions for other scenarios can be approximated by a linear combination of basis functions. Possible choices of basis functions include the polynomial, Laguerre, Hermite, Legendre, and Jacobi polynomials.
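A minimal sketch of this decision regression, assuming a hypothetical training set drawn from T: X holds scenario rows (S0, M1, S1, M2) and y holds the optimal settings found for them; the basis shown (low-order polynomials in M1) is only one possible choice:

```python
import numpy as np

def fit_policy(X, y, basis):
    # Design matrix: one column per basis function evaluated on the scenarios.
    Phi = np.column_stack([phi(X) for phi in basis])
    r, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares weights
    # The fitted policy is a linear combination of basis functions; callers
    # round the continuous output to the nearest admissible valve setting.
    return lambda Xq: np.column_stack([phi(Xq) for phi in basis]) @ r

# Example basis: low-order polynomials in the first-period measurement M1
# (column 1 of the hypothetical scenario matrix (S0, M1, S1, M2)).
basis = [lambda X: np.ones(len(X)),
         lambda X: X[:, 1],          # M1
         lambda X: X[:, 1] ** 2]     # M1 squared
```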

The Non-Parametric Approach

[00047] This method requires no model to be fit. Given a query scenario h_{N−1,m+1}, we approximate S_{N−1,m+1} from the optimal decisions made on the k nearest scenarios in T. To begin with, let us focus on decisions at the last period, t = N−1. For a given history h_{N−1} = (S_0, M_1, S_1, M_2), there are 16 possible settings for the two active valves. This approximation approach is schematically illustrated in Figure 2. Each point in the figure represents a distinct scenario. The red points mark the optimal decisions made for scenarios in T. If a point falls into a square, it means that the optimal setting S is given by the horizontal and vertical axes of the square. The blue points correspond to approximate solutions that were identified based on the optimal solutions of their k nearest neighbors in T. In other words, we know the history for each red point and its optimal decision S*_{N−1} and, based on what we know about the red points, we need to develop a strategy to value all the blue points. For the blue points, instead of testing all 16 possible settings, we run the simulation for the chosen setting directly. Now the number of simulations required is about 1/16 of the original enumeration method. A natural question is how to define the distance between two scenarios h_{N−1} and h'_{N−1}. Such details are discussed in more detail below.

[00048] In the non-parametric approach, the mapping f is treated non-parametrically. Here we focus on the local regression method, where f(S_0, M_1, S_1, M_2) is fitted by using those observations close to the target point (S_0, M_1, S_1, M_2). This method, in a general form, is as follows:

f(x_0) = Σ_i w_i S*_{2,i},

where S*_{2,i} indicates the optimal setting for the i-th point in T, and w_i is the weight of that setting. The weight is determined by a kernel method; i.e., for points x_0 = (S_0, M_1, S_1, M_2) and x_i = (S_{0,i}, M_{1,i}, S_{1,i}, M_{2,i}), the kernel is

K(x_0, x_i) = D(|x_0 − x_i|), (6)

where |x_0 − x_i| is the distance between the two points and D(·) is a function of the distance. The weights are then defined by

w_i = K(x_0, x_i) / Σ_j K(x_0, x_j).
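A minimal sketch of this local, kernel-weighted estimate, with hypothetical arrays: X holds the points of T, S2_opt their optimal settings, and D an example kernel; rounding the continuous output to an admissible setting is left to the caller:

```python
import numpy as np

def local_estimate(x0, X, S2_opt, D=lambda d: np.exp(-d)):
    d = np.linalg.norm(X - x0, axis=1)  # |x0 - xi| for every scenario in T
    k = D(d)                            # kernel values K(x0, xi), cf. eq. (6)
    w = k / k.sum()                     # normalized weights w_i
    return w @ S2_opt                   # weighted combination of settings
```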

Cross Validation

[00049] In the description of both methods, we take some parameters as exogenous, e.g., the set of the basis functions and the number of neighbors used. In a robust algorithm, instead of using exogenous parameters, we should fit those parameters to the model. Further, given a small set of simulation results, we would like to estimate how accurately our method can recover the optimal policy. The simplest and most widely used approach to addressing these two issues is cross-validation.

[00050] Ideally, if we have enough data, we would set aside a validation set and use it to assess the performance of the valuation model. In K-fold cross-validation, we split the training set T into K roughly equal-sized parts. For the k-th part, we fit the model to the other K − 1 parts of the data and calculate the prediction error of the fitted model when predicting the k-th part of the data. We do this for k = 1, 2, ..., K and combine the K estimates of the prediction error. We choose the weights such that the prediction error is minimized. Please refer to the literature for a detailed description of cross-validation.
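A minimal K-fold cross-validation sketch, with hypothetical fit and error callables standing in for the estimator and its prediction-error measure:

```python
import numpy as np

def cv_error(X, y, fit, error, K=5, seed=0):
    # Shuffle indices once, split into K roughly equal folds.
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, K)
    errs = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        model = fit(X[train], y[train])         # fit on K-1 folds
        errs.append(error(model, X[test], y[test]))  # score on held-out fold
    return np.mean(errs)   # average prediction error across the K folds
```

Parameters such as the number of neighbors k or the basis set are then chosen to minimize this estimated error.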

Numerical Results

Data

[00051] The simulation data set is generated by a simplified three-period model. We use eight Eclipse grids, three time periods, four settings for each FCV, a fixed setting for one specific FCV after the first period, two aquifer strength samples, and three oil-water contact samples. The data set consists of a complete enumeration and valuation of the state space, namely 8 × 4 × (4²)³ × 2 × 3 = 786,432 Eclipse simulations, with 62 data entries in each scenario. Among the entries in each scenario, one element is the scenario index, three elements represent the simulation environment, seven elements represent the control, and the remaining are the simulation measurements. The measurements are taken at three dates after well operation starts: 600 days, 1200 days, and 2400 days, while the FCVs are set at time 0, 600 days, and 1200 days. Note that at time 0, no information has been disclosed when the valves are set. For notational convenience, we use t ∈ {0, 1, 2, 3} to represent 0, 600, 1200, and 2400 days after operation starts. Valves are set at t = 0, 1, 2 immediately after measurements are made, except for t = 0. The i-th (i = 1, 2, 3) valve at time t has four possible settings, s_it ∈ {0, 1, 2, 3}, where 0 means closed, 3 means fully open, and 1 and 2 are intermediate states. To reduce the state space, we have imposed the simplification that s_11 = s_12 = s_13, i.e., once valve 1 is set in period 1, it will remain in that setting at all later times. We use a vector S_t (t = 0, 1, 2) to denote the aggregate setting of all three valves at t, and a vector M_t (t = 1, 2, 3) to denote all measurements taken at t.

Methodology

[00052] We employ the following valuation strategies that were initially described above: the wide-open policy, the static policy, the flexible policy, the rolling-static policy, the rolling-flexible policy, the optimal dynamic policy, the k-nearest-neighbor policy, and the feature-based policy. The difference between the valuation achieved with the optimal dynamic policy (a learning-based approach) and the flexible (non-learning) policy represents the value of learning. The approximate learning-based approaches are presented here as more practical proxies for the optimal dynamic policy, with the goal of demonstrating that an approximate policy can achieve valuation results close to the optimal value.

Measurement Binning

[00053] Although measurement values typically belong to the class of real numbers, we discretize each measurement by aggregating similar measurement values into bins such that all measurements in the same bin are considered to be equal. We then compute the valuation based on these binned measurements.

[00054] Another aspect of binning is the connection between the number of states in a bin and the remaining uncertainty at that juncture of the decision tree. If a particular bin contains only one measurement at t = t*, then the sequence of decisions and measurements for t < t* have completely resolved all uncertainty for that state for all t > t*. This complete resolution of uncertainty based on a limited set of measurements is artificial in the sense that it is only possible because of the finite set of states being used to represent all uncertainty in the problem.

[00055] Here, we consider two approaches for doing this binning. The simplest approach is to divide the space of measurements into equal-sized intervals and then assign all measurements within each interval to the same bin. A disadvantage of this approach is that when measurements possess a natural clustering pattern, the measurements composing a cluster may be artificially divided into different bins, even if the bin size is large enough to accommodate the whole cluster within a single bin. An alternative approach is to perform cluster analysis, in which the measurement values are divided into natural clusters with respect to proximity and the maximum range of measurement values within each bin. When decisions are made based on multiple measurements, cluster analysis is done on each measurement separately. We use hierarchical clustering analysis to bin the measurements according to the given decision resolution. Specifically, we do cluster analysis on the measurements and adjust the clusters until the size of the biggest cluster is smaller than the bin size.
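One possible realization of this clustering-based binning is sketched below, using complete-linkage hierarchical clustering from SciPy, which bounds the spread of each bin by the decision resolution; the exact clustering procedure used in the study is not specified here:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def bin_measurements(values, bin_size):
    # Complete-linkage hierarchical clustering on the 1-D measurement values.
    Z = linkage(np.asarray(values, float).reshape(-1, 1), method="complete")
    # With complete linkage, cutting the dendrogram at bin_size bounds the
    # within-bin range (cluster diameter) by the decision resolution.
    return fcluster(Z, t=bin_size, criterion="distance")

print(bin_measurements([10, 15, 17], bin_size=4))  # two bins: {10} and {15, 17}
```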

[00056] We demonstrate through the following counter-example that using smaller bins does not always lead to higher valuation. We consider the valuation of a single valve based on the three scenarios shown in Figure 3. There are three possible measurement values, {10, 15, 17}. The valve has two possible settings, S ∈ {1, 2}. For each setting, the payoff is shown at the end of the branch. Consider two possible bin sizes, 4 and 6. If the bin size is 4, then the three scenarios can be grouped in terms of the measurement as {10} and {15, 17}. The optimal setting is 1 for both {10} and {15, 17}. Taking the three scenarios as being equally likely, the expected payoff from these three scenarios is 700/3. If the bin size is 6, then the scenarios can be grouped as {10, 15} and {17}. The optimal setting is 2 for {10, 15} and the optimal setting is 1 for {17}. The expected payoff is 800/3. Hence the payoff under bin size 6 is higher than the payoff under bin size 4. It is easy to see that other groupings are possible for the above two bin sizes in this example, and these lead to different payoffs.
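The payoff values themselves appear only in Figure 3; the sketch below verifies the grouping arithmetic with a hypothetical payoff table chosen to be consistent with the expected values quoted above:

```python
# Hypothetical payoffs (the actual ones are in Figure 3): payoff[m][s] is the
# payoff for measurement m under valve setting s, chosen so the two binnings
# reproduce the quoted expected values of 700/3 and 800/3.
payoff = {10: {1: 100, 2: 100},
          15: {1: 200, 2: 300},
          17: {1: 400, 2: 100}}

def expected_payoff(groups):
    # Within each bin, a single setting must be chosen for all member
    # scenarios; pick the setting maximizing the bin's total payoff.
    total = sum(max(sum(payoff[m][s] for m in g) for s in (1, 2))
                for g in groups)
    return total / 3  # three equally likely scenarios

print(expected_payoff([[10], [15, 17]]))  # bin size 4 -> 700/3 ~ 233.3
print(expected_payoff([[10, 15], [17]]))  # bin size 6 -> 800/3 ~ 266.7
```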

[00057] Attention to the binning issue is important for achieving consistent valuation. Bins essentially specify a partition H = H_1 ∪ H_2 ∪ ... ∪ H_m of the state space. As indicated by the counter-example, a new partition H = H'_1 ∪ H'_2 ∪ ... ∪ H'_n with n > m does not necessarily correspond to higher valuation. However, if the new partition ∪_{i=1}^n H'_i is a refinement of ∪_{j=1}^m H_j (i.e., every element of {H'_i}_{i=1}^n is a subset of some element in {H_j}_{j=1}^m), then it does lead to higher value. The use of appropriate clustering algorithms that lump together measurements with a proximity priority should serve to preserve this refinement condition, thus leading to more consistent valuation.

Advanced Approximation Methods

[00058] In both the k-neighbor and the feature-based valuation, we need to choose simulation scenarios to construct the set T. We then derive a strategy π̂ based on T. The construction of T is a critical step, since a proper choice of T can result in better approximation. To derive the optimal value, we still need to generate some (but not all) scenarios out of H. Specifically, for a given history (S_0, M_1, S_1, M_2), there are 16 possible settings for S_2. To compute the optimal value in the conventional approach, we must obtain all 16 scenarios corresponding to the 16 different settings. In the approximation approach, we just pick one setting S'_2 and run a single simulation (S_0, M_1, S_1, M_2, S'_2). How S'_2 is chosen is based on what we know about (S_0, M_1, S_1, M_2), T, and π̂. The number of scenarios we need is ‖T‖ + ‖H − T‖/16. Also note that, by definition, the approximate value is always lower than the optimal value and serves as a lower bound. The k-neighbor algorithm is outlined in Table 1.

[00059] The critical issue in the approximation approach is how to define the "distance" among the simulation scenarios. We first arrange the simulation scenarios in an ordered table according to the following rule. A scenario (S_0, M_1, S_1, M_2, S_2) is treated like a "multi-digit number," with S_0 being the first digit and S_2 being the last digit. We compare another scenario (S'_0, M'_1, S'_1, M'_2, S'_2) to (S_0, M_1, S_1, M_2, S_2) in the spirit of number comparison: if S_0 > S'_0, then we say (S_0, M_1, S_1, M_2, S_2) > (S'_0, M'_1, S'_1, M'_2, S'_2) and insert (S'_0, M'_1, S'_1, M'_2, S'_2) before (S_0, M_1, S_1, M_2, S_2); if S_0 < S'_0, we do the opposite. If S_0 = S'_0, we move forward and compare the next digits, M_1 and M'_1. This procedure is repeated until the ordering relationship is determined. After the table is obtained, the "distance" between two scenarios is then defined as the difference between their positions in the table. This definition of "distance" assigns more weight to early settings and measurements. A natural question is, when we face a decision-making problem in later periods, how can we use scenarios with similar elements in early periods to estimate the setting? We demonstrate below that not only does this strategy work well, but there is also a sound explanation behind it.
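A minimal sketch of this ordering and index-based distance, with a hypothetical optimal_setting map from scenarios in T to their optimal settings; sorting tuples lexicographically reproduces the multi-digit comparison rule:

```python
import bisect

def build_table(scenarios):
    # Scenarios are tuples (S0, M1, S1, M2, ...); lexicographic sorting is
    # exactly the "multi-digit number" comparison described above.
    table = sorted(scenarios)
    index = {h: i for i, h in enumerate(table)}   # position = table index
    return table, index

def one_neighbor_setting(query, table, optimal_setting):
    # Locate where the query would sit in the ordered table; its 1-nearest
    # neighbor by index distance is the stored scenario adjacent to that spot.
    pos = min(bisect.bisect_left(table, query), len(table) - 1)
    return optimal_setting[table[pos]]            # setting of nearest scenario
```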

Results

[00060] The valuation results of the seven policies are summarized in Table 2. The numbering is in order of increasing valuation, with the first three policies being non-learning policies of increasing control complexity, and the last four policies benefiting from learning from the three co-mingled measurements of FOPR, FGPR, and FWPR. Note that these latter four policies may all be thought of as providing approximations of the optimal learning value, with the approximation complexity (number of required simulations) increasing with the policy number. In this section, we describe the results of the latter four policies in more detail.

Rolling-Static Policy

[00061] While the optimal policy requires that all possible model states be simulated in order to perform the valuation using the backward-induction algorithm, the rolling-static policy requires only forward optimization with a static forward model of the future valve states. This greatly reduces the number of required simulations, in this case to at most 4,608 simulations. The number of simulations takes its maximal value when the state space is exhaustively searched for the optimal value, but further savings can be achieved when an optimizer is used to seek the optimal value using fewer simulations.

[00062] Figure 4 shows the performance of the rolling-static policy on each of the 48 prior models as a progression over the first and second period time steps. Note that the optimal value is achieved on the first time step on many of the models. For the remaining models, the value at each step improves monotonically with successive time steps, consistent with Lemma 2.

[00063] The performance of the rolling-static policy versus bin size, when learning from the measurements FOPR, FGPR, and FWPR, is illustrated in Figure 5. For comparison, valuation curves are provided for the static and optimal policies. Note that the rolling-static valuation generally, but not strictly, increases with decreasing bin size. As an approximation of the optimal policy, the rolling-static policy recovers between about 50% and 80% of the value of the optimal policy, depending on bin size, with a better level of approximation provided by smaller bin sizes.

[00064] So far, we have examined the validity of the rolling-static approximation versus bin size. Another aspect of valuation is to determine which measurements add the most value to the FCV installation. Figure 6 shows the histograms of the reservoir simulator output parameters FOPR, FWPR, and FGPR under the rolling-static policy at t = 1 when there is no binning. The prior uncertainty in the model is described by the 48 reservoir model configurations discussed previously. Under the rolling-static policy at t = 1, the optimum S_0 has already been set, resulting in 48 possible measurements at t = 1. Measurements that vary widely at early times with respect to the prior model uncertainty are better at resolving model uncertainty, because each measurement bin will contain only a few models, meaning that there is less uncertainty in the next step of the algorithm. Conversely, measurements whose values cluster tightly into a few small bins have resolved little model uncertainty. Since the distribution of FGPR, shown in Figure 6, is less concentrated compared to FOPR and FWPR, it should contribute more value to the FCV installation, and thus is the measurement upon which to focus.

[00065] The valuation of the individual measurements using the rolling-static policy is illustrated in Figure 7, along with valuations for the flexible, rolling-flexible, and optimal policies. As anticipated, the FGPR measurement achieves the highest valuation under the rolling-static policy. The rolling-static policy also predicts that FOPR provides no additional value above that predicted by the non-learning flexible policy, and provides an intermediate valuation for FWPR. However, an examination of the optimal valuation curves for these three measurements shows that the measurement valuation provided by the rolling-static policy is spurious, even when considered in a relative sense. With the optimal policy (the exact solution), all three measurements add about $4.5 × 10^6 to the non-learning valuation. This indicates that the rolling-static policy cannot be trusted to provide accurate measurement valuation, even in a relative sense.

Rolling-Flexible Policy

[00066] The rolling-flexible policy is an extension of the rolling-static policy that allows the optimizer a bit more freedom in choosing the best valve-adjustment strategy based on learning. While in the rolling-static policy the optimizer holds all of the future valve states equal to the valve states chosen for the current time step, the rolling-flexible policy allows these future valve states to be freely adjusted to achieve the best possible valuation. The resulting valuation for single measurements versus bin size is plotted in Figure 7. The rolling-flexible policy surmounts all of the deficiencies identified above in the rolling-static policy, and captures most of the value of the optimal policy. The rolling-flexible valuation for three combined measurements versus bin size is further explored in Figure 8, where it is clear that this policy captures most of the value of the optimal policy over a broad range of bin sizes.

[00067] The rolling-flexible policy is clearly superior to the rolling-static policy in all but one aspect, namely, that it requires many more simulations. In the worst-case scenario, in which the optimization is done using full enumeration of the state space, the rolling-flexible policy requires full enumeration of the entire state space (786,432 simulations), while the rolling-static policy enumerates a reduced state space (4,608 simulations). In practice, one would use an optimizer that explores the state space more efficiently, and thus the actual number of simulations incurred during optimization would be much smaller. However, this reduction is achieved with the possible consequence of finding a suboptimal solution.

[00068] An alternative to the rolling-flexible policy that reduces the state space to be explored during optimization is what we call a rolling-flexible-k policy. In this policy, only valve states up to k steps in the future are allowed to be flexible during optimization. This is a generalization that encompasses both the rolling-static and rolling-flexible policies. The rolling-static policy is equivalent to a rolling-flexible-0 policy, because the valve states in future steps are not flexible and are set equal to the states in the current time step. The rolling-flexible policy is equivalent to a rolling-flexible-k policy with k spanning all remaining periods, because the valve states in all future steps are allowed to be flexible. Although no valuation results were produced in this study for these rolling-flexible-k policies, we have examined the reduced size of the resulting state space. A rolling-flexible-1 policy requires 62,208 simulations for full enumeration, a 92% reduction in state-space size. This reduction grows exponentially with the number of periods in the problem.

1-Neighbor Approximation Policy

[00069] Our numerical tests indicate that setting k = 1 usually leads to the best performance in the k-neighbor approach. Figure 9 plots the performance of different valuation strategies under different bin sizes, with learning from FOPR and FGPR, respectively. The 1-neighbor approximation policy required 12,288 simulations to be run to construct the set T, and a total of 49,152 simulations to complete the optimization. This is a reduction of 93.8% compared to the 786,432 simulations required by the optimal policy. The flexible policy value does not depend on bin size by definition and is a constant $418.3 × 10^6. Consistent with the discussion above, the optimal value is generally monotonically increasing with respect to smaller bin sizes. For both panels, the best performance of the optimal/approximation approach is achieved at the smallest bin size considered, where the optimal values are $422.6 × 10^6 and $422.7 × 10^6, respectively, and the 1-neighbor approximate values are $420.2 × 10^6 and $421.2 × 10^6.

[00070] A comparison of these 1-neighbor approximation values (Figure 9) with the rolling-flexible valuations in Figure 7 for FOPR and FGPR shows that the rolling-flexible policy significantly outperforms the 1-neighbor policy in the quality of the valuation approximation, while the required number of simulations is nearly the same. The quality of the 1-neighbor approximation for small bin sizes is illustrated in Figure 10, where the accuracy of the approximation is seen to improve significantly for very small bin sizes. This is a consequence of the high degree of clustering in the measurements. Table 3 shows a portion of the complete measurement table organized in the "multi-digit" comparison order described above. The optimal setting S_2 (the last two columns) displays a significant clustering structure. Clustering is not obvious for some scenarios, but for a majority of measurements, clustering is strong. The 1-neighbor approach exploits this clustering property to achieve near-optimal performance, but only for small bin sizes, where the rolling-flexible policy also achieves good performance.

[00071] Overall, these results support a recommendation of the rolling-flexible policy in this example. In the case of very small bin size, the 1-neighbor policy becomes competitive.