Title:
RAPID AND REALISTIC THREE-DIMENSIONAL STRATIGRAPHIC MODEL GENERATOR CONDITIONED ON REFERENCE WELL LOG DATA
Document Type and Number:
WIPO Patent Application WO/2023/044146
Kind Code:
A1
Abstract:
Platforms and methods for generating 3D stratigraphic models are provided, wherein the platform utilizes a Motion and Content decomposed GAN that is trained using actual data or generated data. The generation of the stratigraphic models can also include using a Wasserstein training loss for improved model diversity.

Inventors:
TILKE PETER (US)
ETCHEBES MARIE (FR)
LEFRANC MARIE EMELINE CECILE (NO)
ZHU LINGCHEN (US)
Application Number:
PCT/US2022/044087
Publication Date:
March 23, 2023
Filing Date:
September 20, 2022
Assignee:
SCHLUMBERGER TECHNOLOGY CORP (US)
SCHLUMBERGER CA LTD (CA)
SERVICES PETROLIERS SCHLUMBERGER (FR)
SCHLUMBERGER TECHNOLOGY BV (NL)
International Classes:
G06N3/08; G06N3/04; G06T17/00
Foreign References:
US20200204822A12020-06-25
US10422924B22019-09-24
KR102279772B12021-07-19
Other References:
KAHEMBWE EMMANUEL; RAMAMOORTHY SUBRAMANIAN: "Lower dimensional kernels for video discriminators", NEURAL NETWORKS., ELSEVIER SCIENCE PUBLISHERS, BARKING., GB, vol. 132, 26 September 2020 (2020-09-26), GB , pages 506 - 520, XP086341374, ISSN: 0893-6080, DOI: 10.1016/j.neunet.2020.09.016
YUSHCHENKO VLADYSLAV; ARASLANOV NIKITA; ROTH STEFAN: "Markov Decision Process for Video Generation", 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOP (ICCVW), IEEE, 27 October 2019 (2019-10-27), pages 1523 - 1532, XP033732701, DOI: 10.1109/ICCVW.2019.00190
Attorney, Agent or Firm:
FLYNN, Michael L. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A method of training a motion and content decomposed generative adversarial network comprising: training a video discriminator and an image discriminator with a neural network in an image generator using at least two batches of data, wherein the data batch sent to train the image discriminator comprises randomly selected data from one of actual data and generated data; freezing neural networks for the video discriminator and the image discriminator; and training, using at least one batch of data, the image generator with the neural networks for the video discriminator and the image discriminator.

2. The method according to claim 1, wherein the data is actual data from a field location.

3. The method according to claim 1, wherein the data is generated data from a computer.

4. The method according to claim 1, wherein each data batch sent to train the video discriminator contains T consecutive frames from the data.

5. The method according to claim 1, wherein the generative adversarial network is two neural networks.

6. The method according to claim 5, wherein at least one of the two neural networks is a generator network.

7. The method according to claim 5, wherein at least one of the two neural networks is a discriminator network.

8. The method according to claim 1, wherein the data contains high-resolution three-dimensional stratigraphic modeling data.

9. The method according to claim 1, wherein artificial intelligence is used in the training.

10. The method according to claim 1, wherein the freezing of the neural networks for the video discriminator and the image discriminator includes fixing a number of weights in computations.

11. A computer program product, comprising a computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method for generating a report, said method comprising: training a video discriminator and an image discriminator with a neural network in an image generator using at least two batches of data, wherein the data batch sent to train the image discriminator comprises randomly selected data from actual data or generated data; freezing neural networks for the video discriminator and the image discriminator; and training, using at least one batch of data, the image generator with the neural networks for the video discriminator and the image discriminator.

12. The computer program product according to claim 11, wherein the computer is one of a server, a personal computer, a cellular telephone, and a cloud based computing arrangement.

Description:
PATENT

RAPID AND REALISTIC THREE-DIMENSIONAL STRATIGRAPHIC MODEL

GENERATOR CONDITIONED ON REFERENCE WELL LOG DATA

CROSS-REFERENCE TO RELATED APPLICATIONS

[001] The present application claims priority to United States Provisional Application 63/246097, filed September 20, 2021, the entirety of which is incorporated by reference.

FIELD OF THE DISCLOSURE

[002] The present disclosure generally relates to a synthetic data platform ("SDP"), the use of synthetic data generated from the SDP to train artificial intelligence and machine learning ("AIML") solutions, and the building of stratigraphic models in an automated fashion using trained AIML solutions. In one or more embodiments, the SDP can be used to create models to train a GAN-based solution, and the GAN-based solution can be used to efficiently create models conditioned to observed logs.

BACKGROUND INFORMATION

[003] Modeling stratigraphic processes in three dimensions is the best way to avoid generating unrealistic representations of the subsurface, which lead to inaccurate forecasting of reservoir production; carbon capture, utilization and storage capacity; or fluid migration for geothermal energy production. However, the generation of accurate three-dimensional models is time intensive and often does not take account of observed well measurements during model creation.

[004] Synthetic data generated using an SDP can be used to build robust AIML solutions that fit the business needs of subsurface stratigraphic interpretation use cases. AIML solutions are very useful in the training of a generator and discriminator for a generative adversarial network ("GAN"). Furthermore, a GAN solution of the platform that is lightweight and capable of video generation is developed to create robust three-dimensional stratigraphic models.

[005] The disclosed platforms and use of synthetic data overcome the difficulties associated with building stratigraphic models and training AIML solutions, allowing for accurate forecasting of reservoir production; carbon capture, utilization and storage capacity; and fluid migration for geothermal energy production.

SUMMARY

[006] So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized below, may be had by reference to embodiments, some of which are illustrated in the drawings. It is to be noted that the drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments without specific recitation. Accordingly, the following summary provides just a few aspects of the description and should not be used to limit the described embodiments to a single concept.

[007] In one example embodiment, a method of training a motion and content decomposed generative adversarial network is disclosed. The method may comprise training a video discriminator and an image discriminator with a neural network in an image generator using at least two batches of data, wherein the data batch sent to train the image discriminator comprises randomly selected data from one of actual data and generated data. The method may further comprise freezing neural networks for the video discriminator and the image discriminator. The method may also further comprise training, using at least one batch of data, the image generator with the neural networks for the video discriminator and the image discriminator.

[008] In another example embodiment, a computer program product is disclosed. The product may comprise a computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method for generating a report, said method comprising: training a video discriminator and an image discriminator with a neural network in an image generator using at least two batches of data, wherein the data batch sent to train the image discriminator comprises randomly selected data from actual data or generated data. The method may further comprise freezing neural networks for the video discriminator and the image discriminator and training, using at least one batch of data, the image generator with the neural networks for the video discriminator and the image discriminator.

BRIEF DESCRIPTION OF THE FIGURES

[009] Certain embodiments, features, aspects, and advantages of the disclosure will hereafter be described with reference to the accompanying drawings, wherein like reference numerals denote like elements. It should be understood that the accompanying figures illustrate the various implementations described herein and are not meant to limit the scope of various technologies described herein.

[010] FIG. 1 depicts a schematic of a Motion and Content decomposed GAN.

[011] FIG. 2 depicts an unconditional high-resolution three dimensional stratigraphic model example, according to an embodiment of the disclosure.

[012] FIG. 3 depicts reference depofacies logs for conditioning.

[013] FIG. 4 depicts depofacies logs sampled from a generated constrained model.

[014] FIG. 5 depicts a conditioned high-resolution 3D stratigraphic model example whose sampled logs correspond to FIG. 4.

DETAILED DESCRIPTION

[015] In the following description, numerous details are set forth to provide an understanding of some embodiments of the present disclosure. It is to be understood that the following disclosure provides many different embodiments, or examples, for implementing different features of various embodiments. Specific examples of components and arrangements are described below to simplify the disclosure. These are, of course, merely examples and are not intended to be limiting. However, it will be understood by those of ordinary skill in the art that the system and/or methodology may be practiced without these details and that numerous variations or modifications from the described embodiments are possible. This description is not to be taken in a limiting sense, but rather made merely for the purpose of describing general principles of the implementations. The scope of the described implementations should be ascertained with reference to the issued claims.

[016] As used herein, the terms "connect", "connection", "connected", "in connection with", and "connecting" are used to mean "in direct connection with" or "in connection with via one or more elements"; and the term "set" is used to mean "one element" or "more than one element". Further, the terms "couple", "coupling", "coupled", "coupled together", and "coupled with" are used to mean "directly coupled together" or "coupled together via one or more elements". As used herein, the terms "up" and "down"; "upper" and "lower"; "top" and "bottom"; and other like terms indicating relative positions to a given point or element are utilized to more clearly describe some elements. Commonly, these terms relate to a reference point at the surface from which drilling operations are initiated as being the top point and the total depth being the lowest point, wherein the well (e.g., wellbore, borehole) is vertical, horizontal or slanted relative to the surface.

[017] Data is crucial for implementing, training, and applying Artificial Intelligence and Machine Learning (AIML) solutions for subsurface interpretation. There are two types of data: real and synthetic. While real data is often desirable, it is generally sparse, unlabeled, and biased. To overcome these challenges of obtaining appropriate data, a synthetic data platform ("SDP") is disclosed; the SDP is a geology "process mimicking" platform.

[018] The SDP uses simulation-based data synthesis. Synthetic data is not real data, but it approximates the real data with respect to its physical attributes, and as such has many advantages over any AIML approach based explicitly or implicitly on real data. Synthetically generated data is automatically labeled. The full spectrum of physically plausible states can be modeled, thereby managing the class bias that is ubiquitous in real data. Vast amounts of synthetic data can be generated using parallelized cloud computing at a cost that is much lower than acquiring real data. There are essentially no privacy or security concerns with this type of data. Synthetic data can be relatively simplistic, but this can generally be addressed by increasing the sophistication of the modeling engine as required. The synthetic data can also be created to include noise and bias matching real data, as required for accurate training.

[019] The SDP can be used to create a library of 3D geological models that can be used in a variety of novel AIML geological workflows. For example, the 3D geological models can be used with an AIML-based querying application to rapidly search the library for analogs that can explain observations or provide insight into planned measurements. The SDP can also be used to train an AIML-based solution to generate, in an automated manner, a plurality of physically valid three-dimensional geological models conditioned to observations ("Model Generator"). Any now known or future known SDP can be used. One skilled in the art, with the aid of this disclosure, would know what SDP can be used.

[020] A stratigraphic forward modeler (SFM), which can be physics based, can be used to generate 3D stratigraphic models. The SFM can be part of the SDP or a stand-alone SFM, and can be a physics-based SFM or any other SFM that is now known or future known. The SFM can be used to generate a library of 3D stratigraphic models with the help of high-performance computing (HPC). A 3D stratigraphic model can be regarded as a video stacking L frames of 2D images, where each image represents a snapshot of the deposited rock facies properties from a certain geologic time step [3]. These models can be built by a physics-based SFM, layer by layer, like 3D printing. Although the SFM can create accurate models based on physical laws, it may take hours to create one single high-resolution 3D stratigraphic model (or multiple models with HPC). Moreover, observed well measurements are not taken into account during the model creation process.

[021] To accelerate the model creation process and fully utilize observed well measurements, an AIML technique called a generative adversarial network (GAN), whose training set is created by the SFM using HPC, can be used. The purpose of a GAN is to learn the underlying distribution of the training data X and embed it into a latent vector z in an unsupervised manner. A GAN consists of two neural networks, a generator G and a discriminator D. The objective of G is to generate data from z that looks like training data sampled from a distribution p_X, while the objective of D is to distinguish between the training data X ~ p_X and the generated data G(z). G and D are trained by solving the optimization problem in Equation (1) until they reach a Nash equilibrium.
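For reference, Equation (1) presumably follows the standard GAN minimax objective:

```latex
\min_{G}\,\max_{D}\;
  \mathbb{E}_{X \sim p_X}\!\left[\log D(X)\right]
  \;+\;
  \mathbb{E}_{z}\!\left[\log\!\left(1 - D\!\left(G(z)\right)\right)\right].
  \tag{1}
```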

[022] A GAN enables the generation of novel data samples resembling the training data very quickly once both its generator and discriminator have been well trained. The GAN was originally proposed to generate 2D images. The challenge of extending this approach to 3D was demonstrated in the prior art, but directly creating large models using a 3D GAN proved infeasible. However, because a 3D volume of geological sediment is a sequential composition of 2D deposition and erosion, the problem of building the 3D model can be treated as analogous to 3D printing or additive manufacturing. To support this concept, a stratigraphic forward modeler that constructs the 3D model in this manner is used to generate training data. These training data are analogous to videos of the geology evolving in geological time; therefore, a more lightweight but advanced GAN solution called the Motion and Content decomposed GAN (MoCoGAN) is used. FIG. 1 illustrates the overall framework of MoCoGAN.

[023] The MoCoGAN generator G_I converts a sequence of latent vectors [z^(1), ..., z^(L)] to a video using a 2D convolutional neural network (CNN), where the value of L determines the length of the video. As shown in Equation (2), MoCoGAN decomposes each latent vector z^(l) into two components: a fixed content component z_C that models object appearance in the video, and a motion component z_M^(l) that models the moving trajectory of the object. To generate geological videos, the content latent vector z_C models the motion-independent part of the stratigraphic models, such as the fluvial channel, while the motion latent vectors model the changes of fluvial channels over time, such as changes of channel curvature and the creation and deposition of oxbow lakes and crevasse splays. We can sample z_C from a Gaussian distribution, where l_C is the content latent vector length. The motion trajectory path [z_M^(1), ..., z_M^(L)] consists of the sequential outputs of a recurrent neural network cell, such as a Gated Recurrent Unit (GRU) or Long Short-Term Memory (LSTM), fed a series of independent and identically distributed (i.i.d.) Gaussian vectors at each timestep, where l_M is the motion latent vector length.

[024] The MoCoGAN has two discriminators, a video discriminator D_V and an image discriminator D_I. D_V employs a 3D CNN and is trained to determine whether a video it sees is a real video from the training set X ~ p_X or is generated by the generator G_I. On the other hand, D_I employs a 2D CNN and is trained to criticize whether any image frame is sampled from X or from a generated video X̃.
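As an illustrative sketch only (the dimensions, module choices, and use of a GRU cell are assumptions rather than the disclosed implementation), the content and motion latent sequence could be assembled as follows:

```python
import torch
import torch.nn as nn

class MotionContentLatents(nn.Module):
    """Builds z^(l) = [z_C, z_M^(l)]: a fixed content vector plus a GRU-driven motion path."""

    def __init__(self, content_dim=50, motion_dim=10):
        super().__init__()
        self.content_dim = content_dim
        self.motion_dim = motion_dim
        # The GRU cell turns i.i.d. Gaussian inputs into a temporally correlated motion trajectory.
        self.gru = nn.GRUCell(motion_dim, motion_dim)

    def forward(self, num_frames):
        z_c = torch.randn(self.content_dim)      # content vector z_C, shared by all frames
        h = torch.zeros(self.motion_dim)         # initial GRU hidden state
        latents = []
        for _ in range(num_frames):
            eps = torch.randn(self.motion_dim)   # i.i.d. Gaussian input at each timestep
            h = self.gru(eps.unsqueeze(0), h.unsqueeze(0)).squeeze(0)
            latents.append(torch.cat([z_c, h]))  # z^(l) = [z_C, z_M^(l)]
        return torch.stack(latents)              # shape (L, l_C + l_M)

# Usage: one latent vector per frame, fed frame-by-frame to the image generator G_I.
z_seq = MotionContentLatents()(num_frames=20)
```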

[025] The training of MoCoGAN still follows the alternating gradient update algorithm proposed with the GAN. First, D_V and D_I are trained while the neural network in G_I is frozen for a few batches; then G_I is trained for one batch while the neural networks in D_V and D_I are frozen. Each video sample sent to train D_V contains T consecutive frames from either the video training set or a generated video, and each image sample sent to train D_I is a single frame randomly selected from a training or generated video. Denoting the T-frame video sampler as S_T (T ≥ 1), the MoCoGAN training loss is given in Equations (3)-(5).
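A plausible form of Equations (3)-(5), following the original MoCoGAN formulation with the single-frame sampler S_1 and the T-frame sampler S_T, is:

```latex
\max_{G_I}\;\min_{D_I,\,D_V}\;
    \mathbb{E}_{X}\!\left[-\log D_I\!\big(S_1(X)\big)\right]
  + \mathbb{E}_{\tilde{X}}\!\left[-\log\!\big(1 - D_I(S_1(\tilde{X}))\big)\right]
  + \mathbb{E}_{X}\!\left[-\log D_V\!\big(S_T(X)\big)\right]
  + \mathbb{E}_{\tilde{X}}\!\left[-\log\!\big(1 - D_V(S_T(\tilde{X}))\big)\right],
```

where X ~ p_X is a training video and X̃ is a video produced by the generator G_I from the latent sequence.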

[026] The optimal values of the original MoCoGAN training loss defined in Equations (3)-(5) are quite challenging to obtain because, at optimality, they correspond to the Jensen-Shannon (JS) divergence between the training data distribution p_X and the generated data distribution p_X̃. The JS divergence is not continuous everywhere, and hence its gradients are prone to large oscillations. The Wasserstein-1 distance between p_X and p_X̃, by contrast, provides continuous gradients everywhere and is therefore easier to train with. Hence, the MoCoGAN training losses are redefined using the Wasserstein loss, as in Equations (6)-(7), where the output values of D_I and D_V are unbounded real-valued numbers rather than being further activated by the sigmoid function to restrict them to [0, 1].
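A plausible Wasserstein form of Equations (6)-(7), with the sigmoid activations removed from D_I and D_V, is:

```latex
\mathcal{L}_{D_I} \;=\; \mathbb{E}_{X}\!\left[D_I\!\big(S_1(X)\big)\right]
                 \;-\; \mathbb{E}_{\tilde{X}}\!\left[D_I\!\big(S_1(\tilde{X})\big)\right], \qquad (6)

\mathcal{L}_{D_V} \;=\; \mathbb{E}_{X}\!\left[D_V\!\big(S_T(X)\big)\right]
                 \;-\; \mathbb{E}_{\tilde{X}}\!\left[D_V\!\big(S_T(\tilde{X})\big)\right]. \qquad (7)
```

Here D_I and D_V act as critics maximizing these objectives over real versus generated samples, while G_I is trained to maximize the critics' scores on its outputs; the second (generated-sample) term of Equation (7) is reused below as the prior loss L_p(z).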

[027] Since a trained MoCoGAN generator converts each latent variable z^(l) to an image frame of a video, we can fine-tune the content of each z^(l) to obtain a correspondingly desired frame that matches some criterion. Such a method can be regarded as constrained image generation. The disclosed workflow searches for a series of latent vectors that can generate a 3D stratigraphic model whose depth-wise logs, sampled at certain well positions, match some reference measurements as closely as possible.

[028] These latent vectors can be optimized from random initializations via gradient backpropagation. The gradient is defined on the log matching loss with respect to the input latent vectors while the weights in G_I, D_V, and D_I are all frozen in the computational graph. For the constrained image generation problem in the context of a GAN, the log matching loss L_m(z) = L_c(z | y, M) + w · L_p(z) is defined as a weighted sum of the context loss L_c(z | y, M) and the prior loss L_p(z) with a weighting factor w. The context loss L_c(z | y, M) is the difference between the known measurements y and the logs sampled from the generated model, masked by a binary mask M in which 1 marks a location with a well and 0 marks otherwise. The prior loss L_p(z) is the second term in the video discriminator training loss of Equation (7), which penalizes unrealistic models. Without such a term, forcing the logs in the generated model to approach the known measurements may result in physically implausible models. A stochastic gradient method can then be used to find the optimal latent vectors by applying gradient updates to z iteratively.
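For illustration, a minimal sketch of this latent-vector optimization under the assumptions above (the names generator, video_critic, sample_logs, and generator.latent_dim are hypothetical placeholders, not the disclosed implementation):

```python
import torch

def constrained_generation(generator, video_critic, sample_logs, y, mask,
                           num_frames=20, w=0.1, steps=10000, lr=1e-2):
    """Optimize latent vectors z so that logs sampled from the generated model match y.

    generator: trained, frozen MoCoGAN generator; video_critic: trained critic D_V;
    sample_logs: differentiable operator extracting depth-wise logs at well positions;
    y: reference measurements; mask: binary well-location mask M.
    """
    z = torch.randn(num_frames, generator.latent_dim, requires_grad=True)  # random initialization
    for p in list(generator.parameters()) + list(video_critic.parameters()):
        p.requires_grad_(False)                       # network weights stay frozen; only z is updated

    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model_3d = generator(z)                       # generated 3D stratigraphic model (video)
        context = ((mask * (sample_logs(model_3d) - y)) ** 2).mean()  # L_c: masked log mismatch
        prior = -video_critic(model_3d).mean()        # L_p: critic term penalizing unrealistic models
        loss = context + w * prior                    # L_m = L_c + w * L_p
        loss.backward()
        opt.step()
    return z.detach()
```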

[029] It can become more challenging to train and obtain high-quality 3D stratigraphic models as their resolution grows, due to the curse of dimensionality. To address this problem, the workflow uses dimensionality reduction, allowing MoCoGAN to train on low-resolution samples X_E encoded from the high-resolution 3D stratigraphic models X of the training set.

[030] The Variational AutoEncoder (VAE) is a well-suited dimensionality reduction methodology to embed a high-dimensional distribution into a low-dimensional posterior distribution using an encoder-decoder architecture. After training, the VAE encoder yields the mean and the log standard deviation that parameterize the low-dimensional posterior distribution, while the VAE decoder samples from it and reconstructs the input distribution to the best extent possible.

[031] Aspects herein adopt a fully convolutional VAE to compress image frames in case they are too large to train MoCoGAN directly. The first step of this workflow is to train the VAE's encoder and decoder with all image frames in the high-resolution video training set with distribution p_X and embed them into a low-resolution 2D latent space with a posterior distribution. In this approach, as a preprocessing step, the entire video training set is encoded into a new, compressed one. Each compressed video sample has the same number of frames as the native sample X ~ p_X, but each frame has a lower spatial resolution and a new number of channels. The MoCoGAN is trained as before, but now on the compressed data X_E. The trained MoCoGAN generator now generates videos of the same size as any X_E in the encoded domain. The encoded video is then decoded by the VAE decoder, as a postprocessing step, back to the original domain to be a high-resolution 3D stratigraphic model. This combination of the VAE with MoCoGAN bridges the gap between noisy input latent vectors and high-resolution 3D stratigraphic models.
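A minimal sketch of the generation path under these assumptions (vae_decoder and mocogan_generator are hypothetical handles to the trained, frozen networks):

```python
import torch

def generate_high_res_model(vae_decoder, mocogan_generator, latents):
    """MoCoGAN emits a compressed video in the VAE latent domain;
    the VAE decoder then maps each frame back to full resolution."""
    with torch.no_grad():
        compressed = mocogan_generator(latents)        # e.g. shape (L', C', N', N') = (20, 8, 64, 64)
        frames = [vae_decoder(f.unsqueeze(0)) for f in compressed]  # each frame -> (1, C, N, N)
        return torch.cat(frames, dim=0)                # high-resolution model, e.g. (20, 4, 256, 256)
```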

Examples

[032] In one example embodiment, the video training set contains a few hundred 3D stratigraphic models created by our physics-based fluvial channel SFM with different petrophysical and rock facies configurations and random seeds. These models are created as 256 x 256 maps in the spatial x and y dimensions with 200 years of geological deposition. Since the SFM uses a deposition sampling rate of 2 years and 4 different kinds of rock facies, each model can be regarded as a video containing L = 101 frames of 4-channel images, with each channel of size 256 x 256. If we denote the number of channels per image frame as C and the image size as N x N, each video sample X is a 4D tensor of shape (L, C, N, N).
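As a quick arithmetic check of these dimensions (values taken from the paragraph above):

```python
years, sample_rate = 200, 2          # 200 years of deposition sampled every 2 years
L = years // sample_rate + 1         # 101 frames (initial state plus one per sampled step)
C, N = 4, 256                        # 4 rock-facies channels, 256 x 256 maps
print((L, C, N, N))                  # each video sample X: (101, 4, 256, 256)
```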

[033] As the first step to reduce the dimensionality, the VAE is trained to reduce the size of each image frame from (C, N, N) to (C', N', N') for all videos, where C' = 8 and N' = 64. Therefore, the size of the compressed video training set X_E is just 1/8 of the original training set X ~ p_X. MoCoGAN is then trained on X_E for 30000 batches, with each batch containing 128 image frames and 32 videos. Training of the VAE takes about half a day on an Nvidia RTX 2080 GPU, and training of MoCoGAN takes about 4 days on the same accelerator.

[034] After completing the training of both the VAE and MoCoGAN, aspects of this disclosure are able to create unconditional high-resolution 3D stratigraphic models of shape (L', C, N, N) where L' < L. FIG. 2 shows a few generated models of L' = 20 frames; they are created in less than one second. All rock facies, such as levee, overbank, point bar, and mud plug, are correctly created at the right locations, and the temporal correlation across 40 consecutive years of deposition is accurately maintained. In particular, we can see that mud plugs are created and deposited when a wide meander of the fluvial channel is cut off. All such spatial and temporal features that exist in the video training set are learned by MoCoGAN. With the help of the Wasserstein training loss, we also make sure the generated models have sufficient diversity in appearance.
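The stated 1/8 compression factor follows from the per-frame sizes:

```python
C, N = 4, 256            # original frame: 4 channels at 256 x 256
C2, N2 = 8, 64           # compressed frame: 8 channels at 64 x 64
ratio = (C2 * N2 * N2) / (C * N * N)
print(ratio)             # 0.125, i.e. the compressed set is 1/8 the size of the original
```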

[035] The workflow can also fine-tune the latent vectors z to create constrained 3D stratigraphic models whose logs at particular well locations approach arbitrarily given reference well measurements. In this test example, two well measurements regarding rock facies, sampled at two well locations, are provided, as shown in FIG. 3.

[036] The disclosed workflow, in one or more embodiments, iteratively optimizes z by recreating models, sampling logs to calculate the log matching loss L_m, and applying gradient updates until L_m reaches its minimum. After 10000 iterations of optimization, FIG. 4 shows the facies logs sampled from a constrained generated model. We can see that the overbank, levee, and mud plug are mostly recovered. FIG. 5 shows the corresponding conditioned 3D stratigraphic model from which the facies logs in FIG. 4 are sampled (red dots; left for Well 1 and right for Well 2).

[037] The disclosed framework can be used to rapidly create high-resolution 3D stratigraphic models. It takes advantage of GAN architectures for video, as exemplified by MoCoGAN, along with several customized improvements, such as the Wasserstein training loss for improved model diversity and a prior VAE for efficient model compression. After training, aspects of the disclosure create unconditional models that look similar to those in the training set within seconds, without the need to wait a couple of hours for the physics-based forward modeler. More importantly, since all the generated models are converted from a series of latent vectors, the disclosed workflows and platform can fine-tune these vectors to create constrained and realistic models whose logs, sampled at specific well locations, match the reference measurements. The disclosed platforms and workflows enable high-resolution 3D stratigraphic modeling driven by artificial intelligence. Their swiftness and compatibility enable seamless integration into any reservoir modeling workflow.

[038] In one example embodiment, a method of training a motion and content decomposed generative adversarial network is disclosed. The method may comprise training a video discriminator and an image discriminator with a neural network in an image generator using at least two batches of data, wherein the data batch sent to train the image discriminator comprises randomly selected data from one of actual data and generated data. The method may further comprise freezing neural networks for the video discriminator and the image discriminator. The method may also further comprise training, using at least one batch of data, the image generator with the neural networks for the video discriminator and the image discriminator.
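A minimal, generic sketch of such an alternating schedule (the network handles, loss helpers, and optimizers are hypothetical placeholders, not the claimed implementation):

```python
def set_frozen(net, frozen):
    """Freeze or unfreeze a network by fixing its weights in computations."""
    for p in net.parameters():
        p.requires_grad_(not frozen)

def training_round(image_gen, video_disc, image_disc, disc_batches, gen_batches,
                   disc_loss_fn, gen_loss_fn, opt_disc, opt_gen):
    # Phase 1: train the video and image discriminators while the image generator is frozen.
    set_frozen(image_gen, True)
    set_frozen(video_disc, False); set_frozen(image_disc, False)
    for real_videos, frames in disc_batches:              # at least two batches of data
        opt_disc.zero_grad()
        disc_loss_fn(video_disc, image_disc, image_gen, real_videos, frames).backward()
        opt_disc.step()

    # Phase 2: freeze the discriminators and train the image generator against them.
    set_frozen(video_disc, True); set_frozen(image_disc, True)
    set_frozen(image_gen, False)
    for noise in gen_batches:                              # at least one batch of data
        opt_gen.zero_grad()
        gen_loss_fn(video_disc, image_disc, image_gen, noise).backward()
        opt_gen.step()
```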

[039] In another example embodiment, the method may be performed wherein the data is actual data from a field location.

[040] In another example embodiment, the method may be performed wherein the data is generated data from a computer.

[041] In another example embodiment, the method may be performed wherein each data batch sent to train the video discriminator contains T consecutive frames from the data.

[042] In another example embodiment, the method may be performed wherein the generative adversarial network is two neural networks.

[043] In another example embodiment, the method may be performed wherein at least one of the two neural networks is a generator network.

[044] In another example embodiment, the method may be performed wherein at least one of the two neural networks is a discriminator network.

[045] In another example embodiment, the method may be performed wherein the data contains high-resolution three-dimensional stratigraphic modeling data.

[046] In another example embodiment, the method may be performed wherein artificial intelligence is used in the training.

[047] In another example embodiment, the method may be performed wherein the freezing of the neural networks for the video discriminator and the image discriminator includes fixing a number of weights in computations.

[048] In another example embodiment, a computer program product is disclosed. The product may comprise a computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method for generating a report, said method comprising: training a video discriminator and an image discriminator with a neural network in an image generator using at least two batches of data, wherein the data batch sent to train the image discriminator comprises randomly selected data from actual data or generated data. The method may further comprise freezing neural networks for the video discriminator and the image discriminator and training, using at least one batch of data, the image generator with the neural networks for the video discriminator and the image discriminator.

[049] In another example embodiment, the computer program product may be configured such that the computer is one of a server, a personal computer, a cellular telephone, and a cloud based computing arrangement.

[050] Language of degree used herein, such as the terms "approximately," "about," "generally," and "substantially" as used herein represent a value, amount, or characteristic close to the stated value, amount, or characteristic that still performs a desired function or achieves a desired result. For example, the terms "approximately," "about," "generally," and "substantially" may refer to an amount that is within less than 10% of, within less than 5% of, within less than 1% of, within less than 0.1% of, and/or within less than 0.01% of the stated amount. As another example, in certain embodiments, the terms "generally parallel" and "substantially parallel" or "generally perpendicular" and "substantially perpendicular" refer to a value, amount, or characteristic that departs from exactly parallel or perpendicular, respectively, by less than or equal to 15 degrees, 10 degrees, 5 degrees, 3 degrees, 1 degree, or 0.1 degree.

[051] Although a few embodiments of the disclosure have been described in detail above, those of ordinary skill in the art will readily appreciate that many modifications are possible without materially departing from the teachings of this disclosure. Accordingly, such modifications are intended to be included within the scope of this disclosure as defined in the claims. It is also contemplated that various combinations or subcombinations of the specific features and aspects of the embodiments described may be made and still fall within the scope of the disclosure. It should be understood that various features and aspects of the disclosed embodiments can be combined with, or substituted for, one another in order to form varying modes of the embodiments of the disclosure. Thus, it is intended that the scope of the disclosure herein should not be limited by the particular embodiments described above.