A stochastic simplex approximate gradient (StoSAG) for optimization under uncertainty

Journal Article (2016)
Author(s)

R. M. Fonseca (TU Delft - Reservoir Engineering)

B. Chen (University of Tulsa)

Jan Dirk Jansen (TU Delft - Geoscience and Engineering, TU Delft - Civil Engineering & Geosciences)

Albert C. Reynolds (University of Tulsa)

Department
Geoscience and Engineering
Copyright
© 2016 R.M. Fonseca, B. Chen, J.D. Jansen, Albert C. Reynolds
DOI related publication
https://doi.org/10.1002/nme.5342
Publication Year
2016
Language
English
Issue number
13
Volume number
109
Pages (from-to)
1756-1776
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We consider a technique to estimate an approximate gradient using an ensemble of randomly chosen control vectors, known as Ensemble Optimization (EnOpt) in the oil and gas reservoir simulation community. In particular, we address how to obtain accurate approximate gradients when the underlying numerical models contain uncertain parameters because of geological uncertainties. In that case, ‘robust optimization’ is performed by optimizing the expected value of the objective function over an ensemble of geological models. In earlier publications, based on the pioneering work of Chen et al. (2009), it has been suggested that a straightforward one-to-one combination of random control vectors and random geological models is capable of generating sufficiently accurate approximate gradients. However, this form of EnOpt does not always yield satisfactory results. In a recent article, Fonseca et al. (2015) formulate a modified EnOpt algorithm, referred to here as a Stochastic Simplex Approximate Gradient (StoSAG; in earlier publications referred to as ‘modified robust EnOpt’) and show, via computational experiments, that StoSAG generally yields significantly better gradient approximations than the standard EnOpt algorithm. Here, we provide theoretical arguments to show why StoSAG is superior to EnOpt.
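
To illustrate the distinction drawn in the abstract, below is a minimal NumPy sketch (not the authors' implementation) contrasting the standard EnOpt gradient estimate with the one-to-one StoSAG variant. The toy quadratic objective j(u, m), the perturbation size sigma, and the diagonal control covariance C_uu = sigma^2 * I are illustrative assumptions; the key difference shown is that StoSAG differences each perturbed objective against the unperturbed controls evaluated on the same geological realization, whereas standard EnOpt subtracts ensemble means, so variability across realizations leaks into the EnOpt estimate.

import numpy as np

rng = np.random.default_rng(0)

def j(u, m):
    """Toy objective: a quadratic whose optimum depends on the uncertain m."""
    return -np.sum((u - m) ** 2)

n_u = 5                           # number of controls
n_ens = 50                        # ensemble size
u_bar = np.zeros(n_u)             # current control estimate
sigma = 0.1                       # perturbation std (assume C_uu = sigma^2 * I)

models = rng.normal(1.0, 0.3, size=(n_ens, n_u))  # geological realizations
du = rng.normal(0.0, sigma, size=(n_ens, n_u))    # control perturbations

# One-to-one pairing: perturbed control i is run on realization i.
j_pert = np.array([j(u_bar + du[i], models[i]) for i in range(n_ens)])

# Standard EnOpt: mean-centered cross-covariance between controls and
# objective values; model-to-model variability enters each term.
g_enopt = (du - du.mean(0)).T @ (j_pert - j_pert.mean()) / (n_ens - 1) / sigma**2

# StoSAG (one-to-one variant): difference against the unperturbed controls
# on the *same* realization, so only the control perturbation contributes.
j_base = np.array([j(u_bar, models[i]) for i in range(n_ens)])
g_stosag = du.T @ (j_pert - j_base) / n_ens / sigma**2

# True gradient of the expected objective E_m[j(u, m)] at u_bar, for reference.
g_true = -2 * (u_bar - models.mean(0))
print("true   :", np.round(g_true, 2))
print("EnOpt  :", np.round(g_enopt, 2))
print("StoSAG :", np.round(g_stosag, 2))

Running the sketch, both estimators approximate the same expected gradient, but the StoSAG estimate is markedly less noisy at a given ensemble size, which is consistent with the computational findings the abstract attributes to Fonseca et al. (2015).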