Improved sampling strategies for ensemble-based optimization

Journal Article (2020)
Author(s)

K. R. Ramaswamy (Eindhoven University of Technology)

R. M. Fonseca (TNO)

Olwijn Leeuwenburgh (TNO, TU Delft - Reservoir Engineering)

M.M. Siraj (Eindhoven University of Technology)

P.M.J. van den Hof (Eindhoven University of Technology)

Research Group
Reservoir Engineering
Copyright
© 2020 K. R. Ramaswamy, R. M. Fonseca, O. Leeuwenburgh, M.M. Siraj, P.M.J. Van den Hof
DOI related publication
https://doi.org/10.1007/s10596-019-09914-8
Publication Year
2020
Language
English
Issue number
3
Volume number
24
Pages (from-to)
1057–1069
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We are concerned with the efficiency of stochastic gradient estimation methods for large-scale nonlinear optimization in the presence of uncertainty. These methods estimate an approximate gradient from a limited number of random input vector samples and the corresponding objective function values. Ensemble methods usually employ Gaussian sampling to generate the input samples. It is known from optimal design theory that the quality of sample-based approximations is affected by the distribution of the samples. We therefore apply six different sampling strategies to the optimization of a high-dimensional analytical benchmark problem and, in a second example, to the optimization of oil reservoir management strategies with and without geological uncertainty. The effectiveness of the sampling strategies is analyzed in terms of the quality of the estimated gradient, the final objective function value, the rate of convergence, and the robustness of the gradient estimate. Based on the results, an improved version of the stochastic simplex approximate gradient method is proposed that uses UE(s2) sampling designs for supersaturated cases and outperforms all alternative approaches. We additionally introduce two new strategies that outperform the UE(s2) designs previously suggested in the literature.
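The ensemble-based gradient estimation idea underlying the abstract can be illustrated with a minimal sketch: perturb the current control vector with Gaussian samples, evaluate the objective for each perturbed vector, and average the objective changes weighted by the perturbations that produced them. The quadratic objective, step size, and sample count below are illustrative assumptions for a runnable toy example, not the paper's method or experimental setup.

```python
import random

def objective(u):
    # Illustrative quadratic test objective with minimum at u = (1, ..., 1);
    # stands in for an expensive reservoir-simulation objective.
    return sum((x - 1.0) ** 2 for x in u)

def stochastic_gradient(u, sigma=0.1, n_samples=50, rng=None):
    # Ensemble-style gradient estimate: average the objective-value changes
    # weighted by the Gaussian perturbations that produced them, scaled by
    # the sampling variance (an EnOpt/StoSAG-flavoured sketch).
    rng = rng or random.Random(0)
    j0 = objective(u)
    grad = [0.0] * len(u)
    for _ in range(n_samples):
        du = [rng.gauss(0.0, sigma) for _ in u]
        dj = objective([ui + d for ui, d in zip(u, du)]) - j0
        for k, d in enumerate(du):
            grad[k] += dj * d
    return [g / (n_samples * sigma ** 2) for g in grad]

# Steepest-descent loop driven by the estimated gradient.
rng = random.Random(0)
u = [0.0] * 5
for _ in range(100):
    g = stochastic_gradient(u, rng=rng)
    u = [ui - 0.1 * gi for ui, gi in zip(u, g)]
```

The paper's point is that the quality of this estimate depends on how the perturbation samples `du` are distributed; replacing plain Gaussian sampling with a structured design (such as a UE(s2) supersaturated design) changes only the sampling step while keeping the rest of the loop intact.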