Quantification of the impact of data in reservoir modeling

Abstract

Global energy use is increasing. As societies advance, they will continue to need energy to power residential and commercial buildings, industrial processes, transportation and other vital services. To satisfy this rising demand, liquid fuels, natural gas, coal, nuclear power and renewable energy sources are being extensively developed. Fossil fuels (i.e. oil, natural gas and coal) in particular remain the world's largest source of energy. Petroleum exploration and production companies continuously develop new production technologies and enhance existing ones to increase recovery from existing fields. These companies rely on various tools to support their production and development decisions.

Reservoir modeling is a standard tool in the decision-making process: it allows analysis and prediction of reservoir flow behavior, identification of beneficial production strategies and evaluation of the associated risks. The models used for reservoir simulation contain a large number of imperfectly known parameters characterizing the reservoir flow, e.g. the permeability and porosity of the reservoir rock. The predictive value of such models is therefore limited and tends to deteriorate over time. History matching is employed to update the values of poorly known model parameters using the production data that become available during the production life of the reservoir, i.e. to adapt the parameters such that simulated results are consistent with measured production data. Such an approach generally improves the estimates of the model parameters and the predictive capability of the model. Remarkably, the information extracted from the measurements in the history matching phase is repeatedly found to be insufficient to produce a well-calibrated model with high predictive value. Hence, consideration of additional data can be of particular help. To limit the costs and effort associated with collecting new data and performing the computations, up-front selection of the most influential measurements and their locations is desirable. Methods to assess the impact of measurements on model parameter updating are therefore needed.

The research objective of this thesis was to develop efficient tools for quantifying the impact of measured data on the outcome of history matching of reservoir models, i.e. tools that provide a meaningful quantification of the impact of observations while requiring limited time and effort to be incorporated into the history matching algorithms. The research addressed history matching of a two-dimensional two-phase reservoir model representing a water flood, using production data (bottom-hole pressure at the injection well and oil and water flow rates at the production wells).

First, the applicability and implementation of a number of history matching algorithms were investigated. The representer method (RM) was considered as an example of a variational technique. The algorithm's key feature is the computation of a set of so-called representers, each describing the influence of a particular measurement on the estimate of the state and/or parameters. The RM was found to provide a reasonable parameter estimate, although it is computationally inefficient when dealing with large data sets. This motivated testing of the accelerated representer method (ARM), in which direct computation of the representers is avoided.
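The abstract does not reproduce the underlying formulas; as a minimal illustration of the idea in a linear-Gaussian setting (using generic notation assumed here, not the thesis's own), the updated parameter estimate is written as the prior plus a linear combination of representers, one per measurement:

\hat{m} = m_{\text{prior}} + \sum_{j=1}^{N_d} b_j \, r_j, \qquad (\Gamma + C_d)\,\mathbf{b} = \mathbf{d} - h(m_{\text{prior}}),

where r_j is the representer associated with measurement j, \Gamma_{ij} is the value of representer j at measurement i, C_d is the measurement error covariance, d is the vector of observations and h(m_prior) the corresponding prior predictions. Each representer thus makes the influence of an individual measurement on the estimate explicit, but computing all N_d representers is what makes the RM expensive for large data sets; this is the cost the ARM avoids.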
The results indicate that the accuracy of the ARM can be controlled to provide an outcome of the same accuracy as the RM, and that the ARM outperforms the classical RM in terms of computational speed when the number of assimilated measurements increases. In this thesis we developed a strategy to evaluate the number of operations performed by both methods, in order to assess the amount of data for which the ARM becomes the more beneficial choice.

The RM and the ARM require the model adjoint and are not intended for continuous (sequential) history matching, i.e. for incorporating newly obtained data into the model on the fly; instead, they perform history matching over a rather long time window using all available observations. The ensemble Kalman filter (EnKF) was discussed as an algorithm for continuous history matching. EnKF schemes do not require the model adjoint, which makes them very attractive for data assimilation with complex non-linear models. The use of the EnKF in reservoir engineering is, however, prone to producing physically unreasonable values of the state variables. This problem can be overcome by including a so-called confirmation step in the algorithm. The EnKF, particularly with a confirmation step, is often computationally demanding for large-scale applications. The asynchronous EnKF (AEnKF) is a modification of the EnKF that offers a practical way to perform history matching in such cases by updating the system with batches of measurements collected at times different from the time of the update. Hence, all observations collected during a certain time window can be history-matched at once at the end of the observation period, which allows the influence of observations collected at different times to be compared. Furthermore, the AEnKF does not rely on an adjoint model, although it resembles the approach usually followed in variational methods. Both the EnKF and the AEnKF demonstrated considerable improvement of the model parameter estimates compared to the prior and gave acceptable history matches. The equivalence of the AEnKF to variational techniques (e.g. the RM) makes it possible to evaluate whether ensemble Kalman filtering and variational methods utilize the observations in a similar manner. The representer method and the AEnKF were selected as platforms for quantifying the impact of measurements on history matching.

Second, in this thesis we developed a tool to quantify the impact of measured data on the outcome of history matching. The method was inspired by recent advances in meteorology and oceanography and is based on a so-called observation sensitivity matrix. This matrix can be used to evaluate the amount of information extracted from the available data during the data assimilation phase and to identify the observations that have contributed most to the parameter update. In particular, we used the diagonal elements of the matrix, known as self-sensitivities, as a quantitative measure of the influence of observed measurements on predicted measurements. Additionally, we proposed using the norm of the sensitivity matrix to assess the magnitude of the possible change in the accuracy of the model due to a corresponding change in the accuracy of the collected observations.
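The abstract leaves the construction of this matrix implicit; in the observation-influence literature from meteorology that inspired it (and under the assumption that the same construction carries over to the EnKF setting used here), the sensitivity matrix is S = HK, the sensitivity of the measurements predicted by the updated model to the observed values, with K the Kalman gain. A minimal Python sketch for an ensemble formulation, with illustrative variable names (Y_f, R) not taken from the thesis:

import numpy as np

def observation_sensitivity(Y_f, R):
    """Sketch: observation sensitivity matrix S = H K estimated from an EnKF ensemble.

    Y_f : (n_obs, n_ens) forecast ensemble mapped to observation space
    R   : (n_obs, n_obs) observation error covariance
    """
    n_ens = Y_f.shape[1]
    # Anomalies of the predicted observations about the ensemble mean
    Y_a = Y_f - Y_f.mean(axis=1, keepdims=True)
    # Ensemble estimate of H P H^T (covariance of the predicted observations)
    HPHt = Y_a @ Y_a.T / (n_ens - 1)
    # S = H K = H P H^T (H P H^T + R)^{-1}
    return HPHt @ np.linalg.inv(HPHt + R)

# self_sensitivities = np.diag(observation_sensitivity(Y_f, R))  # one value per measurement

The diagonal entries (the self-sensitivities) then quantify, per measurement, how much of the updated prediction comes from that measurement rather than from the prior.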
The observation sensitivity matrix is fast and easy to compute for both adjoint-based and EnKF-type history matching algorithms. The analysis performed with the aid of the observation sensitivity matrix confirmed that the RM and the AEnKF utilize the data with comparable effectiveness. Remarkably, for a simple test case the globally averaged influence of the observed measurements is only 4%, a rather low value compared to the 96% globally averaged influence of the prior. The observation sensitivity matrix can also be used to investigate the dependency between a measurement's location or type and its importance to history matching.
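If the globally averaged influence is taken to be the average self-sensitivity, i.e. the trace of S divided by the number of assimilated measurements (an assumption borrowed from the observation-influence literature rather than stated in this abstract), the reported split would follow from the sketch above as:

# Hypothetical continuation of the sketch above
obs_influence = np.trace(S) / S.shape[0]   # globally averaged influence of the observations (~0.04 reported)
prior_influence = 1.0 - obs_influence      # complementary influence of the prior (~0.96 reported)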