"uuid","repository link","title","author","contributor","publication year","abstract","subject topic","language","publication type","publisher","isbn","issn","patent","patent status","bibliographic note","access restriction","embargo date","faculty","department","research group","programme","project","coordinates"
"uuid:b3d264ce-e7dc-4e67-b0e1-94f3cc7831ca","http://resolver.tudelft.nl/uuid:b3d264ce-e7dc-4e67-b0e1-94f3cc7831ca","Geomechanical Study of Underground Hydrogen Storage","Ramesh Kumar, K. (TU Delft Reservoir Engineering)","Hajibeygi, H. (promotor); Jansen, J.D. (promotor); Delft University of Technology (degree granting institution)","2023","With the rise of renewable energy and the drive to achieve net-zero emissions, energy storage has become a crucial component of the energy sector to address the challenges of intermittency. The vast subsurface environment offers significant storage potential, capable of accommodating terawatt-hour (TWh) capacities. One approach to leverage this storage capacity involves converting renewable energy into hydrogen and storing it underground within salt caverns and depleted porous reservoirs. This stored hydrogen can then be utilized as needed. However, this cyclic injection and production of hydrogen will exert repeated stress on the subsurface, resulting in periodic changes in pressure.
One critical aspect that requires investigation for the safe storage of hydrogen (H2) is the field of geomechanics, which becomes essential in both salt caverns and depleted reservoirs. To gain a better understanding of this, a comprehensive review of the geomechanics involved in underground hydrogen storage was conducted to examine existing knowledge and identify research gaps. To delve deeper into the influence of geomechanics, particularly regarding the inelastic creep deformation of rocks in salt caverns and depleted porous reservoirs, numerical simulations were employed. Given the potential costliness of fine-scale simulations, multiscale simulations were carried out using algebraic multiscale methods. Constitutive models were utilized to analyze deformation patterns in and around the reservoir, assessing their impact on subsidence or uplift.
In order to further comprehend the effects of cyclic loading on rocks, constitutive models were developed based on extensive experimental data obtained from sandstone rocks subjected to long-term stress conditions. These models aided in uncovering the underlying physics of rock behavior when exposed to different stress regimes during prolonged cyclic loading. Subsequently, these models were integrated into finite element method (FEM) simulations to observe their impact on field-scale scenarios, with a synthetic Bergermeer case study serving as an example.
To enhance the computational efficiency of multiscale methods, unsupervised machine learning techniques were applied to optimize the formation of computational grids, utilizing graph theory techniques such as Louvain and random walk algorithms. These optimized grids were then compared with grids generated by METIS to evaluate the computational performance of pressure solvers in a commercial-scale simulator.","","en","doctoral thesis","","978-94-6366-759-3","","","","","","2023-11-01","","","Reservoir Engineering","","",""
"uuid:013727ea-91b4-47cb-959c-b83a1f2cbbdd","http://resolver.tudelft.nl/uuid:013727ea-91b4-47cb-959c-b83a1f2cbbdd","Material Fingerprinting: Understanding how differences in geology impact metallurgical plant performance","van Duijvenbode, J.R. (TU Delft Resource Engineering)","Buxton, M.W.N. (promotor); Jansen, J.D. (promotor); Soleymani Shishvan, M. (copromotor); Delft University of Technology (degree granting institution)","2023","To extract raw materials responsibly and sustainably, the minerals industry has to continuously optimise the mine-to-metal process and requires an entirely different valuation model. Currently, most operational decisions (e.g., ore-waste boundaries, short-term scheduling, blending policies, dispatch decisions) are evaluated using a revenue-based model. Such a model derives the value of the material from its estimated metal content (grade ⇥ tonnage). However, metallurgical attributes which largely define the processing costs (revenue losses) are left out of the equation since they are either missing or unreliable. In a future optimisation step, it should be possible to offset the anticipated revenue against an aggregated metallurgical cost based on, for example, energy consumption, throughput, recovery and reagent consumption. This drives the need for an improved method to describe the to-be-processed material, which determines the influence of geological behaviour (the material type) on the processing performance (associated with the costs)....","Material fingerprinting; geometallurgy; material tracking; metallurgical plant performance","en","doctoral thesis","","978-94-6384-430-7","","","","","","","","","Resource Engineering","","",""
"uuid:710856a6-4f0e-49f4-a9f2-b3b75cb72570","http://resolver.tudelft.nl/uuid:710856a6-4f0e-49f4-a9f2-b3b75cb72570","Sensing and data fusion opportunities for raw material characterisation in mining: Technology and data-driven approach","Desta, F.S. (TU Delft Resource Engineering)","Buxton, M.W.N. (promotor); Jansen, J.D. (promotor); Delft University of Technology (degree granting institution)","2021","The rising demands for mined products lead to the extraction of materials in geologically complex regions. This calls for mining process changes and interventions driven by technology and advanced data analytics. The dynamic development of state-of-the-art sensor technologies and their potential use in mining is projected to significantly reduce costs in the industry. However, despite rapid advances in sensor technologies, there is still a demand for novel data analytical approaches to enable accurate characterisation of material along the mining value chain, as advanced data analytics is key to gain knowledge from the complex sensor-derived data. Therefore, sensor technology, coupled with advanced data analytics is crucial for the rapid and accurate characterisation of material in mining operations. Access to rapid and accurate data on the key geological attributes (e.g., mineralogy and geochemistry) along the mining value chain has significant implications for the production process efficiency in commercial mines. Such data would greatly assist the improvement of deposit models, optimise ore processing, specify product quality and improve operational decision-making. Sensor technologies operate over a specific range of the electromagnetic spectrum and provide information on certain aspects of material properties that are of potential interest for mining extraction. However, a single sensor might not provide a sufficiently comprehensive description of a material’s composition. 
This introduces uncertainty into both resource estimation and requirements definition for mineral processing. Thus, it is necessary to utilise strategic sensor combinations to improve accuracy, minimise uncertainty, and enhance specific insights into material compositions. Combinations of sensors can be implemented using a data fusion approach. The fusion of sensed data can be realised at different levels: low-, mid-, and high-level, when the integration occurs at the data level, feature level and decision level, respectively. This research aims to develop methods for the characterisation of raw materials using multiple sensor technologies and a sensor combination concept (data fusion at different levels) that is potentially applicable to mining operations. The study involved multispectral and hyperspectral imaging techniques, such as red-green-blue (RGB) imaging, visible and near-infrared (VNIR) and short-wave infrared (SWIR) hyperspectral imaging, and point spectroscopic techniques, such as mid-wave infrared (MWIR), long-wave infrared (LWIR) and Raman spectroscopy, to acquire spectral information over a wider range of the electromagnetic spectrum. First, an investigation was conducted on the usability of the individual sensor technologies coupled with data analytics for the characterisation of a polymetallic sulphide deposit at different levels. The different levels of material characterisation aimed to allow mineral mapping, ore–waste discrimination, fragmentation analysis, and semi-quantitative analysis of elements and minerals. The positive outcomes of the use of the individual techniques led to the development of a data fusion framework that enables data integration (including multi-scale and multi-resolution data) at different levels (e.g., low-level and mid-level). 
The developed data fusion concept was implemented and validated using different test scenarios...","","en","doctoral thesis","","978-94-6423-318-6","","","","","","","","","Resource Engineering","","",""
"uuid:5f0f9b80-a7d6-488d-9bd2-d68b9d7b4b87","http://resolver.tudelft.nl/uuid:5f0f9b80-a7d6-488d-9bd2-d68b9d7b4b87","Delft Advanced Research Terra Simulator: General Purpose Reservoir Simulator with Operator-Based Linearization","Khait, M. (TU Delft Reservoir Engineering)","Voskov, D.V. (promotor); Jansen, J.D. (promotor); Delft University of Technology (degree granting institution)","2019","Numerical simulation is based on space and time discretization providing a trade-off between accuracy and computational performance. The operator-based Linearization (OBL) approach introduces an additional discretization domain for the physical description of fluid and rock. Delft Advanced Research Terra Simulator (DARTS) is a general-purpose reservoir simulation platform entirely built around the OBL approach. DARTS provides unique flexibility capabilities, allowing to customize physical description and control simulation process using the high-level Python programming language. Fully Implicit Method provides unconditional simulation stability, while the partial derivatives required for Jacobian assembly are computed analytically through OBL. At the same time, DARTS ensures exceptional simulation performance addressing it at three levels. Inexpensive linearization combined with a reduction in the nonlinearity is controlled by OBL discretization resolution form algorithmic level. Efficient C++ backend for critical kernels and state-of-the-art linear solver with two-stage CPR preconditioner composes the software level. Finally, the ability to perform the entire simulation loop on the GPU platform owing to the OBL approach constitutes the hardware level. 
The DARTS framework has already served as a platform for several academic and industrial research projects for different geo-energy applications including geothermal, CO2 sequestration and petroleum.","operator-based linearization; mass and energy transport; GPU; multiphase flow; compositional formulation","en","doctoral thesis","","978-94-6366-229-1","","","","","","","","","Reservoir Engineering","","",""
"uuid:30966f68-cea2-4669-93da-23a477d0978b","http://resolver.tudelft.nl/uuid:30966f68-cea2-4669-93da-23a477d0978b","Detection of factors that determine the quality of industrial minerals: An infrared sensor-based approach for mining and process control","Guatame-Garcia, Adriana (TU Delft Resource Engineering)","Buxton, M.W.N. (promotor); Jansen, J.D. (promotor); Delft University of Technology (degree granting institution)","2019","Industrial minerals are essential to human activity. The products derived from them make an integral part of a wide range of materials that are ubiquitously present in our daily lives. The performance and attributes of these materials depend significantly on the properties and quality of the industrial minerals and the products generated from them. These characteristics are ensured by the selection and mining of adequate ores, and by using various beneficiation and processing strategies to modify or enhance the original properties of the minerals.
One example of these strategies is calcination, in which the minerals are subject to thermal treatment. The success of the generation of high-quality products by using this technique partly depends on the capability of the plant to detect the factors that can degrade the quality of the raw ore, feed for calcination and final product. It also depends on its ability to inform and adapt the operations according to the presence of such factors. A possible approach for doing this is to characterise the minerals and materials with sensor technologies that can generate information on-site and in real-time, focusing on the identification of the degrading factors. Their timely detection can give operational feedback to the process and aid in the generation of high-quality products.
This thesis aims to develop methods for the detection of factors that determine the quality of industrial mineral products by using data derived from infrared sensors, which have the potential to be implemented in mining and process control. To do this, kaolin, perlite and diatomite have been selected as commodities that are relevant to the market and that represent different applications. This research shows the capacity of infrared sensor-based technologies to retrieve information, directly or indirectly, about the factors that affect the quality of industrial minerals at a lower cost and with comparable efficiency to other analytical methods.
If all of the system information is contained in the POD basis, the deflation method converges in one iteration. This behavior was compared with the usual choices of deflation vectors, which require more than 18 iterations for the same number of deflation vectors. If only part of this information is obtained, the POD-based deflation method gives a good initial solution, after one iteration the error of the solution is of order 10^{-4}. The applicability of the POD-based deflation method does not depend on the test case. It is implemented for reservoir simulation problems, but it can be implemented for any time-varying problem. Furthermore, we study its applicability for various 2L-PCG methods, but it can also be implemented together with many other linear solvers, e.g., multigrid, multilevel, and domain decomposition techniques. The implementation can also be extended to include various preconditioners.","Deflation; POD; Reservoir Simulation; Krylov Methods; Linear Solvers","en","doctoral thesis","","978-94-6380-284-0","","","","","","","","","Numerical Analysis","","",""
"uuid:31b1847e-e32c-482e-9e8f-286de866e751","http://resolver.tudelft.nl/uuid:31b1847e-e32c-482e-9e8f-286de866e751","Multiscale Analytical Derivative Formulations for Improved Reservoir Management","Jesus de Moraes, R. (TU Delft Reservoir Engineering)","Jansen, J.D. (promotor); Hajibeygi, H. (copromotor); Delft University of Technology (degree granting institution)","2018","The exploitation of subsurface resources is, inevitably, surrounded by uncertainty. Limited knowledge on the economical, operational, and geological setting are just a few instances of sources of uncertainty. From the geological point of view, the currently available technology is not able to provide the description of the fluids and rock properties at the necessary level of detail required by the mathematical models utilized in the exploitation decision-making process. However, even if a full, accurate description of the subsurface was available, the outcome of such hypothetical mathematical model would likely be computationally too expensive to be evaluated considering the currently available computational power, hindering the decision making process.
Under this reality, geoscientists are consistently making efforts to improve the mathematical models, while being inherently constrained by uncertainty, and to find more efficient ways to computationally solve these models.
Closed-loop Reservoir Management (CLRM) is a workflow that allows the continuous update of the subsurface models based on production data from different sources. It relies on computationally demanding optimization algorithms (for the assimilation of production data and control optimization) which require multiple simulations of the subsurface model. One important aspect for the successful application of the CLRM workflow is the definition of a model that can both be run multiple times in a reasonable timespan and still reasonably represent the underlying physics.
Multiscale (MS) methods – a reservoir simulation technique that solves a coarser simulation model, thus increasing computational speed, while still utilizing the fine-scale representation of the reservoir – figure as an accurate and efficient simulation strategy.
This thesis focuses on the development of efficient algorithms for subsurface model optimization by taking advantage of multiscale simulation strategies. It presents (1) multiscale analytical derivative computation strategies to efficiently and accurately address the optimization algorithms employed in the CLRM workflow and (2) novel strategies to handle the mathematical modeling of subsurface management studies from a multiscale perspective. On the latter, we specifically address a more fundamental multiscale aspect of data assimilation studies: the assimilation of observations from a distinct spatial representation compared to the simulation model scale.
As a result, this thesis discusses in detail the development of mathematical models and algorithms for the derivative computation of subsurface model responses and their application into gradient-based optimization algorithms employed in the data assimilation and life-cycle optimization steps of CLRM. The advantages are improved computational efficiency with accuracy maintenance and the ability to address the subsurface management from a multiscale view point not only from the forward simulation perspective, but also from the inverse modeling side.","multiscale simulation; analytical derivative computation; adjoint method; life-cycle optimization; data assimilation","en","doctoral thesis","","978-94-6186-990-6","","","","","","","","","Reservoir Engineering","","",""
"uuid:70a1e180-ef0c-4226-9af3-7e9dc3938c7f","http://resolver.tudelft.nl/uuid:70a1e180-ef0c-4226-9af3-7e9dc3938c7f","Sensor-based sorting opportunities for hydrothermal ore deposits: Raw material beneficiation in mining","Dalm, M. (TU Delft Resource Engineering)","Jansen, J.D. (promotor); Buxton, M.W.N. (copromotor); Delft University of Technology (degree granting institution)","2018","Sensor-based particle-by-particle sorting is a technique in which singular particles are mechanically separated on certain physical and/or chemical properties after determining these properties with a sensor. Sensor-based sorting machines can be incorporated into mineral processing operations in order to remove waste or sub-economic ore prior to conventional treatment. This has potential to reduce the consumption of energy and water during mineral processing and thereby decrease processing costs. Furthermore, sensor-based sorting can be used to separate different ore types in order to enhance control of the feed to mineral processing facilities and improve processing efficiency. For most ore types no sensors are known that can be used to detect the grade of ore particles. This is because many ores are polyminerallic rocks in which the economically important minerals occur in relatively low concentrations and in small grain sizes. However, the deposition of ore minerals during the formation of hydrothermal ore deposits is often related to specific hydrothermal alteration zones. This means that it might be possible to characterise the grade of such an ore by using sensors that are capable of detecting differences in hydrothermal alteration mineralogy. Sensors can be applied throughout the entire mining value chain to collect information on the characteristics of the mined ore in real-time. The information that sensors provide can be used to improve deposit models, improve ore quality control and optimise mineral processing. 
However, the applicability of real-time sensor technologies has not yet been assessed for many types of ore deposits. The aim of the study was to explore the opportunities and potential benefits of using sensors for real-time raw material characterisation in mining and investigate the opportunities for sensor-based particle-by-particle sorting at hydrothermal ore deposits. Investigating sorting opportunities was aimed at researching the applicability of real-time sensors to segment waste particles from ore particles and to distinguish between ore particles that represent different ore types. This is based on samples taken from the Los Bronces porphyry copper-molybdenum deposit, the Lagunas Norte epithermal gold-silver deposit, and the Cortez Hills Carlin-style gold deposit. For all the deposits included in the study, a fraction of the waste could be segmented by using a Visible to Near-InfraRed (VNIR) and Short-Wavelength InfraRed (SWIR) spectral sensor to detect the hydrothermal alteration mineralogy. For Lagunas Norte and Cortez Hills, this sensor could also be used to distinguish between different ore types. The ability to segment waste was based on indirect relationships between certain alteration mineral assemblages and the copper or gold grade. Since these relationships correspond to the alteration-mineralisation relationships that generally occur at each deposit type, there is potential that sensors can also be used to segment waste at other porphyry, epithermal or Carlin-style deposits. For all three deposits additional research is required to investigate whether it is economically feasible to use the discrimination capabilities of the VNIR-SWIR spectral sensor for sensor-based particle-by-particle sorting. 
The feasibility may be limited by surface contaminations of the ore particles feeding the sorter, the influence of water on the discrimination capabilities of the VNIR-SWIR sensor, and the sorting efficiency resulting from misclassification.","","en","doctoral thesis","","978-94-6186-946-3","","","","","","","","","Resource Engineering","","",""
"uuid:0ce45ddb-4932-4808-aa92-bd13deb85fa9","http://resolver.tudelft.nl/uuid:0ce45ddb-4932-4808-aa92-bd13deb85fa9","Algebraic Multiscale Framework for Fractured Reservoir Simulation","Tene, M. (TU Delft Reservoir Engineering)","Jansen, J.D. (promotor); Hajibeygi, H. (copromotor); Delft University of Technology (degree granting institution)","2018","Despite welcome increases in the adoption of renewable energy sources, oil and natural gas are likely to remain the main ingredient in the global energy diet for the decades to come. Therefore, the efficient exploitation of existing suburface reserves is essential for the well-being of society. This has stimulated recent developments in computer models able to provide critical insight into the evolution of the flow of water, gas and hydrocarbons through rock pores. Any such endeavour, however, has to tackle a number of challenges, including the considerable size of the domain, the highly heterogeneous spatial distribution of geological properties, as well as the intrinsic uncertainty and limitations associated with field data acquisition. In addition, the naturally-formed or artificially induced networks of fractures, present in the rock, require special treatment, due to their complex geometry and crucial impact on fluid flow patterns.
From a numerical point of view, a reservoir simulator’s operation entails the solution of a series of linear systems, as dictated by the spatial and temporal discretization of the governing equations. The difficulty lies in the properties of these systems, which are large, ill-conditioned and often have an irregular sparsity pattern. Therefore, a brute-force approach, where the solutions are directly computed at the original fine-scale resolution, is often an impractically expensive venture, despite recent advances in parallel computing hardware. On the other hand, switching to a coarser resolution to obtain faster results runs the risk of omitting important features of the flow, which is especially true in the case of fractured porous media.
This thesis describes an algebraic multiscale approach for fractured reservoir simulation. Its purpose is to offer a middle ground by delivering results at the original resolution, while solving the equations on the coarse scale. This is made possible by the so-called basis functions – a set of locally-supported cross-scale interpolators, conforming to the heterogeneities in the domain. The novelty of the work lies in the extension of these methods to capture the effect of fractures. Importantly, this is done in a fully algebraic fashion, i.e. without making any assumptions regarding geometry or conductivity properties.
In order to elicit the generality of the proposed approach, a series of sensitivity studies are conducted on a proof-of-concept implementation. The results, which include both CPU times and convergence behaviour, are discussed and compared to those obtained using an industrial-grade AMG package. They serve as benchmarks, recommending the inclusion of multiscale methods in next-generation commercial reservoir simulators.","algebraic multiscale methods; naturally fractured porous media; conductivity contrasts; compressible flow; multiphase transport","en","doctoral thesis","","978-94-6186-956-2","","","","","","","","","Reservoir Engineering","","",""
"uuid:cae98392-a0d2-4809-9473-c742d0424f33","http://resolver.tudelft.nl/uuid:cae98392-a0d2-4809-9473-c742d0424f33","Simulation-based optimization for decision making under uncertainty in opencast mines","Soleymani Shishvan, M. (TU Delft Resource Engineering)","Jansen, J.D. (promotor); Benndorf, J. (promotor); Delft University of Technology (degree granting institution)","2018","","","en","doctoral thesis","","978-94-6186-920-3","","","","","","","","","Resource Engineering","","",""
"uuid:3acfe30a-1c01-4851-b491-ca20b3b459ce","http://resolver.tudelft.nl/uuid:3acfe30a-1c01-4851-b491-ca20b3b459ce","Data assimilation in the minerals industry: Real-time updating of spatial models using online production data","Wambeke, T. (TU Delft Resource Engineering)","Jansen, J.D. (promotor); Benndorf, J. (promotor); Delft University of Technology (degree granting institution)","2018","Declining ore grades, extraction at greater depths and longer hauling distances put pressure on maturing mines. Not enough new mines will be commissioned on time to compensate for the resulting shortages. Ore-body replacement rates are relatively low due to a reduced appetite for exploration. Development times are generally increasing and most new projects are remote, possibly pushing costs further upwards.
To reverse these trends, the industry must collect, analyse and act on information to extract and process material more productively (i.e. maximize resource efficiency). This paradigm shift, driven by digital innovations, aims to (partly) eliminate the external variability that has made mining unique. The external variability results from the nature of the resource being mined. This type of variability can only be controlled if the resource base is sufficiently characterized and understood.
Recent developments in sensor technology enable the online characterization of raw material characteristics and equipment performance. To date, such measurements are mainly utilized in forward loops for downstream process control. A backward integration of sensor information into the resource model does not yet occur. Obviously, such a backward integration would significantly contribute to the progressive characterization of the resource base.
This dissertation presents a practical updating algorithm to continuously assimilate recently acquired data into an already existing resource model. The updating algorithm addresses the following practical considerations. (a) At each point in time, the latest solution implicitly accounts for all previously integrated data (sequential approach). During the next update, the already existing resource model is further adjusted to honour the newly obtained observations as well. (b) Due to the nature of a mining operation, it is nearly impossible to formulate closed-form analytical expressions describing the relationship between observations and resource blocks. Rather, the relevant relationships are merely inferred from the inputs (the resource model realizations) and outputs (distribution of predicted observations) of a forward simulator. (c) The updating algorithm is able to assimilate noisy observations made on a blend of material originating from multiple sources and locations. Differences in scale of support are dealt with automatically.
The developed algorithm integrates concepts from several existing (geo)statistical techniques. Co-Kriging approaches, for example, are designed to integrate both direct and indirect measurements and are well able to handle differences in accuracy and sampling volume. However, they fail to extract information from blended measurements and cannot sequentially incorporate new observations into an already existing resource model. To overcome the latter issue, the co-Kriging equations are merged into a sequential linear estimator. Existing resource models can now be improved using a weighted sum of differences between observations and model-based predictions (forward simulator output). The covariances, necessary to compute the weights, are empirically derived from two sets of Monte Carlo samples (another statistical technique): the resource model realizations (input forward simulator) and the observation realizations (output forward simulator). This approach removes the need to formulate analytical functions modelling spatial correlations, blending and differences in scale of support.
The resulting mathematical framework bears some resemblance to that of a dynamic filter (Ensemble Kalman filter), used in other research areas, although the underlying philosophy differs significantly. Weather forecasting and reservoir modelling, for example, consider dynamic systems repetitively sampled at the same locations. Each observation characterizes a volume surrounding the sample locations. Mineral resource modelling, on the other hand, focuses on static systems gradually sampled at different locations. Each observation is characteristic for a blend of material originating from multiple sources and locations. Each part of the material stream is sampled only once, the moment it passes the sensor.
Various options are implemented around the mathematical framework to reduce computation time, memory requirements or numerical inaccuracies. (a) A Gaussian anamorphosis is included to deal with suboptimal conditions related to non-Gaussian distributions. The algorithm structure ensures that the sensor precision (measurement error) can be defined in its original units and does not need to be translated into a normal score equivalent. (b) An interconnected parallel updating sequence (double helix) can be configured to avoid a covariance collapse (filter inbreeding). This occurs as degrees of freedom are lost over time due to the empirical calculation of the covariances. (c) A neighbourhood option is implemented to constrain computation time and memory requirements. Different neighbourhoods need to be considered simultaneously as material streams are blended. (d) Two covariance correction options are implemented to further inhibit the propagation of statistical sampling errors originating from the empirical computation of covariances.
A case-specific forward simulator is built and run in parallel to the more generally applicable updating code. The forward simulator is used to translate resource model realizations (input) into observation realizations (output). Empirical covariances are subsequently lifted from both realization sets and mathematically describe the link between sensor observations and individual blocks in the model. This numerical inference avoids the cumbersome task of formulating, linearising and inverting an analytical forward observation model. The application of a forward simulator further ensures that the distribution of the Monte Carlo samples already reflects the support of the concerned random values. As a result, the necessary covariances, derived from these Monte Carlo samples, inherently account for differences in scale of support.
A synthetic experiment is conducted to show that the algorithm is capable of assimilating inaccurate observations, made on blended material streams, into an already existing resource model. The experiment is executed in an artificial environment representing a mining operation with two extraction points of unequal production rate. A visual inspection of cross-sections shows that the model converges towards the ‘true but unknown reality’. Global assessment statistics quantitatively confirm this observation. Local assessment statistics further indicate that the global improvements mainly result from correcting local estimation biases.
Another 125 artificial experiments are conducted to study the effects of variations in measurement volume, blending ratio and sensor precision. The experiments investigate whether and how the resource model and the predicted observations improve over time. Based on the outcome, recommendations are formulated to optimally design and operate a monitoring system.
This work further describes the pilot testing of the updating algorithm at the Tropicana Gold Mine (Australia). The pilot aims to evaluate whether the updating algorithm can automatically reconcile ball mill performance data against the spatial Work Index estimates of the GeoMet model. The focus here lies on the ball mill since it usually is the single largest energy consumer at the mine site. The spatial Work Index estimates are used to predict a ball mill’s throughput. In order to maximize mill throughput and optimize energy utilization, it is important to get the Work Index estimates right. At the Tropicana Gold Mine, Work Index estimates, derived from X-Ray Fluorescence and Hyperspectral scanning of grade control samples, are used to construct spatial GeoMetallurgical models (GeoMet). Inaccuracies in the block estimates exist due to limited calibration between grade-control-derived and laboratory Work Index values. To improve the calibration, the updating algorithm was tested at the mine during a pilot study. Deviations between predicted and actual mill performance are monitored and used to locally improve the Work Index estimates in the GeoMet model. While assimilating about a week of mill performance data, the spatial GeoMet model converged towards a previously unknown reality. The updating algorithm improved the spatial Work Index estimates, resulting in a real-time reconciliation of already extracted blocks and a recalibration of future scheduled blocks. The case study shows that historic and future production estimates improve on average by about 72% and 26%, respectively.","Geostatistics; Data Assimilation; geometallurgy; resource engineering; mining; Discrete event simulation; material tracking","en","doctoral thesis","","978-94-6186-904-3","","","","","","","","","Resource Engineering","","",""
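The covariance-based updating step described in the abstract above can be illustrated with a minimal ensemble sketch. All names, block indices, blending ratios and noise levels below are hypothetical; the actual algorithm additionally uses a Gaussian anamorphosis, neighbourhoods and covariance corrections, none of which are shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_blocks, n_real = 50, 200                    # grid blocks, Monte Carlo realizations
prior = rng.normal(1.0, 0.3, size=(n_real, n_blocks))   # prior grade realizations

# Hypothetical forward simulator: the sensor sees a blend from two extraction
# points with unequal production rates (70/30), as in the synthetic experiment.
def forward(m):
    return 0.7 * m[:, 10] + 0.3 * m[:, 35]

d_pred = forward(prior)                       # predicted observations, one per realization
d_obs, sigma = 1.2, 0.05                      # actual sensor reading and its precision

# Empirical covariances lifted from both realization sets
C_md = np.cov(prior.T, d_pred)[:n_blocks, -1]   # block-to-observation covariance
C_dd = d_pred.var(ddof=1)                        # predicted-observation variance

gain = C_md / (C_dd + sigma**2)               # Kalman-type gain for one observation
perturbed = d_obs + rng.normal(0.0, sigma, n_real)
posterior = prior + np.outer(perturbed - d_pred, gain)
```

Because the covariances are computed numerically from the two realization sets, no analytical observation model has to be formulated, linearised or inverted; the blended support of the measurement is carried implicitly by the forward simulator.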
"uuid:c64d0e63-9e2f-406c-930d-ae33cc077edb","http://resolver.tudelft.nl/uuid:c64d0e63-9e2f-406c-930d-ae33cc077edb","Numerical simulation of foam flow in porous media","van der Meer, J.M. (TU Delft Reservoir Engineering)","Jansen, J.D. (promotor); Möller, M. (copromotor); Kraaijevanger, J.F.B.M. (copromotor); Delft University of Technology (degree granting institution)","2018","If secondary hydrocarbon recovery methods, like water flooding, fail because of the occurrence of viscous fingering one can turn to an enhanced oil recovery method (EOR) like the injection of foam. The generation of foam in a porousmedium can be described by a set of partial differential equations with strongly non-linear functions, which impose challenges for the numerical modeling. Former studies [1–3] show the occurrence of strongly temporally oscillating solutions when using forward simulation models, that are entirely due to discretization artifacts. We describe the foam process by an immiscible two-phase flow model where gas is injected in a porousmedium filled with a mixture of water and surfactants. The change from pure gas into foam is incorporated in the model through a reduction in the gas mobility. Hence, the two-phase description of the flow stays intact. Since the total pressure drop in the reservoir is small, both fluids can be considered incompressible [3]. However, whereas the fractional flow function for a gas-flooding process is a smooth function of water saturation, the generation of foam will cause a rapid increase of the flux function over a very small saturation scale. Consequently, the derivatives of the flux function can become extremely large and impose a severe constraint on the time step. We address the stability issues of the foam model, by numerous numerical approaches that improve the accuracy of the solutions. 
First, we study several averaging schemes and introduce a novel way of approximating the foam mobility functions on the grid interfaces in a finite volume framework. This leads to solutions that are significantly smoother than can be achieved with standard averaging schemes. Next, we discuss several novel discretization schemes where the discontinuity is incorporated in the numerical fluxes for a simplified compressible flow model. These include the indirect addition of an extra grid interface at the location of the discontinuity, to preserve monotonicity of the solutions in time. Variations on this method are the addition of an extra grid cell around the highly non-linear phase transition and the adaptation of the flux terms based on the location of the discontinuity or non-linearity in the grid. As a practical example to demonstrate these techniques we study a simplified model for foam flow in porous media. The model is then extended to a two-dimensional reservoir, where the accuracy of the solutions is a main concern. The two-dimensional simulator that is used for this was built and tested for the foam model. It includes higher-order hyperbolic Riemann solvers and flux correction schemes to compute the saturation of the different fluid phases in the model. The elliptic solver for the pressure equation is also adapted to the stiffness of the problem. With this simulator we perform a quantitative study of the stability characteristics of the flow, to gain more insight into the important wavelengths and scales of the foam model. This insight forms an essential step towards the design of a suitable computational solver that captures all the appropriate scales, while retaining computational efficiency. In addition, we present a qualitative analysis of the effect of different reservoir and fluid properties on the foam fingering behavior. In particular, we consider the effect of heterogeneity of the reservoir, injection rates, and foam quality. 
This leads to interesting observations about the influence of the different foam parameters on the stability of the solutions, and we are able to predict the flow stability for different foam qualities. Finally, we discuss several other approaches that were explored during this PhD project to increase the understanding of solving highly non-linear flow problems in a porous medium.","Foam flow in porous media; Local-equilibrium models; Finite volume methods; Stability analysis; Reservoir simulation","en","doctoral thesis","","978-94-6233-863-0","","","","","","","","","Reservoir Engineering","","",""
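The interface-averaging question raised in the abstract above can be sketched in a few lines. The mobility function, the sharp foam mobility-reduction factor and all parameter values (fmmob, sw_star, eps) are illustrative stand-ins, not the thesis model, and the schemes shown are the standard options the thesis compares against, not its novel interface approximation.

```python
import numpy as np

def mob_gas(sw, fmmob=1000.0, sw_star=0.3, eps=0.02):
    """Gas mobility with a foam mobility-reduction factor that switches
    sharply around a critical water saturation sw_star (toy model)."""
    krg = (1.0 - sw) ** 2                     # simple gas relative permeability
    frm = 1.0 + fmmob * 0.5 * (1.0 + np.tanh((sw - sw_star) / eps))
    return krg / frm

def face_mobility(sw_left, sw_right, scheme="arithmetic"):
    """Approximate the gas mobility at a finite-volume grid interface."""
    ml, mr = mob_gas(sw_left), mob_gas(sw_right)
    if scheme == "arithmetic":
        return 0.5 * (ml + mr)
    if scheme == "harmonic":
        return 2.0 * ml * mr / (ml + mr)
    if scheme == "saturation":                # average saturations, then evaluate
        return mob_gas(0.5 * (sw_left + sw_right))
    raise ValueError(scheme)
```

Because the arithmetic mean is always at least the harmonic mean, a harmonic face mobility is dominated by the low-mobility (foamed) side of the interface, which is one reason the choice of averaging scheme strongly affects smoothness of the solutions near the sharp flux transition.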
"uuid:9667dc41-c736-47e6-b818-78c7c50fb08d","http://resolver.tudelft.nl/uuid:9667dc41-c736-47e6-b818-78c7c50fb08d","Value of information in closed-loop reservoir management","Goncalves Dias De Barros, E. (TU Delft Reservoir Engineering)","Jansen, J.D. (promotor); Van den Hof, Paul M.J. (promotor); Delft University of Technology (degree granting institution)","2018","Over the past decades, many technological advances have unlocked new opportunities to boost efficiency in the oil and gas industry (e.g., complex well drilling, injection of advanced chemicals, sophisticated instrumentation). The real engineering challenge is to apply these technologies in the best possible way for each particular case. This leads to very difficult decisions to be made, mainly because every oil and gas field is one of its kind and our knowledge of the subsurface is very limited. Many efforts have been made to develop tools to support these decisions by applying a more systematic approach to determine smart exploitation strategies. Yet, very little has been done on the optimization of reservoir surveillance plans to establish the best observations to monitor de field response to the exploitation strategies, which, in turn, can also contribute to a better exploitation of the reservoir.
In this thesis we propose a methodology to assess the value of future measurements as a first step towards the development of a framework to optimize the design of reservoir surveillance plans. We also investigate alternatives to improve current reservoir management approaches by recommending actions which anticipate the availability of future information and account for the impact of immediate decisions on the decisions to be made in the future.
Throughout the chapters, we discuss how to combine a variety of topics (e.g., model-based optimization, data assimilation, uncertainty quantification) with more unusual ingredients (e.g., plausible truths, clairvoyance, flexible plans) to develop a methodology which can be applied to many problems involving decision making and learning. Despite being motivated by a real application, this research addresses abstract concepts such as value and information, but always from an engineering perspective. This makes us approach the problem in a different way, which, we hope, will inspire innovative solutions in the future.","value of information; closed-loop reservoir management; reservoir surveillance; geological uncertainty; robust optimization; data assimilation; plausible truths; representative models; clustering; stochastic programming","en","doctoral thesis","","978-94-6366-009-9","","","","","","","","","Reservoir Engineering","","",""
"uuid:1572a346-95c9-43a5-bf81-81d1fbfde2e9","http://resolver.tudelft.nl/uuid:1572a346-95c9-43a5-bf81-81d1fbfde2e9","Real-time resource model updating in continuous mining environment utilizing online sensor data","Yuksel-Pelk, C. (TU Delft Resource Engineering)","Jansen, J.D. (promotor); Benndorf, J. (promotor); Buxton, M.W.N. (copromotor); Delft University of Technology (degree granting institution)","2017","In mining, modelling of the deposit geology is the basis for many actions to be taken in the future, such as predictions of quality attributes, mineral resources and ore reserves, as well as mine design and long-term production planning. The essential knowledge about the raw materialproduct is based on this model-based prediction, which comes with a certaindegree of uncertainty. This uncertainty causes one of the most common problems in the mining industry, predictions on a small scale such as a train load or daily production are exhibiting strong deviations from reality.Some of the most important challenges faced by the lignite mining industry are impurities located in the lignite deposit. Most of the times, these high ash values cannot be captured completely by exploration data and in the predicted deposit models. This lack of information affects the operational process.","mining; online sensor data","en","doctoral thesis","","978-94-6233-803-6","","","","","","","","","Resource Engineering","","",""
"uuid:3bcb57b0-379c-4a13-a297-ffa9e9ce0910","http://resolver.tudelft.nl/uuid:3bcb57b0-379c-4a13-a297-ffa9e9ce0910","Identification of flow-relevant structural features in history matching","Kahrobaei, S.S. (TU Delft Reservoir Engineering)","Jansen, J.D. (promotor); van den Hof, P.M.J. (promotor); Delft University of Technology (degree granting institution)","2016","","history matching; ill-posed problems; inverse problems; transfer function; identifiability; identification; structural features; flow barrier","en","doctoral thesis","","978-94-6233-321-5","","","","","","","","","Reservoir Engineering","","",""
"uuid:e66b1e00-b4c2-43b8-91fa-c57773fcf24b","http://resolver.tudelft.nl/uuid:e66b1e00-b4c2-43b8-91fa-c57773fcf24b","A Modified Gradient Formulation for Ensemble Optimization under Geological Uncertainty","Fonseca, R.M.","Jansen, J.D. (promotor); Van den Hof, P.M.J. (promotor)","2015","In this dissertation we have investigated theoretical and numerical aspects of the Ensemble Optimization (EnOpt) technique for model based production optimization. We have proposed a modified gradient formulation for robust optimization which we show to be theoretically more robust than the earlier existing formulation. Through a series of numerical experiments we illustrate the impact of ensemble size on the quality of an ensemble gradient and illustrate the superior performance of the modified gradient formulation. We also show that this modified gradient formulation hereafter referred to as Stochastic Simplex Approximate Gradient (StoSAG) shows comparable performance to an Adjoint based robust optimization. Additionally we have investigated the impact of a Covariance Matrix Adaption procedure to improve the EnOpt technique. This new CMA-EnOpt was shown to improve robustness of the method to an initial user defined choice of the covariance matrix. Most real world problems need multiple objectives to be optimized, in this dissertation we have investigated the applicability of EnOpt for multi-objective optimization and generation of Pareto trade-off curves. 
Finally, many of the newly proposed modifications were applied to a sector model of a real field case, where we demonstrate the flexibility of EnOpt (StoSAG) as well as the significant practical value which can be achieved when using Ensemble Optimization for model-based production optimization, especially under geological uncertainty.","Ensemble Optimization; Geological Uncertainty; Multi-objective optimization; Robust optimization; Pareto Fronts; CMA-EnOpt; Gradient Quality","en","doctoral thesis","","","","","","","","2016-01-20","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
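The StoSAG idea described in the abstract above — pairing one control perturbation with each geological realization and regressing the objective changes on the perturbations — can be sketched with a toy objective. The objective function, ensemble sizes and step lengths below are illustrative; the dissertation applies this to reservoir-simulator NPV.

```python
import numpy as np

rng = np.random.default_rng(1)

def J(u, m):
    """Toy objective standing in for NPV: controls u evaluated on realization m."""
    return -np.sum((u - m) ** 2)

n_u, n_ens = 5, 50
models = rng.normal(1.0, 0.2, size=(n_ens, n_u))   # ensemble of geological realizations
u = np.zeros(n_u)                                  # initial control vector

for _ in range(20):
    # StoSAG pairing: one control perturbation per model realization, each
    # objective change evaluated against its *own* realization.
    dU = rng.normal(0.0, 0.1, size=(n_ens, n_u))
    dJ = np.array([J(u + dU[i], models[i]) - J(u, models[i]) for i in range(n_ens)])
    g, *_ = np.linalg.lstsq(dU, dJ, rcond=None)    # regression-based ensemble gradient
    u = u + 0.05 * g                               # steepest-ascent step
```

The 1:1 pairing of perturbations and realizations is what distinguishes this robust-gradient formulation from evaluating every perturbation on every model, and it makes the cost per iteration one simulation per ensemble member.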
"uuid:7b1cdc6f-3fee-4ada-bd59-fb608bf0ca42","http://resolver.tudelft.nl/uuid:7b1cdc6f-3fee-4ada-bd59-fb608bf0ca42","Model-based Optimization of Oil Recovery: Robust Operational Strategies","Van Essen, G.M.","Jansen, J.D. (promotor); Van den Hof, P.M.J. (promotor)","2015","The process of depleting an oil reservoir can be poured into an optimal control problem with the objective to maximize economic performance over the life of the ?eld. Despite its large potential, life-cycle optimization has not yet found its way into operational environments. The objective of this thesis is to improve operational applicability of model-based optimization of oil recovery. The reluctance of oil and gas companies to adopt this technology in their operational environments can mainly be contributed to the large uncertainties that come into play when optimizing production over the entire life of a ?eld and - in effect - the lack of faith that exists in the available methods and models. These uncertainties are of varying nature and originate from different sources. This leads to the main research question of this thesis: Can the performance of model-based life-cycle optimization of oil and gas production in realistic circumstances be improved by addressing un-certainty in the optimization problem? In this thesis, two approaches to address this research question are presented, related to the choice for a ?xed or adaptive operational strategy. For a ?xed strategy, three methods are described: hierarchical optimization, robust optimization, and integrated dynamic optimization and feedback control. For adaptive operational strategies, two aspects are investigated in a more exploratory setting: the combination of different data sources and the frequency of sequential model updating and re-optimization. The methods laid out in this thesis provide improved economic life-cycle performance under uncertainty in a number of examples. 
While presented as separate methods, they are not mutually exclusive and could be combined into a single workflow. Although all the examples involve waterflooding as recovery mechanism, the scope for life-cycle optimization may be larger for enhanced (tertiary) oil recovery methods because of the generally higher up- and downside potential of these techniques. Application of the methods to a real petroleum reservoir is still required to evaluate their merit in a truly realistic environment.","oil recovery; optimization; waterflooding; reservoir simulation","en","doctoral thesis","","","","","","","","2015-08-05","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:d43e4340-e43b-4c3c-bb7f-7a3d0d59b8fa","http://resolver.tudelft.nl/uuid:d43e4340-e43b-4c3c-bb7f-7a3d0d59b8fa","Using Distributed Fiber-Optic Sensing Systems to Estimate Inflow and Reservoir Properties","Farshbaf Zinati, F.","Jansen, J.D. (promotor); Luthi, S.M. (promotor)","2014","Recent developments in the deployment of distributed fiber-optic sensing systems in horizontal wells carry the promise to lead to a new, cheap and reliable way of monitoring production and reservoir performance. Practical applicability of distributed pressure sensing for quantitative inflow detection will strongly depend on the specifications of the sensors, details of which are currently not yet publicly available. We therefore theoretically examined the possibility to identify reservoir inflow from distributed measurements in the well. The first chapter gives a common definition of ‘smart wells’ concept, as used in hydrocarbon production. Conventional and newly-emerging well monitoring and reservoir surveillance techniques are briefly reviewed and the advantages of recent fiber-optic sensing systems are addressed. The significance of filling the gap between advanced monitoring and control technology by means of robust interpretation methods is discussed. In the second chapter a single-phase transient model for the fluid flow in the wellbore is used to investigate the time span in which dynamic phenomena in the wellbore occur. The model is based on a numerical method utilizing a flux splitting scheme and standard first-order-accurate upstream discretization is presented. Moreover the most important parameters influencing the pressure drop over a long horizontal well are investigated. The results suggested that the dynamics of the wellbore are significantly faster than the dynamics of the reservoir. 
Therefore, in coupling a (numerical/analytical) reservoir simulator with a wellbore model, the dynamics of the wellbore can be neglected for the sake of simplicity and higher computational speed. Furthermore, the presented numerical experiments illustrated that the duration of the transient state in the wellbore was affected by the fluid compressibility. The well length, wellbore diameter and total production rate were the most important parameters influencing the total pressure drop over the entire length of the well. In the third chapter the possibility to identify reservoir inflow from distributed pressure measurements in the well is theoretically examined. The wellbore and near-wellbore are described by semi-analytical steady-state models, and a gradient-based inversion method is applied to estimate the specific productivity index (SPI) as a function of along-well position. To obtain the gradients, the adjoint method is used, which results in a computationally very efficient inversion scheme. With the aid of two numerical experiments, the effects of well and reservoir parameters, sensor spacing, sensor resolution and measurement noise on the quality of the inversion results are investigated. The results showed that under single-phase steady-state conditions in the reservoir and the wellbore, SPIs and the associated inflow profile can be estimated from distributed pressure sensors. However, the inversion results are affected by sensor resolution, measurement noise and by the number of measurements compared to the number of unknown parameters. The negative effects of measurement noise and low sensor resolution are strongest in those areas of the well where the influx is smallest, i.e. usually close to the toe. This is mainly due to the small pressure gradients along the wellbore, which make estimation of the flow rate, and thus of the specific influx and of the SPI, very inaccurate. 
The low computational time required for the proposed inversion methodology is of potential importance for applications in the real-time control of smart wells, e.g. to control coning behavior using measurements of gas or water influx. In chapter four, the gradient-based minimization technique utilizing the adjoint method, as described in chapter three, is extended to the transient problem. Transient semi-analytical reservoir models are combined with adjoint-based minimization algorithms to estimate reservoir properties from dynamic recordings of distributed pressure sensors in the well. Instantaneous sink/source function methods, along with the principle of superposition, are employed to create a dynamic forward model for the coupled well-reservoir system. Analyzing measurements taken by distributed pressure sensor systems under dynamic conditions enables identifying properties of reservoir zones, i.e. permeability and reservoir dimensions. By applying the proposed inversion methodology to transient pressure measurements, reservoir properties that influence the specific productivity index of each individual zone are independently estimated. In chapter five, the inversion methodology of chapter three is extended to multi-phase fluid flow conditions. Resistivity measurements, in addition to distributed pressure measurements, are employed to estimate water and oil inflow of reservoir zones. In this approach, a semi-analytical model of two-phase oil and water flow in the reservoir and wellbore is used to estimate two-phase specific productivity indices. Through several synthetic examples the effects of measurement noise and wellbore-reservoir geometry on the inversion results are investigated. Under steady-state conditions, the SPIs corresponding to oil and water phases and the associated oil and water inflow profiles can be estimated from distributed pressure and resistivity sensors. 
Combination of pressure and resistivity measurements leads to a fairly accurate estimation of the location and amount of different phases entering the wellbore from the reservoir.","distributed pressure sensing; parameter estimation; inflow estimation; DPS; production optimization; adjoint method; downhole monitoring; fiber optic sensors","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
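The gradient-based SPI inversion summarized in the abstract above can be caricatured in a few lines. A finite-difference gradient replaces the adjoint here purely for brevity, and the forward model (segment inflow proportional to drawdown, cumulative rate along the well) and every parameter value are hypothetical, much simpler than the thesis's semi-analytical well-reservoir model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_seg = 20
spi_true = 0.5 + 0.4 * np.exp(-np.linspace(0.0, 3.0, n_seg))  # higher influx at the heel
p_res = 200.0
p_well = 180.0 + 0.05 * np.arange(n_seg)                      # drawdown varies along well

def forward(spi):
    """Steady-state segment inflow and the cumulative rate seen at each sensor."""
    q = spi * (p_res - p_well)        # specific inflow per segment
    return np.cumsum(q)               # rate profile measured along the well

obs = forward(spi_true) + rng.normal(0.0, 0.05, n_seg)        # noisy distributed data

# Gradient-descent inversion of the SPI profile from the distributed measurements
spi = np.full(n_seg, 0.5)
for _ in range(500):
    r = forward(spi) - obs
    grad = np.empty(n_seg)
    for j in range(n_seg):            # finite-difference gradient of 0.5*||r||^2
        e = np.zeros(n_seg)
        e[j] = 1e-6
        grad[j] = np.dot(forward(spi + e) - forward(spi), r) / 1e-6
    spi = spi - 1e-5 * grad
```

The adjoint method of the thesis computes the same gradient with essentially one extra (backward) model run instead of one perturbed run per parameter, which is what makes the inversion cheap enough for real-time use.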
"uuid:deffd661-aa01-43f2-bbac-acff15e7ccc6","http://resolver.tudelft.nl/uuid:deffd661-aa01-43f2-bbac-acff15e7ccc6","Quantification of the impact of data in reservoir modeling","Krymskaya, M.V.","Heemink, A.W. (promotor); Jansen, J.D. (promotor)","2013","Global energy use is increasing. As societies advance, they will continue to need energy to power residential and commercial buildings, in the industrial sector, for transportation and other vital services. To satisfy this rising demand, liquid, natural gas, coal, nuclear power and renewable fuel sources are extensively developed. Particularly fossil fuels (i.e. oil, natural gas and coal) remain the largest source of energy for the world. Petroleum exploration and production companies continuously develop new and enhance current production technologies to increase recovery from the existing fields. These companies rely on various tools to support their production and development decisions. Reservoir modeling is a standard tool used in the decision making process allowing analysis and prediction of the reservoir flow behavior, identification of beneficial production strategies and evaluation of the associated risks. The models used for reservoir simulation contain a large number of imperfectly known parameters characterizing the reservoir flow, e.g. permeability and porosity of the reservoir rock. Therefore the predictive value of such models is limited and tends to deteriorate in time. History matching is employed to update the values of poorly known model parameters in time with the help of the production data which become available during the production life of the reservoir, i.e. to adapt parameters such that simulated results are consistent with measured production data. Such an approach generally improves estimates of the model parameters and the predictive capability of the model. 
Remarkably, the information extracted from the measurements in the history matching phase is repeatedly found to be insufficient to provide a well-calibrated model with a high predictive value. Hence, consideration of additional data can be of particular help. To optimize the costs and effort associated with collection of new data and computations, up-front selection of the most influential measurements and their locations is desirable. Methods to assess the impact of measurements on model parameter updating are therefore needed. The research objective of this thesis was to develop efficient tools for quantifying the impact of measured data on the outcome of history matching of reservoir models, i.e. tools that provide a meaningful quantification of the impact of observations, while requiring limited time and effort to be incorporated in the history matching algorithms. This research addressed history matching of a two-dimensional two-phase reservoir model, representing a water flood, with production data (bottom-hole pressure at the injection well and oil and water flow rates at the production wells). First, the applicability and implementation of a number of history matching algorithms were investigated. The representer method (RM) has been considered as an example of variational techniques. The algorithm’s key feature is the computation of a set of so-called representers describing the influence of a certain measurement on an estimation of the state and/or parameter. The RM was found to provide a reasonable parameter estimate, although it is computationally inefficient for dealing with large data sets. This motivated testing of the accelerated representer method (ARM), where direct computation of representers is avoided. The results indicate that the accuracy of the ARM can be controlled to provide an outcome of the same accuracy as the RM, and that the ARM outperforms the classical RM in terms of computational speed when the number of assimilated measurements increases. 
In this thesis we developed a strategy to evaluate the number of operations performed by the methods, to assess for which amount of data the ARM becomes beneficial to use. The RM and the ARM require the model adjoint and are not intended for continuous (sequential) history matching, namely for incorporating obtained data in the model on the fly. Instead, they perform history matching over a rather long time window using all available observations. The ensemble Kalman filter (EnKF) has been discussed as an algorithm for continuous history matching. The EnKF schemes do not require the model adjoint, which makes them very attractive for data assimilation with complex non-linear models. The use of the EnKF in reservoir engineering, however, is prone to producing physically unreasonable values of the state variables. The problem can be overcome by including a so-called confirmation step in the algorithm. The EnKF, particularly with a confirmation step, is often computationally demanding for large-scale applications. The asynchronous EnKF (AEnKF) is a modification of the EnKF which offers a practical way to perform history matching in such cases by updating the system with batches of measurements collected at times different from the time of the update. Hence, all observations collected during a certain time window can be history-matched at once at the end of the observational period. This allows for comparison of the influence of the observations collected at different times. Furthermore, it does not rely on an adjoint model, though it resembles the approach usually followed in variational methods. Both the EnKF and the AEnKF demonstrated considerable improvement of the model parameter estimates compared to the prior and gave acceptable history matches. Since the AEnKF allows for history matching all the data gathered throughout the observational period at once, it permits comparison of the effect of observations collected at different time instances. 
The equivalence of the AEnKF to variational techniques (e.g. the RM) makes it possible to evaluate whether ensemble Kalman filtering and variational methods utilize the observations in a similar manner. The representer method and the AEnKF were selected to be used as platforms for quantification of the measurement impact on history matching. Secondly, in this thesis we developed a tool to quantify the impact of measured data on the outcome of history matching. The method has been inspired by recent advancements in meteorology and oceanography, and is based on a so-called sensitivity matrix. This matrix can be used to evaluate the amount of information extracted from available data during the data assimilation phase and to identify the observations that have contributed most to the parameter update. In particular, we used the diagonal elements of the matrix, known as self-sensitivities, as a quantitative measure of the influence of observed measurements on predicted measurements. Additionally, we have proposed a way to use the norm of the sensitivity matrix for assessing the magnitude of possible change in the accuracy of the model due to the respective change in the accuracy of collected observations. The observation sensitivity matrix is fast and easy to compute both for adjoint-based and EnKF-type history matching algorithms. The analysis performed with the aid of the observation sensitivity matrix has confirmed that the RM and the AEnKF utilize the data with comparable effectiveness. Remarkably, for a simple test case the global averaged influence of the observed measurements is only 4%. This is a rather low value compared to the 96% global averaged influence of the prior. 
The observation sensitivity matrix can also be used to investigate the dependency between the measurement location/type and its importance to history matching.","history matching; data assimilation; petroleum reservoir models; observation sensitivity","en","doctoral thesis","","","","","","","","2013-05-22","Electrical Engineering, Mathematics and Computer Science","Applied mathematics","","","",""
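The self-sensitivity diagnostic described in the abstract above can be sketched from ensemble quantities alone, assuming the common influence-matrix form S = C_dd (C_dd + R)^(-1) (the predicted-data covariance projected onto the innovation covariance); the ensemble, the observation-error covariance R and all numbers here are illustrative, not the thesis test case.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ens, n_obs = 100, 8

# Hypothetical ensemble of predicted measurements (from prior model realizations),
# with observations of unequal variability, and a diagonal error covariance R.
d_pred = rng.normal(0.0, 1.0, size=(n_ens, n_obs)) @ np.diag(np.linspace(0.5, 2.0, n_obs))
R = np.eye(n_obs) * 0.5 ** 2

C_dd = np.cov(d_pred.T)                      # predicted-data covariance
S = C_dd @ np.linalg.inv(C_dd + R)           # observation sensitivity (influence) matrix

self_sens = np.diag(S)                       # influence of each observation on its
                                             # own prediction; each value lies in (0, 1)
obs_influence = self_sens.mean()             # global averaged influence of the data
prior_influence = 1.0 - obs_influence        # complementary influence of the prior
```

A self-sensitivity near 1 means the analysis at that observation is dominated by the data; near 0 it is dominated by the prior, which is how a "4% data / 96% prior" split like the one quoted in the abstract can be read off.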
"uuid:23f828ab-1cfe-4da4-9c53-daeebd1908cf","http://resolver.tudelft.nl/uuid:23f828ab-1cfe-4da4-9c53-daeebd1908cf","Feature-based estimation for applications in geosciences","Lawniczak, W.","Heemink, A.W. (promotor); Jansen, J.D. (promotor)","2012","A reservoir simulator mimics the movement of fluids in the presence of each other through a porous medium under some specified conditions. It is a numerical model of a real-life physical process, therefore, subject to uncertainty. Some uncertainties can be lowered by improving model-parameter estimates. This is where data assimilation plays an important role. Automated data assimilation, using sophisticated techniques, is a widely researched topic in today's applied science. We investigated two research topics in data assimilation that are closely connected to the area of image processing. Images are an integral part of reservoir engineering application in the form of property or variable fields. Reservoir engineering, image processing and data assimilation are the leading themes here. First, we applied an ensemble multiscale filter as a permeability estimator and concluded that the filter can be an efficient localizing tool especially for spatially large observations. Second, we developed a grid deformation technique inspired by grid generation and image warping methods. We presented two- and three-dimensional versions of the method in reservoir and groundwater flow models, and concluded that the grid distortion proved cost efficient and effective.","feature-based estimation","en","doctoral thesis","Uitgeverij BOXPress","","","","","","","2012-12-10","Electrical Engineering, Mathematics and Computer Science","Applied mathematics","","","",""
"uuid:80b844d4-02ec-4c94-b132-e38c62e613e5","http://resolver.tudelft.nl/uuid:80b844d4-02ec-4c94-b132-e38c62e613e5","Simulation and Optimization of Foam EOR Processes","Namdar Zanganeh, M.","Rossen, W.R. (promotor); Jansen, J.D. (promotor)","2011","Chemical enhanced oil recovery (EOR) is relatively expensive due to the high cost of the injected chemicals such as surfactants. Excessive use of these chemicals leads to processes that are not economically feasible. Therefore, optimizing the volume of these injected chemicals is of extreme importance. We intend to maximize the long-term cumulative oil production (Qo,cum) through optimizing the volume of the injected surfactant (represented by the switching time between surfactant and gas slugs) in a surfactant-alternating-gas (SAG) process in a 3D reservoir using a commercial simulator. Evaluating the correctness and accuracy of the numerical simulator is an essential step towards achieving reliable results. However, since no analytical solution exists for a real 3D displacement (with gravity), the performance of the simulator in 1D is evaluated against the exact analytical solutions provided by the method of characteristics (MOC). The MOC has proved useful in highlighting key mechanisms and strategies for improving foam performance. We extended the MOC to foam flow with oil and examined the effects of foam quality, initial oil saturation So(I), and foam sensitivity to high oil saturation (So) and low water saturation (Sw) on oil recovery in 1D. In the cases examined, our analysis revealed the following insights. Regardless of whether foam is sensitive to Sw, if foam is destroyed by oil at the initial condition, the displacement is nearly as inefficient as if no foam were present at all. In real foams, foam bubbles collapse at the residual water saturation (Swr) because of high capillary pressure. The failure to represent this mechanism properly in models leads to misleading prediction of success in SAG foam processes. 
Incorporating foam collapse at Swr results in the failure of a gas-injection cycle of a SAG process, regardless of the reservoir initial condition and foam sensitivity to Sw and So, for the relative-permeability models we examined. A foam flood is successful for any initial condition if foam is only weakened (not killed) by low Sw and not affected by So. Based on this study, it is not recommended to start foam EOR at early stages of the reservoir life for a foam formulation that is sensitive to high oil saturation, because high So(I) causes the foam EOR process to fail. Thus, the effect of low Sw and high So on foam must be well understood and represented accurately to avoid spurious decisions leading to failure based on unrealistic foam models and parameter values. The MOC solutions developed earlier are utilized to evaluate the performance of the simulator in 1D. In finding an accurate numerical solution that matches the MOC solution, some displacements were found to be more sensitive to the choice of time-step (Δt) and gridblock size (Δx) than others. For instance, if a part of the solution (e.g., rarefaction wave, constant-state region) is in the proximity of the foam/no-foam boundary at which drastic changes in gas mobility occur, the simulator may exhibit oscillations across the boundary with an improper choice of Δt and Δx and fail to find the correct solution. Moreover, an inappropriate choice of Δt and Δx leads to erroneous results that might be hard to identify in 3D in the absence of the MOC solutions. One needs to look for symptoms, such as gridblocks with unexpectedly high/low saturation/pressure, to identify artifacts and find a proper choice of Δt and Δx by performing sensitivity analysis on these parameters. Insights gained from this analysis led to applying simpler physics for the foam model in the 3D simulations to ensure finding the correct solution.
The effect of the switching time (ts) between surfactant and gas slugs on Qo,cum was examined for 3D simulations of a SAG process in scenarios varying in the active constraint on the injection well and the end-time constraint. For all the scenarios, the highest oil recovery was obtained at a value of ts for which the foam front was on the verge of breaking through to the production well, but had not yet broken through, at the end of the simulation. Moreover, the cumulative oil production was impaired once foam appeared in the production well. Therefore, if foam can be destroyed in the proximity of the production well, the optimal oil recovery increases. On the other hand, for an injection well operating at a constant prescribed bottomhole pressure, injecting surfactant into the reservoir did not necessarily lead to improved Qo,cum over a gas flood. Further, increasing ts did not result in higher Qo,cum under certain conditions. In addition, injecting less gas as a result of increasing ts did not lower Qo,cum on many occasions. An investigation was conducted on the capability of a gradient-based optimization routine applied to foam EOR processes. We concluded that an inappropriate choice of the relative tolerance for the adjoint linear solver was the source of incorrect gradients in our problem, and a very tight relative tolerance was required for the simulator to obtain accurate gradients in certain problems. We applied two types of foam models in this investigation: a linear model introducing gradual changes in gas mobility and a nonlinear model leading to abrupt changes in gas mobility. For the linear foam model (both in 1D and 3D simulations), the local and global trends of the objective function (Qo,cum) were analogous and the optimization routine was capable of finding the optimum switching time (ts,opt).
However, replacing the linear foam model with the nonlinear foam model introduced inconsistencies between the local and global trends of the objective function and fluctuations in the adjoint gradient, in both 1D and 3D simulations. For the nonlinear foam model, the local and global trends were analogous and the adjoint gradient was free of fluctuations only at switching times for which the entire reservoir was swept by foam within the simulation period. For the 1D-nonlinear foam model, the gradient-based optimization routine was not suitable for finding ts,opt, unless the initial guess was larger than ts,opt. For the 3D-nonlinear foam model, there were major differences between the local and global trends of the objective function in the neighborhood of the optima that would seriously challenge the performance of the optimization routine. As a result, a gradient-based optimization routine was not suitable for finding ts,opt. Overall, it is shown that accurate representation of the physics of the process in the simulation model and also careful examination of the mechanisms controlling the displacement process elucidate many valuable aspects of the foam EOR processes. Their inaccurate representation in simulations or neglecting them may result in a prediction of success for a process that will be unsuccessful in a real reservoir. Moreover, formation of foam may introduce abrupt changes in gas mobility that might challenge the performance of the simulator and also the gradient-based optimization routines.","Enhanced Oil Recovery (EOR); Foam; Surfactant Alternating Gas (SAG); Method of Characteristics (MOC); Gradient-Based Optimization; Adjoint-Based Optimization","en","doctoral thesis","Proefschriftmaken.nl || Printyourthesis.com","","","","","","","2012-01-01","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:636ac4f8-125b-4104-a720-27e3338ccd09","http://resolver.tudelft.nl/uuid:636ac4f8-125b-4104-a720-27e3338ccd09","Model-reduced gradient-based history matching","Kaleta, M.P.","Heemink, A.W. (promotor); Jansen, J.D. (promotor)","2011","Since the world's energy demand increases every year, the oil & gas industry makes a continuous effort to improve fossil fuel recovery. Physics-based petroleum reservoir modeling and the closed-loop model-based reservoir management concept can play an important role here. In this concept measured data are used to improve the geological model, while the improved model is used to increase the recovery from a field. Both problems can be formulated as optimization problems, i.e. history matching identifies the parameter values that minimize an objective function that represents the mismatch between modeled and observed data, while production optimization identifies well controls that maximize the total oil recovery or monetary profit. One of the most efficient classes of methods to solve history matching and production optimization problems is gradient-based methods where the gradients are calculated with the use of an adjoint method. The implementation of the adjoint method for parameter estimation and control optimization is, however, very difficult if no Jacobians of the model are available. This implies that there is a need for gradient-based, but adjoint-free optimization methods. This requirement becomes even more pressing if reservoir simulation is combined with another simulation, e.g. simulation of geomechanics or rock physics, with a code for which no Jacobians are available. The research objective of this thesis was to evaluate the performance of a model-reduced gradient-based history matching routine that does not require a difficult implementation and involves the reduction of the reservoir system.
Additionally, the use of the model-reduced method for production optimization of a reservoir operating under induced fracturing conditions was considered. In history matching problems one deals with a large number of uncertain parameters and very sparse observations, while in production optimization one controls a large dimensional system by adjusting a limited number of controls. Consequently, the values of many model parameters cannot be verified with measurements due to the relatively low information content present in them, while in production optimization only a limited part of the system can indeed be controlled. In this thesis we proposed a new method inspired by the results in reduced order modeling (ROM) and system-theoretical concepts of controllability and observability of the reservoir system. The new approach assumes that the reservoir dynamics relevant for history matching or production optimization can be represented accurately by a much smaller number of variables than the number of grid cells used in the simulation model. Consequently, the original (nonlinear and high-order) forward model is replaced by a linear reduced-order forward model and the adjoint of the tangent linear approximation of the original forward model is replaced by the adjoint of a linear reduced-order forward model. The reduced-order model is constructed by means of the Proper Orthogonal Decomposition (POD) method or the Balanced Proper Orthogonal Decomposition (BPOD) method. The reduced-order model is not, however, obtained by the projection of the nonlinear system of equations as in the conventional projection-based ROM techniques, but instead it is approximated in the reduced subspace. The conventional POD method requires the availability of the high-order tangent model, i.e. of the Jacobians with respect to the states, which are not available.
The model-reduced method obtains a reduced-order approximation of the tangent linear model directly by computing approximate derivatives of the reduced-order model. Then, due to the linear character of the reduced model, the corresponding adjoint model is easily obtained. The gradient of the objective function is approximated and the minimization problem is solved in the reduced space; the procedure is iterated with the updated estimate of the parameters if necessary. The POD-based approach is adjoint-free and can be used with any reservoir simulator, while the BPOD-based approach requires an adjoint model but does not require the Jacobians of the model with respect to uncertain parameters or controls. At first, the model-reduced method was applied to history matching problems and was evaluated based on its computational efficiency and robustness. In order to make a valuable judgment this approach was compared to the classical adjoint-based method, which was available for the estimation of the permeability field. Permeabilities are described at each cell of the model, and therefore they need to be re-parameterized. The KL-expansion was used to reduce the parameter space. The significant reduction of the dimension of the dynamic reservoir model and parameter space made the approximation of the reduced-order system feasible in acceptable computation time. The pressure field required a relatively low number of patterns, which mostly modeled the changes around the wells. The saturation field required many more patterns, which mostly modeled the moving front of the saturation field. In the first studies, simplistic reservoir models were used, for which the model-reduced approach was shown to perform very well.
The obtained estimates of the permeability field improved significantly compared to the prior fields and gave acceptable history matches; the quality of the prediction capabilities of the estimated models was very high and comparable to that obtained by the classical adjoint-based approach. The POD-based method was approximately twice as expensive as the classical approach, but the BPOD-based method was comparable to the adjoint-based method. Moreover, both methods were considerably cheaper than the finite difference approach. These preliminary results were the first applications of model-order reduction to history matching problems. After this proof of concept, further studies were carried out on more complex and larger models. The proposed method was capable of obtaining a satisfactory match with a computational efficiency about five times lower than that of the adjoint-based method. Similarly, an improvement in the prediction was obtained. The second problem considered in this research was to apply the adjoint-free methods to production optimization of a reservoir operating under special conditions that required coupling of two simulators and for which the adjoint code is not available. The model-reduced method could not be applied because of the low accuracy of the simulation solution, which in the case of long-time simulations resulted in large approximation errors. Therefore, the simultaneous perturbation stochastic approximation (SPSA) algorithm was applied together with the finite difference gradient-based method to solve the production optimization problem. SPSA is a gradient-based method where the gradients are approximated by random perturbations of all controls at once, while the finite difference method approximates the gradients by perturbing each control separately.
Both approaches were very simple to implement and resulted in improved production, but they were computationally relatively expensive.","reduced order modeling; history matching; data assimilation; adjoint-free; petroleum reservoir models","en","doctoral thesis","Wohrmann Print Service","","","","","","","2012-07-04","Electrical Engineering, Mathematics and Computer Science","Applied mathematics","","","",""
"uuid:98fcf37a-4d51-4d74-88fc-995e581ddb88","http://resolver.tudelft.nl/uuid:98fcf37a-4d51-4d74-88fc-995e581ddb88","Control-Relevant Upscaling","Vakili Ghahani, S.A.","Jansen, J.D. (promotor)","2010","An ‘upscaling/order-reduction’ solution transfers the relevant features of a geological model to a flow simulation model such that cost-efficient simulation, prediction and control of the fluid flow in an oil reservoir become feasible. In addition to the computational issues, in most reservoir applications and for a given configuration of wells, there is only a limited amount of information (output) that can be observed from production data, while there is also a limited amount of control (input) that can be exercised by adjusting the well parameters. From a system-theoretical point of view, this means that a large number of combinations of the state variables (pressure and saturation values) are not actually controllable and observable from the wells, and accordingly, they are not affecting the input-output behavior of the system. In this research, therefore, we aim at adjusting (reducing) the level of model complexity (order) to the level of relevant dynamics in terms of input-output behaviour. In particular, we present a multi-level selective (i.e. non-uniform) grid coarsening method, in which the criterion for grid size adaptation is based on the spatial quantification of the controllability and observability properties of the reservoir system. Based on the numerical examples, this method can accurately reproduce the flow response of the fine scale models.","upscaling; reservoir simulation; controllability and observability","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","","",""
"uuid:2cfad6e4-e705-4483-806e-cea838d03479","http://resolver.tudelft.nl/uuid:2cfad6e4-e705-4483-806e-cea838d03479","Model Structure Analysis of Model-based Operation of Petroleum Reservoirs","Van Doren, J.F.M.","Van den Hof, P.M.J. (promotor); Jansen, J.D. (promotor)","2010","The demand for petroleum is expected to increase in the coming decades, while the production of petroleum from subsurface reservoirs is becoming increasingly complex. To meet the demand, petroleum reservoirs should be operated more efficiently. Physics-based petroleum reservoir models that describe the flow in subsurface porous media can play an important role here. In this thesis possibilities are investigated to determine, on the one hand, models with a complexity that is suitable for model-based operation (i.e. the relevant dynamic processes can be adequately described), and, on the other hand, models that only contain parameters that can be validated by measurements (in this thesis the pressure and phase-rate measurements in the wells). The most relevant dynamics of the model are determined by the controllability and observability properties. These indicate that reservoir models behave as models of much lower order than the currently used models, and that reduced-order reservoir models should, for fixed well positions, focus on correctly modeling the fluid front(s). In the second part, identifiability and structural identifiability have been quantified and used to determine which (physical) model parameters can be reliably estimated from measurement data. From the analysis it was concluded that the parameters of reservoir models are not identifiable from production measurements and that they are largely based on qualitative geological information.
Pressure measurements only contain information about grid block permeabilities in an area close to the wells in which the measurements are taken, and phase-rate measurements contain, after water breakthrough, only information about grid block permeabilities in the area between the injection and production wells. This supports the need to use information from other measurement types, such that better model-based decisions can be taken to make the operation of petroleum reservoirs more efficient.","petroleum reservoir engineering; controllability; observability; identifiability","en","doctoral thesis","","","","","","","","2011-06-14","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","",""
"uuid:2d0316df-3b66-451a-8de1-dac5946602c5","http://resolver.tudelft.nl/uuid:2d0316df-3b66-451a-8de1-dac5946602c5","Hydrocarbon Reservoir Parameter Estimation Using Production Data and Time-Lapse Seismic","Przybysz-Jarnut, J.K.","Jansen, J.D. (promotor); Gisolf, A. (promotor)","2010","The numerical simulation of hydrocarbon reservoir flow is necessarily an approximation of the flow in the real reservoir. The knowledge about the reservoir is limited and some of the processes occurring are either not taken into account or not described in an adequate way. The parameters influencing the flow are usually not known, except in a few well locations where they can be measured quite accurately. This knowledge, however, is too limited to describe the key reservoir properties in the domain of interest. Because of these imperfections, the data gathered from a real field usually do not agree with the numerical simulation results. These data may therefore be used as input for an inversion process to update the most uncertain parameters of the numerical model, a process known as computer-assisted history matching. Typical uncertain parameters are (grid block) permeabilities and porosities, fault transmissibilities, aquifer strength or other reservoir or fluid properties. Different data sets can be used for the purpose of parameter estimation. Production data obtained from wells were shown to have a limited resolving power. They provide some information about parameters in the neighborhood of wells, but not further away from them. However, due to developments in geophysics, especially in the field of seismics, a new data set has become available, namely time-lapse seismic, which can be used together with production data in the history matching process. This thesis focuses mainly on the incorporation of interpreted time-lapse seismic data in the form of time-lapse seismic density changes in computer-assisted history matching.
For this purpose a particular variational data assimilation method was chosen, namely the representer method. Two-dimensional two-phase (oil-water) flow is considered in this thesis and the uncertain parameters are formed by the grid block permeabilities. The influence of seismic data on the final parameter estimate is analyzed for two synthetic examples. Although seismic data give full-field coverage, not all seismic measurements are used in the inversion process. An a priori choice is made of the locations and number of seismic data used in the assimilation, and only data at the saturation front moving over time are utilized, as they are considered to be the most informative ones. To investigate the influence of the prior knowledge on the estimation results, different correlation structures of the uncertain parameters are imposed and their impact on the final estimates is assessed. Additionally, some attention is paid to the cases in which wrong prior information is utilized during assimilation. The estimation results are assessed in terms of the quality of the history match (mismatch between ‘true’ and simulated measurements), and in terms of predictions of water breakthrough time and water flow rates in producers after the history matching period.","data assimilation; time-lapse seismic; history matching","en","doctoral thesis","","","","","","","","2011-06-01","Applied Sciences","Department Imaging Science & Technology","","","",""
"uuid:6879a75b-ac5c-4ebd-b4ac-d958fa03003a","http://resolver.tudelft.nl/uuid:6879a75b-ac5c-4ebd-b4ac-d958fa03003a","System-Theoretical Model Reduction for Reservoir Simulation and Optimization","Markovinovic, R.","Jansen, J.D. (promotor)","2009","This thesis is concerned with low-order modelling of heterogeneous reservoir systems for the purpose of efficient simulation and optimization of flooding processes with multiple injection and production (smart) wells. Typically, one is initially equipped with a physics-based ('white-box') model consisting of O(10^3-10^6) equations and parameters representing a (coupled) system of discretized PDEs defined on a geometric grid. The model-order reduction (MOR) methodology undertaken in this research is fundamentally different from the traditional, 'grid-coarsening' approximation methods, in that no coarse-grid approximation of the fine-grid problem is employed at all. Instead, the reduced-order models are here based on 'system-theoretic' and dynamically intrinsic properties of the fine-scale system. In single-phase flow problems that can be modelled as linear time-invariant state-space systems these properties are, e.g., the system's transfer function in the Laplace domain, the eigenstructure of the system matrix, or controllability and observability of the (particular state-space realization of the) system. For multi-phase flow problems resulting in nonlinear state-space models, intrinsic information needs to be sought in data obtained by simulating the fine-scale model. The contribution of this thesis can be divided into three themes: 1) Standard 'projection-based' MOR: assessment of the performance of modal truncation, singular perturbation, balanced truncation, transfer function moment matching (incl.
Krylov-subspaces), and proper orthogonal decomposition (POD), 2) Acceleration of solving the fine-scale problem: use of MOR as a 'shadow simulation' to determine an improved fine-scale initial guess, and 3) Acceleration of waterflooding optimization: use of POD in the inner-loop of an adjoint-based optimization scheme.","petroleum; reservoir engineering; systems and control theory; model reduction; simulation-optimization; iterative numerical analysis","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:ab6bf390-e38a-417d-853e-889bb0303446","http://resolver.tudelft.nl/uuid:ab6bf390-e38a-417d-853e-889bb0303446","Data assimilation in reservoir management","Rommelse, J.R.","Jansen, J.D. (promotor); Heemink, A.W. (promotor)","2009","The research presented in this thesis aims at improving computer models that allow simulations of water, oil and gas flows in subsurface petroleum reservoirs. This is done by integrating, or assimilating, measurements into physics-based models. In recent years petroleum technology has developed rapidly. Nowadays wells can be drilled to a depth of up to 10 km, not just vertically, but also at an angle, horizontally or with branches. Moreover, downhole valves can be installed which can be opened or closed from the surface, and advanced sensors can be placed in the subsurface. This technology has the potential to drain petroleum reservoirs much more efficiently. In order to do so, the technology needs to be used sensibly, which requires adequate knowledge of subsurface physical processes. Large amounts of measurements can contribute to this, but conventional methods are often ad hoc and not suited to handle the large amounts of data that are available nowadays. Good ""data assimilation"" methods are very important to ensure that the growing demand for energy in the near future can be met. The objective of this thesis is to apply data assimilation techniques, invented and developed in other areas of research, to petroleum reservoir engineering, to modify them to be better suited for their new application, and to investigate how they can help to integrate both production data and seismic data to support decision-making in petroleum reservoir management.","petroleum reservoir management; history-matching; data assimilation; filters and variational methods; production and seismic data","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:20b5a4b5-6419-4593-a668-48074982bcb3","http://resolver.tudelft.nl/uuid:20b5a4b5-6419-4593-a668-48074982bcb3","Model-based lifecycle optimization of well locations and production settings in petroleum reservoirs","Zandvliet, M.J.","Bosgra, O.H. (promotor); Jansen, J.D. (promotor)","2008","In the coming years there is a need to increase production from petroleum reservoirs, and there is an enormous potential to do so by increasing the recovery factor. This is possible by making better use of recent technological developments, such as horizontal wells, downhole valves and sensors. However, actually making better use of these improved capabilities is difficult because of many open problems in reservoir management and production operations processes. Consequently, there is significant scope to increase the recovery factor of oil and gas fields by tailoring tools from the systems and control community to efficiently perform dynamic optimization of wells (e.g. number, locations) and their production settings (e.g. bottom-hole pressures, flow rates, valve settings) based on uncertain reservoir models, in the sense that they lead to good decisions while requiring limited time from the user. This thesis aims at developing these tools, and the main contributions are as follows. Many production setting optimization problems can be written as optimal control problems that are linear in the control. If the only constraints are upper and lower bounds on the control, these problems can be expected to have pure bang-bang optimal solutions. The adjoint method to derive gradients of a cost function with respect to production settings can be combined with robust optimization to efficiently compute settings that are robust against uncertainty in reservoir models.
The gradients used in production setting optimization can be used to efficiently compute directions in which to iteratively improve upon an initial well configuration by surrounding the to-be-placed wells with pseudo wells (i.e. wells that operate at a negligible rate). The controllability and observability properties of a single-phase flow reservoir model are analyzed. It is shown that pressures near wells in which we can control the flow rate or bottom-hole pressure are controllable, whereas pressures near wells in which we can measure the flow rate or bottom-hole pressure are observable. Finally, a new method of regularization in history matching is presented, based on this controllability and observability analysis.","petroleum; reservoir engineering; systems and control; optimization","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","",""