1 

Timber-concrete composite floor systems
Timber-concrete composite (TCC) beams may be used for the renovation of old timber floors. Although these systems are a simple and practical solution, they are not widely adopted. One of the reasons for this is the lack of uniform design rules. In this research programme, shear tests on four different fastener types were performed, as well as bending tests on TCC beams manufactured with these fasteners. A simulation model was built that is able to perform a Monte Carlo simulation on single TCC beams and TCC floor systems. The model was successfully verified against the bending tests before other simulations were performed. Several geometries were simulated, resulting in a statistical distribution of the load-carrying capacity of each geometry. The 5-percentile characteristic load-carrying capacity of each geometry was determined, and it was found that the 5-percentile load-carrying capacity can also be calculated using a linear model. To do so, the mean values for MoE and fastener stiffness and the 5-percentile bending strength of the timber beam should be used.


2 

Numerical investigation of turbomolecular pumps using the direct simulation Monte Carlo method with moving surfaces
A new approach for performing numerical direct simulation Monte Carlo (DSMC) simulations on turbomolecular pumps in the free molecular and transitional flow regimes is described. The chosen approach is to use surfaces that move relative to the grid to model the effect of rotors and stators on a gas flow. The current article describes the method and compares the results to experimental and theoretical data by Sawada [Bull. JSME 22, 362 (1979)]. The agreement between our results and Sawada's results is excellent. © 2009 American Vacuum Society.


3 

A Quantized Analog Delay for an irUWB Quadrature Downconversion Autocorrelation Receiver
A quantized analog delay is designed as a requirement for the autocorrelation function in the quadrature downconversion autocorrelation receiver (QDAR). The quantized analog delay comprises a quantizer, multiple binary delay lines and an adder circuit. As the foremost element, the quantizer consists of a series of comparators, each one comparing the input signal to a unique reference voltage. The comparator outputs connect to binary delay lines, which are cascades of synchronized D-latches. The outputs available at each line are linked together to reconstruct the incoming signal using an adder circuit. For a delay time of 550 ps, simulation results in IBM's CMOS 0.12 μm technology show that the quantized analog delay requires a total current of 36.7 mA at a 1.6 V power supply. Furthermore, delays in the range of several nanoseconds are feasible at the expense of power. A Monte Carlo simulation makes evident that the response of the quantized analog delay does not suffer drastically from either process or component mismatch variations.


4 

Assessing reasonable worst-case full-shift exposure levels from data of variable quality
Exposure assessors involved in regulatory risk assessments often need to estimate a reasonable worst-case full-shift exposure level from very limited exposure information. Full-shift exposure data of very high quality are rare. A full-shift value can also be calculated from (short-term) task-based values, either derived from measured data or from models. The simplest option is to use the task-based exposure levels as the full-shift value. A second option is to calculate a time-weighted average (TWA), using (reasonable worst-case) estimates of the duration and the exposure level of the relevant tasks. The third option is to use a Monte Carlo analysis with estimated input distributions for exposure level and duration of exposure. If an estimated distribution of respiratory volume is also included, this leads to a distribution of inhaled amounts. The 90th percentile of such a distribution is generally substantially lower than the fixed point estimates calculated using high-end values for each parameter. This technique can thus prevent unnecessarily conservative estimates in risk assessment. The output distribution can also be used as valuable input to the risk management process, because it provides information on probabilities of exposure levels that can influence the cost-benefit analysis of the risk management process. Finally, the sensitivity analysis of a Monte Carlo simulation can give guidance for further studies to increase the accuracy of the exposure assessment.
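The Monte Carlo option described above can be sketched in a few lines; the input distributions, task parameters and shift length below are hypothetical illustrations, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical input distributions (illustrative only):
# task exposure level (mg/m^3), task duration (h) within an 8-h shift,
# and respiratory volume rate (m^3/h).
level = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)
duration = rng.uniform(1.0, 4.0, size=n)
vent_rate = np.clip(rng.normal(1.25, 0.15, size=n), 0.5, None)

# 8-h time-weighted average concentration (zero exposure outside the task)
twa = level * duration / 8.0
# inhaled amount during the task (mg)
inhaled = level * duration * vent_rate

p90_twa = np.percentile(twa, 90)
# fixed point estimate combining high-end (95th percentile) values per input
fixed = np.percentile(level, 95) * np.percentile(duration, 95) / 8.0
print(f"90th percentile of TWA: {p90_twa:.2f} mg/m^3")
print(f"high-end point estimate: {fixed:.2f} mg/m^3")
```

As the abstract notes, the 90th percentile of the simulated distribution falls below the point estimate built from high-end values of every parameter at once.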


5 

Full-size testing of sheet pile walls
Azobé (Lophira alata) is widely used in timber sheet pile walls in the Netherlands. The boards in these walls are coupled, and therefore load-sharing can be expected. A simulation model based on the finite element package DIANA (DIANA, 1992) was developed, with which load-sharing could be calculated. To check these simulations, full-size tests on single boards and on sheet pile walls containing five boards were performed. The test programme so far has shown that load-sharing exists and that the characteristic strength of the sheet pile walls tested is about 23% higher than the characteristic strength of the single boards. The simulation model predicted a load-sharing factor of 1.15 based on brittle material behaviour. Tests on single boards showed a more plastic behaviour. After completion of the tests and an adaptation of the model, other geometries will be calculated to see whether the load-sharing factor obtained from the tests is also applicable to other geometries and species.


6 

Speed-Up of the Monte Carlo Method by Using a Physical Model of the Dempster-Shafer Theory
By using the Monte Carlo method, we can obtain the minimum value of a function V(r) that is generally associated with the potential energy. In this paper we present a method that makes it possible to speed up the classical Monte Carlo method. The new method is based on the observation that the Boltzmann transition probability and the concept of local thermodynamic equilibrium give rise to an initial state of maximum entropy, which is subsequently modified by using information on the internal structure of the system. The classical thermodynamic model does not take into account any structures inside the system, and therefore in many cases does not accurately model the system itself. In an attempt to take into account the internal structure of the system, we propose a physical model of the belief measure as defined in the Dempster-Shafer theory. Resconi's recent discovery of an algorithm to calculate the probability distribution previously developed by Harmanec and Klir, which is consistent with the belief measure, opens the way to utilizing the Boltzmann distribution not only with a uniform distribution of probability, but with an arbitrary distribution of probability to guide the Monte Carlo iterative method to the global minimum value of the potential energy. Starting from local thermodynamic equilibrium (i.e., local symmetry), the algorithm computes a new distribution over subsystems, resulting in a non-uniform distribution and in symmetry breaking. In the general case one can start with a different initial distribution induced by other local symmetries, corresponding to specific differential equations (e.g., the Fokker-Planck equation), and calculate from this the global distribution corresponding to the breaking of local symmetry. © 1998 John Wiley & Sons, Inc.


7 

Application of HLA in the Optimization of Rail Transport
The Dutch rail infrastructure manager – ProRail – utilizes simulation to perform research in a number of areas of rail transport. One area that is of particular interest is the analysis of Dynamic Traffic Management (DTM) of trains. The aim of DTM is to manage plan/timetable deviations in daily train operations effectively in order to improve the overall performance of the train service. To perform this analysis, two existing (legacy) systems, FRISO and TMS, have been connected via the High Level Architecture (HLA). FRISO is a train simulator that is used to investigate rail transport in an area of several (tens of) kilometers. TMS, a further development of the controller system COMBINE, is an advanced traffic control system that is used to predict and minimize route conflicts between trains in order to improve efficiency, reliability and quality in rail transport. Analysis involves the execution of stochastic (Monte Carlo) simulation, for which conservative Time Management is a critical issue. This paper focuses on the design of the TMS-FRISO federation, the federation agreements made, the lessons learned in making both legacy systems HLA-enabled, and possible future improvements. Within this project, TNO provided distributed simulation expertise and HLA software tools, such as the TNO-RTI and the RCI middleware with code generator.


8 

Time-dependent inversion of surface subsidence due to dynamic reservoir compaction
We introduce a novel, time-dependent inversion scheme for resolving temporal reservoir pressure drop from surface subsidence observations (from leveling or GPS data, InSAR, tiltmeter monitoring) in a single procedure. The theory is able to accommodate both the absence of surface subsidence estimates at sites at one or more epochs and the introduction of new sites at any arbitrary epoch. Thus, all observation sites with measurements from at least two epochs are utilized. The method uses both the prior model covariance matrix and the data covariance matrix, which incorporate the spatial and temporal correlations between model parameters and between data, respectively. The incorporation of the model covariance implicitly guarantees smoothness of the model estimate, while maintaining specific geological features like sharp boundaries. Taking these relations into account through the model covariance matrix enhances the influence of the data on the inverted model estimate. This leads to a better defined and more interpretable model estimate. The time-dependent aspect of the method yields a better constrained model estimate and makes it possible to identify nonlinear acceleration or delay in reservoir compaction. The method is validated by a synthetic case study based on an existing gas reservoir with a highly variable transmissibility at the free water level. The prior model covariance matrix is based on a Monte Carlo simulation of the geological uncertainty in the transmissibility. © International Association for Mathematical Geology 2008.


9 

Extending the COVAD toolbox to accommodate system nonlinearities
The COVAD toolbox is a MATLAB/Simulink based tool conceived and developed for the rapid analysis and simulation of stochastically driven dynamic systems. In addition to a generic Monte Carlo capability, the toolbox is also supported by traditional analytical techniques such as the adjoint and covariance analysis methods. However, these latter techniques only apply to linear systems. The objective of this paper is to explore the feasibility of extending COVAD to enhance its non-Monte Carlo capability for the analysis of nonlinear systems. The present investigation focuses on the use and implementation of a well-known linearization technique known as the statistical linearization method. This technique has been used in the past in conjunction with the standard adjoint and covariance analysis methods to provide sufficiently accurate solutions to certain classes of nonlinear problems. Calculating the miss-distance statistics of the homing loop of a generic guided missile under acceleration limiting is then used as an example to demonstrate the utility of the software. Copyright © 2009 by the American Institute of Aeronautics and Astronautics, Inc.


10 

A Delay Filter for an irUWB Front-End
A continuous-time analog delay is designed as a requirement for the autocorrelation function in the quadrature downconversion autocorrelation receiver (QDAR). An eighth-order Padé approximation of its transfer function is selected to implement this delay. Subsequently, the orthonormal form is adopted, which is intrinsically semi-optimized for dynamic range, has low sensitivity to component mismatch and high sparsity, and whose coefficients can be physically implemented. Each coefficient in the state-space description of the orthonormal ladder filter is implemented at circuit level using a novel two-stage gm cell employing negative feedback. Simulation results in IBM's BiCMOS 0.12 μm technology show that this delay filter requires a total current of 70 mA at a 1.6 V power supply. The 1-dB compression point of the delay is at 565 mV and the SNR is 47.5 dB. A Monte Carlo simulation makes evident that the response of the frequency-selective analog delay does not suffer drastically from either process variations or component mismatch.


11 

Evaluation of micronozzle performance through DSMC, Navier-Stokes and coupled DSMC/Navier-Stokes approaches
Both the particle-based Direct Simulation Monte Carlo (DSMC) method and a compressible Navier-Stokes based continuum method are used to investigate the flow inside micronozzles and to predict the performance of such devices. For the Navier-Stokes approach, both slip and no-slip boundary conditions are applied. Moreover, the two methods have been coupled to be used together in a hybrid particle-continuum approach: the continuum domain was investigated by solving the Navier-Stokes equations with a slip wall boundary condition, whereas the region of rarefied flow was studied by DSMC. The location where the domain was split was shown to have a great influence on the prediction of the nozzle performance. © 2009 Springer Berlin Heidelberg.


12 

A Bayesian modeling approach for estimation of a shape-free groundwater age distribution using multiple tracers
Due to the mixing of groundwaters with different ages in aquifers, groundwater age is more appropriately represented by a distribution rather than a scalar number. To infer a groundwater age distribution from environmental tracers, a mathematical form is often assumed for the shape of the distribution, and the parameters of the mathematical distribution are estimated using deterministic or stochastic inverse methods. The prescription of the mathematical form limits the exploration of the age distribution to the shapes that can be described by the selected distribution. In this paper, the use of free-form histograms as groundwater age distributions is evaluated. A Bayesian Markov chain Monte Carlo approach is used to estimate the fraction of groundwater in each histogram bin. The method was able to capture the shape of a hypothetical gamma distribution from the concentrations of four age tracers. The number of bins that can be considered in this approach is limited by the number of tracers available. The histogram method was also tested on tracer data sets from Holten (The Netherlands; 3H, 3He, 85Kr, 39Ar) and the La Selva Biological Station (Costa Rica; SF6, CFCs, 3H, 4He and 14C), and compared to a number of mathematical forms. According to standard Bayesian measures of model goodness, the best mathematical distribution performs better than the histogram distributions in terms of the ability to capture the observed tracer data relative to their complexity. Among the histogram distributions, the four-bin histogram performs better in most of the cases. The Monte Carlo simulations showed strong correlations in the posterior estimates of bin contributions, indicating that these bins cannot be well constrained using the available age tracers.
The fact that mathematical forms overall perform better than the free-form histogram does not undermine the benefit of the free-form approach, especially for cases where a larger amount of observed data is available and the real groundwater age distribution is more complex than can be represented by simple mathematical forms.


13 

Patient dosimetry in abdominal arteriography
This study aims at accurate quantification of x-ray exposure and effective dose to the patient in abdominal arteriography. Using an automatic monitoring system, all relevant exposure parameters were determined during 172 abdominal arteriographies. Common projections were extracted for a 'normal' reference group of procedures and used in Monte Carlo calculations of dose-area product to organ dose conversion coefficients. Dose-area product, organ doses and effective dose were quantified for intravenous and intra-arterial procedures. The large data sets describing exposure could be condensed to a set of 28 common views. New coefficients to convert dose-area product to organ equivalent dose and effective dose were calculated for nine views contributing approximately 80% to the total dose-area product. The average dose-area product was 32 Gy cm2 in intravenous procedures and 47 Gy cm2 in intra-arterial procedures. The corresponding average effective doses to the patient were 4 mSv and 6 mSv respectively (range 2-12 mSv, the actual value depending on procedure type and gender). It is concluded that automatic monitoring of x-ray exposure parameters, complemented by the calculation of Monte Carlo organ dose conversion coefficients, is a feasible and promising approach to accurate dosimetry of complex arteriographic procedures.
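The conversion step works per view: each view's dose-area product is multiplied by a view-specific conversion coefficient and the contributions are summed. The sketch below uses invented view names, DAP values and coefficients, not the coefficients calculated in the study:

```python
# Per-view dose-area product (DAP) and effective-dose conversion coefficients.
# All numbers are hypothetical illustrations.
views = {
    # view name: (DAP in Gy cm2, conversion coefficient in mSv per Gy cm2)
    "PA abdomen": (12.0, 0.18),
    "LAO 30":     (20.0, 0.11),
    "lateral":    (15.0, 0.07),
}

total_dap = sum(dap for dap, _ in views.values())
# effective dose = sum over views of DAP x view-specific coefficient
effective_dose = sum(dap * coeff for dap, coeff in views.values())
print(f"total DAP: {total_dap:.0f} Gy cm2, effective dose: {effective_dose:.2f} mSv")
```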


14 

A pseudo-statistical approach to treat choice uncertainty: the example of partitioning allocation methods
Purpose: Despite efforts to treat uncertainty due to methodological choices in life cycle assessment (LCA), such as standardization, one-at-a-time (OAT) sensitivity analysis, and analytical and statistical methods, no method exists that propagates this source of uncertainty for all relevant processes, simultaneously with data uncertainty, through LCA. This study aims to develop, implement, and test such a method, for the particular example of the choice of partitioning methods for allocation in LCA, to be used in LCA calculations and software. Methods: Monte Carlo simulations were used jointly with the CMLCA software to propagate uncertainty due to the choice of allocation method, together with uncertainty of unit process data, into distributions of LCA results. In this study, a methodological preference is assigned to each partitioning method applicable to multifunctional processes in the system. The allocation methods are sampled per process according to these preferences. A case study on rapeseed oil, focusing on three greenhouse gas (GHG) emissions and their global warming impacts, is presented to illustrate the method developed. The results of the developed method are compared with those for the same case, similarly quantifying uncertainty of unit process data but accompanied by separate scenarios for the different partitioning choices. Results and discussion: The median of the inventory flows (emissions) for separate scenarios varies due to the partitioning choices and unit process data uncertainties. Inventory variations are reflected in the global warming results. Results for the approach of this study vary with the methodological preference assigned to the different allocation methods per multifunctional process and with the continuous distribution of unit process data. The method proved feasible and implementable. However, absolute uncertainties only further increased.
Therefore, further research should address relative uncertainties, which are more relevant for comparative LCAs. Conclusions: Propagation of uncertainties due to the choice of partitioning methods and to unit process data into LCA results is enabled by the proposed method, while capturing variability due to both sources. It is a practical proposal to tackle unresolved debates about partitioning choices, increasing the robustness and transparency of LCA results. Assigning a methodological preference to each allocation method of multifunctional processes in the system enables pseudo-statistical propagation of uncertainty due to allocation. Involving stakeholders in determining these methodological preferences allows for participatory approaches. Eventually, this method could be expanded to also cover other ways of dealing with allocation and other methodological choices in LCA. © 2015, The Author(s).


15 

Effects of using Synthesized Driving Cycles on Vehicle Fuel Consumption
Creating a driving cycle (DC) for the design and validation of new vehicles is an important step that will influence the efficiency, functionality and performance of the final systems. In this work, a DC synthesis method based on a multi-dimensional Markov chain is introduced, in which both the velocity and the road slope are investigated. In particular, improvements to the DC synthesis method are proposed to reach a more realistic slope profile and more accurate fuel consumption and CO2 emission estimates. The effects of using synthesized DCs on fuel consumption are investigated for three different vehicle models: a conventional ICE vehicle, and full hybrid and mild hybrid electric vehicles. Results show that short but representative synthetic DCs result in more realistic fuel consumption estimates (e.g. in the 5-10% range) and in much faster simulations. Using this proposed method also eliminates the need to use very simplified DCs, such as the New European Driving Cycle (NEDC), or long measured DCs.
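The core of Markov-chain cycle synthesis, reduced to velocity only, can be sketched as follows. The "measured" trace, bin width and cycle length are invented for illustration; the paper's method additionally models road slope as part of the state:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented 'measured' velocity trace (km/h); a real application would use
# logged driving data and a joint velocity/slope state space.
t = np.arange(600)
measured = np.clip(30 + 20 * np.sin(t / 30) + rng.normal(0, 3, t.size), 0, None)

# Discretize velocity into 5 km/h bins (the Markov states) and count transitions
bins = np.arange(0, 70, 5)
states = np.digitize(measured, bins)
n_states = bins.size + 1
counts = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1.0
for i in range(n_states):          # unseen states get a self-loop so every
    if counts[i].sum() == 0:       # row of the transition matrix is a valid
        counts[i, i] = 1.0         # probability distribution
P = counts / counts.sum(axis=1, keepdims=True)

# Synthesize a new cycle by sampling the chain, using bin midpoints as velocity
s = states[0]
synth = []
for _ in range(600):
    s = rng.choice(n_states, p=P[s])
    synth.append((bins[s - 1] if s >= 1 else 0.0) + 2.5)
synth = np.array(synth)
print(f"mean velocity: measured {measured.mean():.1f}, synthetic {synth.mean():.1f} km/h")
```

A representative synthetic cycle reproduces the transition statistics, and hence the aggregate behaviour, of the measured data in a much shorter trace.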


16 

Uncertainty propagation of arbitrary probability density functions applied to upscaling of transmissivities
In many fields of study, and certainly in hydrogeology, uncertainty propagation is a recurring subject. Usually, parametrized probability density functions (PDFs) are used to represent data uncertainty, which limits their use to particular distributions. Often, this problem is solved by Monte Carlo simulation, with the disadvantage that one needs a large number of calculations to achieve reliable results. In this paper, a method is proposed based on a piecewise linear approximation of PDFs. The uncertainty propagation with these discretized PDFs is distribution independent. The method is applied to the upscaling of transmissivity data, and carried out in two steps: the vertical upscaling of conductivity values from borehole data to aquifer scale, and the spatial interpolation of the transmissivities. The results of this first step are complete PDFs of the transmissivities at borehole locations reflecting the uncertainties of the conductivities and the layer thicknesses. The second step results in a spatially distributed transmissivity field with a complete PDF at every grid cell. We argue that the proposed method is applicable to a wide range of uncertainty propagation problems. © 2015, The Author(s).
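The idea of propagating discretized, distribution-independent PDFs can be sketched for the sum of two independent quantities, where the PDF of the sum is the convolution of the two input PDFs. The two input shapes below are invented for illustration:

```python
import numpy as np

dx = 0.01
x = np.arange(0.0, 10.0, dx)

# Two arbitrary, non-parametric PDFs on a common grid (illustrative shapes):
# a triangular density and a bimodal density that no single named
# distribution family would represent well.
f1 = np.interp(x, [1.0, 2.0, 4.0], [0.0, 1.0, 0.0])
f2 = np.exp(-((x - 1.0) ** 2) / 0.1) + 0.5 * np.exp(-((x - 3.0) ** 2) / 0.3)
f1 /= f1.sum() * dx   # normalize so each integrates to 1
f2 /= f2.sum() * dx

# PDF of the sum of the two independent quantities = convolution of the PDFs
fsum = np.convolve(f1, f2) * dx
xs = np.arange(fsum.size) * dx   # grid of the sum (both inputs start at 0)

mean1 = (x * f1).sum() * dx
mean2 = (x * f2).sum() * dx
mean_sum = (xs * fsum).sum() * dx
print(f"E[T1+T2] = {mean_sum:.3f}, E[T1]+E[T2] = {mean1 + mean2:.3f}")
```

Unlike Monte Carlo sampling, one convolution yields the complete output PDF directly, with no sampling noise.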


17 

Stochastic uncertainties and sensitivities of a regional-scale transport model of nitrate in groundwater
Groundwater quality management has come to rely increasingly on models in recent years. These models are used to predict the risk of groundwater contamination for various land uses. This paper presents an assessment of uncertainties and sensitivities to input parameters for a regional model. The model had been set up to improve and facilitate the decision-making process between stakeholders in a groundwater quality conflict. The stochastic uncertainty and sensitivity analysis comprised a Monte Carlo simulation technique in combination with a Latin hypercube sampling procedure. The uncertainty of the calculated concentrations of nitrate leached into groundwater was assessed for the various combinations of land use, soil type, and depth of the groundwater table in a vulnerable, sandy region in The Netherlands. The uncertainties in the shallow groundwater were used to assess the uncertainty of the nitrate concentration in the abstracted groundwater. The confidence intervals of the calculated nitrate concentrations in shallow groundwater for agricultural land use functions did not overlap with those of non-agricultural land use such as nature, indicating significantly different nitrate leaching in these areas. The model results were sensitive to almost all input parameters analyzed. However, the NSS is considered robust, because no shifts in uncertainty between factors occurred under the systematic changes in fertilizer and manure inputs of the scenarios. In view of these results, there is no need to collect more data to allow science-based decision-making in this planning process. © 2008 Elsevier B.V. All rights reserved.
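The combination of Latin hypercube sampling with a Monte Carlo uncertainty and sensitivity analysis can be sketched as follows; the two input parameters, their ranges and the toy leaching model are hypothetical illustrations, not the model or data of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n_samples, n_dims, rng):
    """Stratified sampling: one draw per equal-probability interval of each
    dimension, with the intervals paired at random across dimensions."""
    perms = np.stack([rng.permutation(n_samples) for _ in range(n_dims)], axis=1)
    return (perms + rng.random((n_samples, n_dims))) / n_samples

n = 200
u = latin_hypercube(n, 2, rng)

# Map the uniform strata to hypothetical input ranges: nitrogen surplus
# (kg N/ha/yr) and groundwater recharge (mm/yr).
n_surplus = 50.0 + 150.0 * u[:, 0]
recharge = 150.0 + 200.0 * u[:, 1]

# Toy leaching model (illustrative only): concentration ~ load diluted by recharge
nitrate = 40.0 * n_surplus / recharge   # mg/l

lo, hi = np.percentile(nitrate, [5, 95])
# crude sensitivity measure: correlation of each input with the output
sens = [float(np.corrcoef(v, nitrate)[0, 1]) for v in (n_surplus, recharge)]
print(f"90% interval: {lo:.1f}-{hi:.1f} mg/l; sensitivities: "
      f"surplus {sens[0]:+.2f}, recharge {sens[1]:+.2f}")
```

Latin hypercube sampling guarantees exactly one sample per equal-probability stratum of each input, so far fewer model runs are needed than with plain random sampling to cover the input space.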


18 

A universal throw model and its applications
A deterministic model has been developed that describes the throw of debris or fragments from a source with an arbitrary geometry and for arbitrary initial conditions. The initial conditions are defined by the distributions of mass, launch velocity and launch direction. The item density in an exposed area, i.e. the number of impacting debris or fragments per unit of area, has been expressed analytically in terms of these initial conditions. While existing models make use of the Monte Carlo technique, the present model uses the source function theorem, an underlying mathematical relation between the debris density and the initial distributions. This gives fundamental insight into the phenomenon of throw, and dramatically reduces the required number of trajectory calculations. The model has been formulated for four basic source geometries: a point source, a vertical cylinder, a horizontal cylinder, and a vertical plane. In combination with trajectory calculations, the item density can be quantified. As an illustration of the model, analytical results are presented and compared for the vertical plane and the vertical cylinder geometry under simplified assumptions. If uncertainties exist in the initial conditions, the model can be used to investigate these initial conditions based on experimental data. This has been illustrated on the basis of a trial with 5 tons of ammunition stacked in an ISO container. In this case the model has been successfully applied to determine the debris launch angle and velocity distributions by means of backward calculations. If, on the other hand, sufficient information on the initial conditions is available, the model can be used as an effect model in risk assessment methods, or for setting requirements on protective measures. The model can be used to predict safety distances based on any desired criterion. © 2007 Elsevier Ltd. All rights reserved.


19 

Nested-scale discharge and groundwater level monitoring to improve predictions of flow route discharges and nitrate loads
Identifying effective measures to reduce nutrient loads of headwaters in lowland catchments requires a thorough understanding of flow routes of water and nutrients. In this paper we assess the value of nested-scale discharge and groundwater level measurements for predictions of catchment-scale discharge and nitrate loads. In order to relate field-site measurements to the catchment scale, an upscaling approach is introduced that assumes that scale differences in flow route fluxes originate from differences in the relationship between groundwater storage and the spatial structure of the groundwater table. This relationship is characterized by the Groundwater Depth Distribution (GDD) curve, which relates spatial variation in groundwater depths to the average groundwater depth. The GDD curve was measured for a single field site (0.009 km<sup>2</sup>) and simple process descriptions were applied to relate the groundwater levels to flow route discharges. This parsimonious model could accurately describe observed storage, tube drain discharge, overland flow and groundwater flow simultaneously, with Nash-Sutcliffe coefficients exceeding 0.8. A probabilistic Monte Carlo approach was applied to upscale field-site measurements to catchment scales by inferring scale-specific GDD curves from hydrographs of two nested catchments (0.4 and 6.5 km<sup>2</sup>). The estimated contribution of tube drain effluent (a dominant source of nitrates) decreased with increasing scale, from 76-79% at the field site to 34-61% and 25-50% for the two catchment scales. These results were validated by demonstrating that a model conditioned on nested-scale measurements yields better nitrate load simulations and better predictions of extreme discharges during validation periods than a model conditioned on catchment discharge only. © 2010 Author(s).


20 

Development of a Matlab/Simulink tool to facilitate system analysis and simulation via the adjoint and covariance methods
The COVariance and ADjoint Analysis Tool (COVAD) is a specially designed software tool, written for the Matlab/Simulink environment, which allows the user to carry out system analysis and simulation using the adjoint, covariance or Monte Carlo methods. This paper describes phase one of the COVAD evolution, which includes a user-friendly and flexible Graphical User Interface (GUI), a missile homing loop template, an adjoint construction module, a Monte Carlo simulation module and various analysis and plotting options. As an illustration, the application of the software to the preliminary analysis of a generic guided missile homing loop problem is included. The covariance analysis module is still under construction and will not be covered here. It is scheduled to appear in phase two of the COVAD development.

