TU Delft Repository search results (max. 1000)
TU Delft Library

Fields: uuid; repository link; title; author; contributor; publication year; abstract; subject topic; language; publication type; publisher; isbn; issn; patent; patent status; bibliographic note; access restriction; embargo date; faculty;
department; research group; programme; project; coordinates

----
Title: Convex Modeling of Pumps in Order to Optimize Their Energy Use
Link: http://resolver.tudelft.nl/uuid:6dd884051ecc4ec8b2af58d9a82f349b
Authors: Horváth, K. (Eindhoven University of Technology); van Esch, B. (Eindhoven University of Technology); Vreeken, D. (Deltares); Pothof, I.W.M. (TU Delft Support Process and Energy; Deltares); Baayen, J. (KISTERS Nederland B.V.)
Abstract: This study presents convex modeling of drainage pumps so that real-time control systems can be implemented to minimize their energy use. A convex model is built based on pump curves and then used in mixed-integer optimization to allow pumps to be turned on or off. It is implemented as an extension to the open-source software package RTC-Tools. The formulation is such that the continuous relaxations of the mixed-integer problem are convex; hence branch-and-bound techniques may be used to find a global optimum. The formulation can be used for variable-speed and constant-speed pumps. There are several possible applications, such as optimization of polder systems, pumped-storage systems, or certain water distribution networks. Finally, an example of the drainage pump is presented to compare the method to current methods and show that energy can be saved by using the proposed method.
Subjects: channel; control; convex; drainage; optimization; pump
Language: en
Type: journal article

----
Title: Integration of Genetic Algorithm and Monte Carlo Simulation for System Design and Cost Allocation Optimization in Complex Network
Link: http://resolver.tudelft.nl/uuid:cbde185b76124915b13f47adb099b0b2
Authors: Baladeh, Aliakbar Eslami (MAPNA Group, Tehran); Khakzad Rostami, N. (TU Delft Safety and Security Science)
Abstract: Complex networks play a vital role in reliability analysis of real-world applications, demanding precise and accurate analysis methods for optimal allocations of cost and reliability. Since the configuration of a system may change with every feasible solution of the cost allocation optimization equation, finding the best arrangement of the system can become very challenging. This paper presents a novel methodology that combines Genetic Algorithm (GA) and Monte Carlo (MC) simulation approaches to simultaneously optimize cost allocation and system configuration in complex networks. GA is used to generate configuration-cost pairs, while MC is used to evaluate the reliability of the system for each pair. The application of the developed methodology is demonstrated for power grids as an example of critical complex networks. The results show that the proposed methodology can be readily used in practice.
Subjects: complex networks; cost allocation; genetic algorithm; Monte Carlo simulation; optimization; reliability
Type: conference paper
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN: 9781728102382

----
Title: Optimal combined proton-photon therapy schemes based on the standard BED model
Link: http://resolver.tudelft.nl/uuid:975b11c53b964e028129ec2171c0114b
Authors: ten Eikelder, S.C.M. (Tilburg University); den Hertog, D. (Tilburg University); Bortfeld, Thomas (Massachusetts General Hospital); Perko, Z. (TU Delft RST/Reactor Physics and Nuclear Materials; TU Delft RST/Fundamental Aspects of Materials and Energy; Physics Research Group; Massachusetts General Hospital)
Abstract: This paper investigates the potential of combined proton-photon therapy schemes in radiation oncology, with a special emphasis on fractionation. Several combined-modality models, with and without fractionation, are discussed, and conditions under which combined-modality treatments are of added value are demonstrated analytically and numerically. The combined-modality optimal fractionation problem with multiple normal tissues is formulated based on the biologically effective dose (BED) model and tested on real patient data.
Results indicate that for several patients a combined-modality treatment gives better results in terms of biological dose (up to 14.8% improvement) than single-modality proton treatments. For several other patients, a combined-modality treatment is found that offers an alternative to the optimal single-modality proton treatment, being only marginally worse but using significantly fewer proton fractions, putting less pressure on the limited availability of proton slots. Overall, these results indicate that combined-modality treatments can be a viable option, which is expected to become more important as proton therapy centers are spreading but the proton therapy price tag remains high.
Subjects: biologically effective dose (BED); intensity-modulated radiation therapy (IMRT); multi-modality treatment; optimization; proton therapy
Note: Accepted Author Manuscript

----
Title: Finding the relevance of staff-based vehicle relocations in one-way carsharing systems through the use of a simulation-based optimization tool
Link: http://resolver.tudelft.nl/uuid:57eb0947760b43a99826f96312bae7d0
Authors: Santos, Gonçalo Gonçalves Duarte (Lisbon Technical University; University of Coimbra); Homem de Almeida Correia, G. (TU Delft Transport and Planning; University of Coimbra)
Abstract: This paper proposes a real-time decision support tool based on the rolling-horizon principle that manages staff activities (relocations and maintenance) of a one-way carsharing system and considers carpooling the staff in the relocated carsharing vehicles for extra cost reduction. The decision support tool is composed of three elements: a forecasting model, an assignment model and a filter. Two assignment models are proposed and tested: rule-based and optimization. The rule-based model uses simple rules to respond to system status changes, and the optimization model is a mixed-integer programming (MIP) model prepared to work in real time. A simulator was designed to test the decision support tool and an application is made to the city of Lisbon, Portugal, showing that the benefits of staff relocations can be rather low. It was verified that the number of relocations that can physically be performed by each staff member in the case study provides only a small improvement in the revenues, which is unlikely to overcome the costs associated with hiring and staff activity.
Subjects: carsharing; maintenance; optimization; relocations; simulation

----
Title: Replacement optimization of ageing infrastructure under differential inflation
Link: http://resolver.tudelft.nl/uuid:bdc7d9df33d4449ab3310c60c3b2cb18
Authors: van den Boomen, M. (TU Delft Integral Design and Management); Leontaris, G. (TU Delft Integral Design and Management); Wolfert, A.R.M. (TU Delft Integral Design and Management)
Abstract: Ageing public infrastructure assets necessitate economic replacement analysis. A common replacement problem concerns an existing asset challenged by a replacement option. Classic techniques from the domain of engineering economics are the mainstream approach to replacement optimization in practice. However, the validity of these classic techniques rests on the assumption that the life-cycle cash flows of a replacement option are repetitive. Differential inflation undermines this assumption, and therefore more advanced replacement optimization techniques are required under these circumstances. Such techniques are found in the domain of operations research and require linear or dynamic programming (LP/DP). Since LP/DP techniques are complex and time-consuming, the current study develops an alternative model for replacement optimization under differential inflation. This approach builds on the classic capitalized-equivalent replacement technique.
The alternative model is validated by comparison with a DP model, and is shown to be equally accurate for a case with characteristics that apply to many infrastructure assets.
Subjects: replacement decisions; asset management; differential inflation; optimization; public infrastructure assets

----
Title: A Graph Theoretic Approach to Optimal Firefighting in Oil Terminals
Link: http://resolver.tudelft.nl/uuid:1b74778703194120be100640f344ec5e
Authors: Khakzad Rostami, N. (TU Delft Safety and Security Science)
Abstract: Effective firefighting of major fires in fuel storage plants can effectively prevent or delay fire spread (domino effect) and eventually extinguish the fire. If the number of firefighting crews and the amount of equipment are sufficient, firefighting will include the suppression of all the burning units and the cooling of all the exposed units. However, when available resources are not adequate, fire brigades need to allocate their resources optimally by answering the question of which burning units to suppress first and which exposed units to cool first, until more resources become available from nearby industrial plants or residential communities. The present study attempts to answer this question by developing a graph theoretic methodology. It is demonstrated that suppression and cooling of the units with the highest out-closeness index results in an optimum firefighting strategy. A comparison between the outcomes of the graph theoretic approach and an approach based on influence diagrams shows the efficiency of the graph approach.
Subjects: oil storage plants; domino effect; firefighting; optimization; graph theory; influence diagram

----
Title: LQG and Gaussian process techniques: For fixed-structure wind turbine control
Link: http://resolver.tudelft.nl/uuid:f613079c90a147dcafcbf6833646ca5a
Author: Bijl, H.J. (TU Delft Numerics for Control & Identification)
Contributors: Verhaegen, M.H.G. (promotor); van Wingerden, J.W. (promotor); Delft University of Technology (degree granting institution)
Abstract: Wind turbines are growing bigger to become more cost-efficient. This increases the severity of the vibrations present in the turbine blades, both due to predictable effects like wind shear and tower shadow, and due to less predictable effects like turbulence and flutter. If wind turbines are to become bigger and more cost-efficient, these vibrations need to be reduced. This can be done by installing trailing-edge flaps on the blades. Because of the variety of circumstances in which the turbine should operate, this results in large uncertainties. As such, we need methods that can take stochastic effects into account. Preferably, we develop an algorithm that can learn from online data how the flaps affect the wind turbine and how to control them optimally. A simple prior analysis can be done using a linearized version of the system. In this case it is important to know not only the expected cost (damage) that will be incurred by the wind turbine in various situations, but also the spread of this cost. This can, for instance, be done by looking at the variance of the cost function. Various expressions are available to calculate this variance analytically. Alternatively, we can prescribe a degree of stability for the system. Due to the limitations of linear approximations of systems, it is more effective to apply nonlinear regression methods. A promising one is Gaussian process (GP) regression. Given a training set (X, y), it can predict function values f(x) for test points x. It has its basis in Bayesian probability theory, which allows it not only to make this prediction, but also to give information (the variance) about its accuracy. The usual way in which GP regression is applied has a few important limitations. Most importantly, it is computationally intensive, especially when applied to constantly growing data sets. In addition, it has difficulties dealing with noise present in the training input points x. There are methods to solve either of these issues, but these tricks generally do not work well together, or their combination requires many computational resources. However, by making the right approximations, like Taylor expansions and at times even linearizations, Gaussian process regression can be applied efficiently, in an online way, to data sets with noisy input points. This enables GP regression to be used for system identification problems like online nonlinear black-box modeling. Another limitation is that it can be difficult to find the optimum of a Gaussian process. The reason is that the optimum of a Gaussian process is not a fixed point but a random variable. The distribution of this optimum cannot be calculated analytically, but we can use particle methods to approximate it. We can subsequently use this principle to efficiently explore an unknown nonlinear function, trying to locate its optimum. To do so, we sample a point x from the optimum distribution, measure the function value f(x) at this point, update the Gaussian process approximation of the function, update the optimum distribution, and repeat this process until the distribution has converged. Finding the optimum of a function in this way has shown competitive performance at keeping the cumulative regret low, compared to similar algorithms. In addition, it allows wind turbines to tune the gains of a fixed-structure controller so as to optimize a nonlinear cost function like the damage equivalent load. All these improvements are a step forward in the application of Gaussian process regression to wind turbine applications.
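The optimum-search loop described in this abstract (sample a point from the distribution of the GP optimum, evaluate the function there, update the GP, and repeat) can be sketched as a plain Thompson-sampling loop. This is an illustrative sketch only, not the thesis code: the squared-exponential kernel, its hyperparameters, and the toy test function `f` are all assumptions chosen for the demo.

```python
# Illustrative sketch (not the thesis code): Thompson-sampling-style search
# for the maximum of an unknown 1-D function using GP regression.
# Kernel, hyperparameters and the test function f are assumed for the demo.
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ell=0.3, s2=1.0):
    """Squared-exponential covariance between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return s2 * np.exp(-0.5 * (d / ell) ** 2)

def f(x):
    """Unknown function to be explored; its maximum lies near x = 0.6."""
    return -(x - 0.6) ** 2 + 0.05 * np.sin(20 * x)

grid = np.linspace(0.0, 1.0, 200)    # candidate test points
X = np.array([0.1, 0.9])             # initial training inputs
y = f(X)

for _ in range(15):
    # GP posterior (mean and covariance) on the candidate grid.
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    Ks = rbf(grid, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    cov = rbf(grid, grid) - v.T @ v
    # Draw one posterior realisation and evaluate f at its argmax; this
    # approximates sampling a point from the distribution of the optimum.
    draw = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(grid)))
    x_new = grid[np.argmax(draw)]
    X = np.append(X, x_new)
    y = np.append(y, f(x_new))

best = X[np.argmax(y)]
print(f"estimated maximiser: {best:.3f}")
```

The thesis additionally uses particle methods to represent the optimum distribution explicitly; here the posterior draw over a fixed grid plays that role.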
But, as is always the case with research, there are still many things left to improve further.
Subjects: Gaussian processes; regression; machine learning; optimization; system identification; automatic control; wind energy; smart rotor
Type: doctoral thesis
ISBN: 9789462995017

----
Title: A tensor approach to linear parameter varying system identification
Link: http://resolver.tudelft.nl/uuid:44dda417a65847d3998b48c082c9e989
Author: Gunes, Bilal (TU Delft Data-Driven Control)
Contributors: van Wingerden, J.W. (promotor); Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)
Subjects: tensor; LPV; identification; data-driven; wind; turbine; statistics; subspace; optimization; tensor decompositions; multilinear algebra; SVD; MLSVD; HOSVD; tensor trains; tensor networks; polyadic; engineering; wind energy

----
Title: Optimization of water allocation in the Shatt al-Arab River under different salinity regimes and tide impact
Link: http://resolver.tudelft.nl/uuid:6be0d3276da6419fa6a519fd44b1245d
Authors: Abdullah, A.D.A. (University of Missan); Castro Gama, M.E. (UNESCO-IHE); Popescu, Ioana (UNESCO-IHE; Politehnica University of Timisoara); van der Zaag, P. (TU Delft Water Resources; UNESCO-IHE); Karim, Usama F.A. (University of Twente); Al Suhail, Qusay (University of Basrah)
Abstract: Wastewater effluents from irrigation and the domestic and industrial sectors have serious impacts in deteriorating water quality in many rivers, particularly in areas under tidal influence. There is a need to develop an approach that considers the impact of human and natural causes of salinization. This study uses a multi-objective optimization simulation model to investigate and describe the interactions of such impacts in the Shatt al-Arab River, Iraq. The developed model is able to reproduce the salinity distribution in the river under varying conditions. The salinity regime in the river varies according to different hydrological conditions and anthropogenic activities. Due to tidal effects, salinity caused by drainage water is seen to intrude further upstream into the river. The applied approach provides a way to obtain optimal solutions in which both river salinity and the deficit in water supply can be minimized. The approach is used for exploring the trade-off between these two objectives.
Subjects: drainage water; optimization; salinity; Shatt al-Arab River; tidal influence; water management
Embargo date: 2019-03-31

----
Title: High-Permittivity Pad Design for Dielectric Shimming in Magnetic Resonance Imaging Using Projection-Based Model Reduction and a Nonlinear Optimization Scheme
Link: http://resolver.tudelft.nl/uuid:f10d191c62584d63ab772ff9fe86c516
Authors: van Gemert, J.H.F. (TU Delft Microwave Sensing, Signals & Systems); Brink, Wyger M. (Leiden University Medical Center); Webb, A. (Leiden University Medical Center); Remis, R.F. (TU Delft Circuits and Systems)
Abstract: Inhomogeneities in the transmit radio-frequency magnetic field (B1+) reduce the quality of magnetic resonance (MR) images. This quality can be improved by using high-permittivity pads that tailor the B1+ field. The design of an optimal pad is application-specific and not straightforward, and would therefore benefit from a systematic optimization approach. In this paper, we propose such a method to efficiently design dielectric pads. To this end, a projection-based model order reduction technique is used that significantly decreases the dimension of the design problem. Subsequently, the resulting reduced-order model is incorporated in an optimization method in which a desired field in a region of interest can be set. The method is validated by designing a pad for imaging the cerebellum at 7 T. The optimal pad that is found is used in an MR measurement to demonstrate its effectiveness in improving the image quality.
Subjects: dielectric shimming; fields; high-permittivity pads; magnetic resonance imaging; optimization; reduced-order modeling
Note: Accepted author manuscript
Department: Microwave Sensing, Signals & Systems

----
Title: Kinetic modeling and optimization of parameters for biomass pyrolysis: A comparison of different lignocellulosic biomass
Link: http://resolver.tudelft.nl/uuid:367f977dba1444be9c4e93eba5af508f
Authors: Mahmood, Hamayoun (University of Engineering & Technology Lahore); Ramzan, Naveed (University of Engineering & Technology Lahore); Shakeel, A. (TU Delft Rivers, Ports, Waterways and Dredging Engineering; University of Engineering & Technology Lahore); Moniruzzaman, Muhammad (Universiti Teknologi Petronas); Iqbal, Tanveer (University of Engineering & Technology Lahore); Kazmi, Mohsin Ali (University of Engineering & Technology Lahore); Sulaiman, Muhammad (University of Engineering & Technology Lahore)
Abstract: A primitive element in the development of sustainable pyrolysis processes is the study of the thermal degradation kinetics of lignocellulosic waste materials for optimal energy conversion. The study presented here was conducted to predict and compare the optimal kinetic parameters for pyrolysis of various lignocellulosic biomass, such as wood sawdust, bagasse and rice husk, under both isothermal and non-isothermal conditions. The pyrolysis was simulated over the temperature range of 500-2400 K for the isothermal process and over the heating-rate range of 25-165 K/s under non-isothermal conditions, to assess the maximum pyrolysis rate of virgin biomass in both cases. Results revealed that increasing the temperature enhanced the pyrolysis rate. However, above a certain temperature the pyrolysis rate diminished, which could be due to the destruction of the active sites of the char. Conversely, a decrease in the optimum pyrolysis rate was noted with increasing reaction order of the virgin biomass. Although each lignocellulosic material attained its maximum pyrolysis rate at the optimum conditions of 1071 K and 31 K/s for isothermal and non-isothermal conditions, respectively, under these conditions only wood sawdust exhibited complete thermal utilization, achieving final concentrations of 0.000154 and 0.001238 under non-isothermal and isothermal conditions, respectively.
Subjects: kinetic modeling; lignocellulosic residue; optimization; pyrolysis

----
Title: Numerical thermal analysis and optimization of multi-chip LED module using response surface methodology and genetic algorithm
Link: http://resolver.tudelft.nl/uuid:1397c49e4df94ff284d76a8511757062
Authors: Tang, H. (TU Delft Electronic Components, Technology and Materials); Ye, Huai-Yu (Chongqing University); Chen, Xian Ping (Chongqing University); Qian, Cheng (Chinese Academy of Sciences; Changzhou Institute of Technology Research for Solid State Lighting); Fan, Xuejun (Lamar University); Zhang, G.Q. (TU Delft Electronic Components, Technology and Materials)
Abstract: In this paper, the heat transfer performance of the multi-chip (MC) LED module is investigated numerically by using a general analytical solution. The configuration of the module is optimized with a genetic algorithm (GA) combined with a response surface methodology. The space between chips, the thickness of the metal-core printed circuit board (MCPCB), and the thickness of the base plate are considered as the three optimization parameters, while the total thermal resistance (R_tot) is considered as a single objective function. After optimizing the objective with the GA, the optimal design parameters of three types of MC LED modules are determined. The results show that the thickness of the MCPCB has a stronger influence on the total thermal resistance than the other parameters. In addition, a sensitivity analysis is performed based on the optimum data.
It reveals that R_tot increases with the thickness of the MCPCB and decreases as the space between chips increases. The effect of the thickness of the base plate is far smaller than that of the thickness of the MCPCB. After optimization, the three types of MC LED modules obtain lower T_j and R_tot. Moreover, the optimized modules can emit high luminous energy under high-power input conditions. Therefore, the optimization results are of great significance for the selection of configuration parameters to improve the performance of the MC LED module.
Subjects: genetic algorithm; multi-chip LED module; optimization; response surface methodology; thermal resistance; OA-Fund TU Delft

----
Title: Modeling, design and optimization of flapping wings for efficient hovering flight
Link: http://resolver.tudelft.nl/uuid:e6fc3865531f4ea9aeffe2ef923ae36f
Author: Wang, Q. (TU Delft Structural Optimization and Mechanics)
Contributors: van Keulen, A. (promotor); Goosen, J.F.L. (copromotor); Delft University of Technology (degree granting institution)
Abstract: Inspired by insect flight, flapping wing micro air vehicles (FWMAVs) keep attracting attention from the scientific community. One of the design objectives is to reproduce the high power efficiency of insect flight. However, there is no clear answer yet to the question of how to design flapping wings and their kinematics for power-efficient hovering flight. In this thesis, we aim to answer this research question from the perspectives of wing modeling, design and optimization.

Quasi-steady aerodynamic models play an important role in evaluating aerodynamic performance and in designing and optimizing flapping wings. In Chapter 2, we present a predictive quasi-steady model that includes four aerodynamic loading terms. The loads result from the wing's translation, its rotation, their coupling, and the added-mass effect. The necessity of including all four of these terms in a quasi-steady model to predict both the aerodynamic force and torque is demonstrated. Validations indicate a good accuracy in predicting the center of pressure, the aerodynamic loads and the passive pitching motion for various Reynolds numbers. Moreover, compared to existing quasi-steady models, the proposed model does not rely on any empirical parameters and is thus more predictive, which enables application to the shape and kinematics optimization of flapping wings.

For flapping wings with passive pitching motion, a shift in the pitching axis location alters the aerodynamic loads, which in turn changes the passive pitching motion and the flight efficiency. Therefore, in Chapter 3, we investigate the optimal pitching axis location for flapping wings to maximize the power efficiency during hovering flight. Optimization results show that the optimal pitching axis is located between the leading edge and the mid-chord line, which closely resembles insect wings. An optimal pitching axis can save up to 33% of power during hovering flight when compared to optimized traditional wings, as used by most flapping wing micro air vehicles (FWMAVs). Traditional wings typically use the straight leading edge as the pitching axis. In addition, the optimized pitching axis enables the drive system to recycle more energy during the deceleration phases as compared to its counterparts. This observation underlines the particular importance of the wing pitching axis location for energy-efficient FWMAVs that use kinetic-energy-recovery drive systems.

The presence of wing twist can alter the aerodynamic performance and power efficiency of flapping wings by changing the angle of attack. In order to study the optimal twist of flapping wings for hovering flight, we propose a computationally efficient fluid-structure interaction (FSI) model in Chapter 4. The model uses an analytical twist model and the quasi-steady aerodynamic model introduced in Chapter 2 for the structural and aerodynamic analysis, respectively. Based on the FSI model, we optimize the twist of a rectangular wing by minimizing the power consumption during hovering flight. The power efficiency of the optimized twistable wings is compared with that of correspondingly optimized rigid wings. It is shown that the optimized twistable wings cannot dramatically outperform the optimized rigid wings in terms of power efficiency, unless the pitching amplitude at the wing root is limited. When this amplitude decreases, the optimized twistable wings can always maintain high power efficiency by introducing a certain twist, while the optimized rigid wings need more power for hovering.

Considering the high impact of the root stiffness on flapping kinematics and power consumption, we present in Chapter 5 an active hinge design which uses electrostatic force to change the hinge stiffness. The hinge is realized by stacking three conducting spring-steel layers which are separated by dielectric Mylar films. The theoretical model shows that the stacked layers can switch from slipping with respect to each other to sticking together when the resultant electrostatic force between the layers, which can be controlled by the applied voltage, is above a threshold value. The switch from slipping to sticking results in a dramatic increase of the hinge stiffness (about 9x). Therefore, a short duration of sticking can still lead to a considerable change in the passive pitching motion. Experimental results successfully show the decrease of the pitching amplitude with increasing applied voltage. Flight control based on the electrostatic force can be very power-efficient, since ideally there is no power consumption due to the control operations.

In Chapter 6, we revisit and discuss the most important aspects related to the modeling, design and optimization of flapping wings for efficient hovering flight. In Chapter 7, the overall conclusions are drawn and recommendations for further study are provided.
Subjects: flapping wing; passive pitching; pitching axis; aerodynamic model; power efficiency; optimization
Type: doctoral thesis
ISBN: 9789492516572

----
Title: Local Alternative for Energy Supply: Performance Assessment of Integrated Community Energy Systems
Link: http://resolver.tudelft.nl/uuid:7f63baf498e44b799307577299d843e6
Authors: Koirala, B.P. (TU Delft Energy & Industry); Chaves Avila, J.P. (Comillas Pontifical University); Gomez, T. (Comillas Pontifical University); Hakvoort, R.A. (TU Delft Energy & Industry); Herder, P.M. (TU Delft Engineering, Systems and Services)
Abstract: Integrated community energy systems (ICESs) are emerging as a modern development to reorganize local energy systems, allowing simultaneous integration of distributed energy resources (DERs) and engagement of local communities. Although local energy initiatives such as ICESs are rapidly emerging due to community objectives such as cost and emission reductions as well as resiliency, there is still a lack of assessment and evaluation of the value that these systems can provide both to the local communities and to the whole energy system. In this paper, we present a model-based framework to assess the value of ICESs for local communities. The distributed energy resources-consumer adoption model (DER-CAM) based ICES model is used to assess the value of an ICES in the Netherlands. For the considered community size and local conditions, grid-connected ICESs are already beneficial compared to the alternative of being supplied solely from the grid, both in terms of total energy costs and CO2 emissions, whereas grid-defected systems, although performing very well in terms of CO2 emission reduction, are still rather expensive.
Subjects: distributed energy resources (DERs); energy communities; smart grids; multi-carrier energy systems; optimization; OA-Fund TU Delft
Departments: Engineering, Systems and Services; Energy & Industry

----
Title: Computationally efficient analysis & design of optimally compact gear pairs and assessment of gear compliance
Link: http://resolver.tudelft.nl/uuid:9b46e18b1fa34517a666660e4a50f18e
Author: Amani, A. (TU Delft Emerging Materials)
Contributors: Spitas, C. (promotor); Spitas, Vasilios (promotor)
Subjects: gear design; spur gear; design parameters; pitch compatibility; interference; corner contact; pointed tip; undercutting; non-standard; non-dimensional; design guidelines; highest point of single tooth contact (HPSTC); finite element analysis; stress analysis; bending strength; compact gears; optimization; centre distance; deviation; tolerance zone; computational modelling; compact gear drive; compliance; bending compliance; foundational compliance; Hertzian compliance; non-dimensional modelling; Saint-Venant's Principle; cubic Hermitian interpolation
ISBN: 9789461867391
Embargo date: 2018-11-15

----
Title: Strategies, Methods and Tools for Solving Long-term Transmission Expansion Planning in Large-scale Power Systems
Link: http://resolver.tudelft.nl/uuid:e8dbb294dd574c10b733b4aded62607c
Author: Fitiwi, D.Z. (TU Delft Energy & Industry)
Contributors: Herder, P.M. (promotor); Rivier Abbad, M. (promotor)
Subjects: transmission expansion planning; uncertainty and variability; optimization; stochastic programming; moments technique; clustering
ISBN: 9788460899556

----
Title: Gradient-based optimization of flow through porous media: Version 3
Link: http://resolver.tudelft.nl/uuid:0010fdac32ec459bbb9b3e6327a85496
Author: Jansen, J.D. (TU Delft Geoscience and Engineering)
Abstract: These notes form part of the course material for the MSc course AES1490 "Advanced Reservoir Simulation", which has been taught at TU Delft over the past decade as part of the track "Petroleum Engineering and Geosciences" in the two-year MSc program "Applied Earth Sciences".

The notes cover the gradient-based optimization of subsurface flow. In particular, they treat optimization methods in which the gradient information is obtained with the aid of the adjoint method, which is, in essence, an efficient numerical implementation of implicit differentiation in a multivariate setting.

Chapter 1 reviews the basic concepts of multivariate optimization and demonstrates the equivalence of the Lagrange multiplier method for constrained optimization and the use of implicit differentiation to obtain gradients in the presence of constraints.

Chapter 2 introduces the use of Lagrange multipliers and implicit differentiation for the optimization of large-scale numerical systems with the adjoint method. In particular, it addresses the optimization of oil recovery from subsurface reservoirs represented as reservoir simulation models, i.e. space- and time-discretized numerical representations of the nonlinear partial differential equations that govern multiphase flow through porous media.
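The adjoint recipe that these notes cover (obtain the gradient of an objective subject to a system constraint via one extra linear solve, rather than one solve per parameter) can be illustrated with a toy example. This sketch is not taken from the notes: the tridiagonal "reservoir" system, the coefficient vector q and all function names are illustrative assumptions.

```python
# Minimal sketch (not from the lecture notes) of adjoint-based gradients for
# J(u) = c^T u subject to A(q) u = b. The tridiagonal A(q), built from
# interface coefficients q, stands in for a reservoir simulator.
import numpy as np

n = 5
b = np.ones(n)                      # source term
c = np.zeros(n); c[-1] = 1.0        # objective picks out the last cell

def assemble(q):
    """Tridiagonal system matrix built from n+1 interface coefficients q."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = q[i] + q[i + 1]
        if i > 0:
            A[i, i - 1] = -q[i]
        if i < n - 1:
            A[i, i + 1] = -q[i + 1]
    return A

def objective(q):
    return c @ np.linalg.solve(assemble(q), b)

def adjoint_gradient(q):
    A = assemble(q)
    u = np.linalg.solve(A, b)       # one forward solve
    lam = np.linalg.solve(A.T, c)   # one adjoint solve: A^T lam = c
    grad = np.zeros_like(q)
    eps = 1e-7                      # dA/dq_k by differencing (A is linear in q)
    for k in range(len(q)):
        dq = np.zeros_like(q); dq[k] = eps
        dA = (assemble(q + dq) - A) / eps
        grad[k] = -lam @ (dA @ u)   # dJ/dq_k = -lam^T (dA/dq_k) u
    return grad

q0 = np.linspace(1.0, 2.0, n + 1)
g_adj = adjoint_gradient(q0)

# Cross-check against plain finite differences of J(q).
eps = 1e-6
g_fd = np.array([(objective(q0 + eps * np.eye(n + 1)[k]) - objective(q0)) / eps
                 for k in range(n + 1)])
print(np.allclose(g_adj, g_fd, atol=1e-4))   # → True
```

The point of the construction is the cost profile: the adjoint gradient needs two linear solves regardless of the number of parameters, whereas finite differences need one forward solve per parameter.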
It also covers the use of robust adjointbased optimization to cope with the inherent uncertainty in subsurface flow models and addresses some numerical implementation aspects.<br/>Chapter 3 gives a brief overview of various further topics related to gradientbased optimization of subsurface flow, such as closedloop reservoir management and hierarchical optimization of shortterm and long term reservoir performance.<br`optimization; adjoint; gradient; reservoir; subsurface; porous medium; Lagrange multiplier; flowreport)uuid:01c571340988424f92ba752fd993680bDhttp://resolver.tudelft.nl/uuid:01c571340988424f92ba752fd993680bBImproved multimicrophone noise reduction preserving binaural cuesKoutrouvelis, A. (TU Delft Circuits and Systems); Hendriks, R.C. (TU Delft Circuits and Systems); Jensen, J (Aalbor< g University); Heusdens, R. (TU Delft Circuits and Systems)/Dong, Min (editor); Zheng, Thomas Fang (editor)#We propose a new multimicrophone noise reduction technique for binaural cue preservation of the desired source and the interferers. This method is based on the linearly constrained minimum variance (LCMV) framework, where the constraints are used for the binaural cue preservation of the desired source and of multiple interferers. In this framework there is a tradeoff between noise reduction and binaural cue preservation. The more constraints the LCMV uses for preserving binaural cues, the less degrees of freedom can be used for noise suppression. The recently presented binaural LCMV (BLCMV) method and the optimal BLCMV (OBLCMV) method require two constraints per interferer and introduce an additional interference rejection parameter. This unnecessarily reduces the degrees of freedom, available for noise reduction, and negatively influences the tradeoff between noise reduction and binaural cue preservation. 
With the proposed method, binaural cue preservation is obtained using just a single constraint per interferer, without the need for an interference rejection parameter. The proposed method can simultaneously achieve noise reduction and perfect binaural cue preservation of more than twice as many interferers as the BLCMV, while the OBLCMV can preserve the binaural cues of only one interferer.noise reduction; LCMV; binaural cue preservation; auditory system; microphones; hearing aids; Interference; Nickel; optimizationIEEE9781479999880Circuits and Systems)uuid:4a3d986dec494b1e8ac2393885d9d026Dhttp://resolver.tudelft.nl/uuid:4a3d986dec494b1e8ac2393885d9d026Transmission Expansion Planning of Transnational Offshore Grids: A Techno-Economic and Legal Approach: Case Study of the North Sea Offshore GridShariat Torbaghan, S.<Van der Meijden, M.A.M.M. (promotor); Gibescu, M. (promotor)2!The new energy policy of the European Union (EU), with the core objectives of competitiveness, reliability and sustainability, has driven Europe into a transition towards a low-carbon and sustainable electricity supply system. Under the new policy, the European energy systems are pursuing two major objectives. The first is to shift the focus from national to regional or (perhaps) a European level, with the ultimate goal of introducing regional markets that facilitate cross-border power trades. The second is to incorporate large renewable energy sources into the power systems to best exploit the energy resources. In this regard, special attention is oriented towards the development of the offshore grid in the North Sea region, where offshore wind is abundant and has the potential to become a major energy source in the area. This thesis looks into transmission expansion planning in the North Sea region. It presents a market-based approach to solve a long-term transmission expansion planning problem for a meshed VSC-HVDC offshore grid that connects regional markets.
The main goal here is to determine the grid design that enables harnessing the offshore wind energy most efficiently while, at the same time, creating capacity for conducting cross-border power exchange. Development of an offshore grid in the North Sea can encounter various technical, legal and economic barriers. Consequently, advanced planning frameworks are required that enable accounting for these issues. The methodology proposed here provides a framework to investigate the impact of each of these factors on the development of offshore infrastructures. More precisely, the contributions of this thesis can be summarized as follows: Static Transmission Expansion Planning framework (STEP) In Chapter 5, I have proposed a multiple time-period static transmission expansion planning framework that is applicable to VSC-HVDC meshed grids. I have shown that the analytical solution to the problem gives the pricing mechanism that expresses the relationship between the electricity price of different zones and the congestion charges associated with the interconnectors between them. It is an extension of the work of Schweppe et al. that has been proven for and applied to VSC-HVDC grids. The proposed formulation includes investment recovery through congestion revenues as an implicit strict equality constraint. It therefore computes the expansion plan such that the investment capital will be fully paid off through congestion revenues by the end of the chosen lifetime of the infrastructure. The framework determines the topology, transmission capacities and the power flows through the offshore grid, and the resulting distribution of social welfare among the price zones. By combining both flow constraints and investment-recovery constraints and working with historical market data, the framework can deliver useful results that demonstrate how onshore price zones could benefit from an optimal grid design.
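The investment-recovery equality described in this abstract can be illustrated with a toy calculation. All figures below are hypothetical, not from the thesis: congestion revenue on an interconnector is the zonal price spread times the flow, and the STEP-style condition requires that this revenue, accumulated over the chosen lifetime, equals the recoverable capital cost.

```python
# Hypothetical numbers illustrating investment recovery through
# congestion revenues: revenue = (price_B - price_A) * flow, summed
# over representative operating states and over the asset lifetime.

price_zone_a = [30.0, 45.0, 55.0]    # EUR/MWh in the exporting zone
price_zone_b = [50.0, 60.0, 55.0]    # EUR/MWh in the importing zone
flow_ab      = [800.0, 600.0, 0.0]   # MW over the interconnector
hours_per_state = 2920.0             # each state stands for 1/3 of a year
lifetime_years  = 25

annual_congestion_revenue = sum(
    (pb - pa) * f * hours_per_state
    for pa, pb, f in zip(price_zone_a, price_zone_b, flow_ab))

# Capex the project could fully pay off via congestion revenues alone
recoverable_capex = annual_congestion_revenue * lifetime_years
```

Note that in the third state the zonal prices are equal, so the line earns nothing there; only hours with a price spread contribute to recovery.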
Iterative clustering methods for computation feasibility The optimization framework proposed in Chapter 5 was intended to be driven by historical market data in the form of hourly regional cost curves. The dimensionality of the search space and the computational intensity of the proposed optimization algorithm make the problem intractable. It was desirable to identify and work with only a subset of the total set of operating states. I developed an iterative algorithm that combines an unsupervised clustering technique with the proposed optimization tool to cope with the computational burden of the large-scale optimization problem. Automatic space transformation and clustering were performed to select a subset of representative hourly operating states. The number of samples in the subset was adjusted in order to match the congestion-induced revenues to those of the full data set. This ensured that essential information was not lost. The framework thus balances the need for reasonable computation times against the benefits of a model that allows multiple time periods (as defined by zonal prices and wind power production combinations) and obtains realistic results. Several clustering algorithms (including K-means) and feature reduction techniques (such as Principal Component Analysis (PCA)) have been used in investment planning analysis. Their combination has also been explored in the literature. However, this is the first time that an unsupervised PCA/clustering technique has been combined with an optimization tool to refine the clustering results. Static Wind and Transmission Expansion Planning framework (SWTEP) Chapter 6 describes a novel wind and transmission expansion co-optimization framework applicable to VSC-HVDC meshed grids. This is an extension of the static framework presented in Chapter 5 that adds wind to the TEP formulation, while implementing support schemes, which inherently induce a deviation from perfect competition.
This results in a fundamental contradiction between the structure of the competitive market and the nature of support policies. The novelty of the work presented in Chapter 6 is that it has limited the market distortion by excluding the support payments from the market clearing process. To do so, I have proposed a formulation that divides the initial investment of the offshore wind infrastructure into subsidized and unsubsidized parts. Thus, the objective of the optimization problem was to maximize the sum of the incremental social welfare of all regions at all times, minus the aggregated investment cost of offshore transmission infrastructure and the investment cost of building the offshore wind farms that has not been covered through the support payments. The proposed framework enables the impact of implementing two types of feed-in premium support schemes (i.e., generation-based and capacity-based) to be accounted for in the final development of the grid. The goal of this chapter was to investigate the performance of the two feed-in support policies to verify if investment recovery would be fulfilled under a certain support scheme design. In addition, an optimal support level and offshore wind support tariff rate were determined. The analytical solution to the optimization problems confirms the complete recovery of the investment cost of transmission infrastructure. In addition, under the assumption that no offshore wind was curtailed, the revenues collected from market sales of offshore wind farms can pay off the unsubsidized part of the wind farm investment, regardless of the payment basis (generation-based or capacity-based). Dynamic Transmission Expansion Planning framework (DTEP) In Chapter 7, I have proposed a market-based, multiple-stage, multi-time-period dynamic transmission expansion planning framework for a meshed offshore grid to connect upcoming offshore wind farms to multiple onshore markets.
The main contribution of this framework is that it enables accounting for delays in the construction and implementation of offshore infrastructures, including wind farms and transmission systems. Delays can occur mainly due to legal barriers associated with differing permitting criteria in an international context, but also due to market maturity and supply chain issues. The timing of delays in grid, market and wind farm developments is set exogenously in the model. This is an extension of the work presented in Chapter 5, in which the whole offshore grid was assumed to be built in one instant. The final results include the optimal grid topology, transmission capacities, construction timing and the resulting remuneration and distribution of the social welfare increase and financial benefit among the various onshore price zones. The analytical solution to the optimization problem gives the pricing mechanism that is consistent with the AC onshore counterpart. The proposed market mechanism facilitates the integration of a multi-terminal VSC-HVDC offshore grid into the existing AC grid. In addition, the analytical solution confirms the investment recovery through congestion revenues, regardless of the number of investors that are involved. In the case of multiple investors, an independent financial entity is required that collects the transmission revenues from the grid operators and distributes them appropriately amongst the investors. Under this regulatory assumption, the investment recovery of every cable of every interconnector will be completely fulfilled within the desired economic lifetime.}long-term planning; optimization; wind energy; HVDC transmission; electricity markets; support schemes; policy recommendation8Electrical Engineering, Mathematics and Computer ScienceElectrical Sustainable Energy)uuid:0d3a06953eb64da3a341e022df2c8629Dhttp://resolver.tudelft.nl/uuid:0d3a06953eb64da3a341e022df2c8629'Light-sheet optimization for microscopykWilding, D.
(TU Delft Numerics for Control & Identification); Pozzi, P. (TU Delft Numerics for Control & Identification); Soloviev, O.A. (TU Delft Numerics for Control & Identification; Flexible Optical B.V.); Vdovine, Gleb (TU Delft Numerics for Control & Identification; Flexible Optical B.V.); Verhaegen, M.H.G. (TU Delft Numerics for Control & Identification)IBifano, Thomas G. (editor); Kubby, Joel (editor); Gigan, Sylvain (editor)Aberrations, scattering and absorption degrade the performance of light-sheet fluorescence microscopes (LSFM). An adaptive optics system to correct for these artefacts and to optimize the light-sheet illumination is presented. This system allows a higher axial resolution to be recovered over the field-of-view of the detection objective. It is a standard selective plane illumination microscope (SPIM) configuration modified with the addition of a spatial light modulator (SLM) and a third objective for the detection of transmitted light. Optimization protocols use this transmitted light, allowing extension of the depth-of-field and correction of aberrations whilst retaining a thin optical section.JAdaptive optics; imaging; microscopy; light-sheet microscopy; optimizationSPIE9781628419511%Numerics for Control & Identification)uuid:8e6df8cdd5e94df397ac46f4a83a5413Dhttp://resolver.tudelft.nl/uuid:8e6df8cdd5e94df397ac46f4a83a5413?A Risk Analysis for Asset Management Considering Climate ChangeOrcesi, André D. (Université Paris-Est); Chemineau, Hélène (Université Paris-Est); Lin, P.H. (TU Delft Safety and Security Science); van Gelder, P.H.A.J.M. (TU Delft Safety and Security Science); van Erp, H.R.N. (TU Delft Safety and Security Science)This paper presents an optimization framework for highway infrastructure elements that integrates risk profiles (for infrastructures) and economic aspects. One main goal is to assess the necessary additional effort to satisfy performance constraints under different scenarios of climate change.
In order to be easily deployable by national road administrations (NRAs), this framework is built in such a way that it can be embedded into asset management systems that include an inventory of the asset, inspection strategies (to report element conditions and safety defects) and decision-making for funds allocation. Using the inventory of the asset and condition assessment as input, the method aims to determine degradation profiles for bridge components, retaining walls and steep embankments. The method to determine the degradation process is detailed so that any infrastructure manager can determine their own deterioration processes based on the inventory and condition assessment of their stock. Combining degradation of highway infrastructures with a risk analysis, this paper presents an optimization framework to determine optimal management strategies.Fcondition rating; highway infrastructures; optimization; risk analysis)uuid:b1d71cd95aae4a13af52b5788f379c5fDhttp://resolver.tudelft.nl/uuid:b1d71cd95aae4a13af52b5788f379c5fDefinition of Ship Outfitting Scheduling as a Resource Availability Cost Problem and Development of a Heuristic Solution TechniqueRose, C.Coenen, J. (advisor)2shipbuilding; scheduling; outfitting; optimization
indefinite.Mechanical, Maritime and Materials EngineeringMarine and Transport Technology%Ship Design, Production and Operation)uuid:bf7c3074dcb9478fb4691aecd6c8c8c1Dhttp://resolver.tudelft.nl/uuid:bf7c3074dcb9478fb4691aecd6c8c8c1CMultivariate Interactive Visualization of Data in Generative DesignChaszar, A.T. (TU Delft Design Informatics; Singapore University of Technology); von Buelow, P (University of Michigan); Turrin, M. (TU Delft Design Informatics; South China University of Technology)aAttar, Ramtin (editor); Chronis, Angelos (editor); Hanna, Sean (editor); Turrin, Michela (editor)wWe describe our work on providing support for design decision making in generative design systems producing large quantities of data, motivated by the continuing challenge of making sense of large design and simulation result datasets. Our approach provides methods and tools for multivariate interactive data visualization of the generated designs and simulation results, enabling designers not only to focus on high-performing results but also to examine sub-optimal designs' attributes and outcomes, thus discovering relationships that give greater insight into design performance and facilitate guidance of further design generation. We illustrate this by an example exploring building massing and envelope design (fenestration arrangement and external shading) with simulations of daylighting and heat gain.
We conclude that the visualization techniques investigated can help designers better comprehend interrelationships between variable parameters, constraints and outcomes, with the consequent benefits of: finding good design outcomes; verifying that simulation results are reliable; and understanding characteristics of the fitness landscape.parametric; performance design; optimization; exploration; visualization; multi-objective; multivariate; evolutionary computingsimAUD9781365058721)uuid:5b73c483bca34cad87946e02a353baf3Dhttp://resolver.tudelft.nl/uuid:5b73c483bca34cad87946e02a353baf3AEfficient optimization methods for freeway management and controlCong, Z.2De Schutter, B. (promotor); Babuska, R. (promotor)Due to the rapid growth of the human population, and jobs being distributed unevenly across locations, daily commuting is required more than ever, which in turn creates a huge socio-economic issue: traffic congestion. In order to prevent, or at least alleviate, this problem, traffic management and control is urgently required. This thesis develops three different management and control methods to improve the performance of traffic networks, with a particular focus on freeway networks, namely ant colony optimization for dynamic traffic routing; co-design of network topology and control measures; and path planning of unmanned aerial vehicles for monitoring traffic networks. Usually, solving these problems for a large-scale freeway network will result in an extremely high computational burden.
The main contribution of this thesis is the development of methods that solve these problems efficiently, with a well-balanced trade-off between performance and computation speed.ufreeway networks; optimization; dynamic traffic routing; co-design of topology and control measures; UAV path planning$Delft Center for Systems and Control)uuid:03acffecba044e43a4b4ab95b16e5553Dhttp://resolver.tudelft.nl/uuid:03acffecba044e43a4b4ab95b16e5553TEstimation of volcanic ash emissions using trajectory-based 4D-Var data assimilation4Lu, S.; Lin, X.; Heemink, A.W.; Fu, G.; Segers, A.J.Volcanic ash forecasting is a crucial tool in hazard assessment and operational volcano monitoring. Emission parameters such as plume height, total emission mass, and vertical distribution of the emission plume rate are essential and important in the implementation of volcanic ash models. Therefore, estimation of emission parameters using available observations through data assimilation could help to increase the accuracy of forecasts and provide reliable advisory information. This paper focuses on the use of satellite total-ash-column data in 4D-Var-based assimilations. Experiments show that it is very difficult to estimate the vertical distribution of effective volcanic ash injection rates from satellite-observed ash columns using a standard 4D-Var assimilation approach. This paper addresses the ill-posed nature of the assimilation problem from the perspective of a spurious relationship. To reduce the influence of a spurious relationship created by a radiate observation operator, an adjoint-free trajectory-based 4D-Var assimilation method is proposed, which estimates the vertical profile of volcanic ash from volcanic eruptions more accurately. The method seeks the optimal vertical distribution of emission rates by minimizing a reformulated cost function that computes the total difference between simulated and observed ash columns.
A 3D simplified aerosol transport model and synthetic satellite observations are used to compare the results of both the standard method and the new method.observational techniques and algorithms; satellite observations; mathematical and statistical techniques; optimization; variational analysis; models and modeling; data assimilation; applications; air pollutionAmerican Meteorological Society
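As a schematic of the estimation problem this abstract describes (entirely synthetic numbers, not the paper's transport model or cost function): if a linear observation operator H maps layer emission rates q to observed total ash columns y, then minimizing the squared column mismatch J(q) = ||y - H q||^2 reduces, for this toy case, to solving the normal equations (H^T H) q = H^T y.

```python
# Hypothetical toy inversion: recover two layer emission rates from
# three noiseless synthetic "ash column" observations via least squares.

H = [[0.8, 0.2],   # contribution of each emission layer
     [0.5, 0.5],   # to each of three observed columns
     [0.1, 0.9]]
q_true = [3.0, 7.0]                                   # true rates (made up)
y = [sum(h * q for h, q in zip(row, q_true)) for row in H]  # synthetic obs

# Normal equations (H^T H) q = H^T y for the 2-parameter problem
HtH = [[sum(H[k][i] * H[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Hty = [sum(H[k][i] * y[k] for k in range(3)) for i in range(2)]
det = HtH[0][0] * HtH[1][1] - HtH[0][1] * HtH[1][0]
q_est = [(Hty[0] * HtH[1][1] - HtH[0][1] * Hty[1]) / det,
         (HtH[0][0] * Hty[1] - Hty[0] * HtH[1][0]) / det]
```

With noiseless data and a full-column-rank H this recovers q exactly; the ill-posedness discussed in the paper corresponds to H having nearly dependent columns, where small observation errors blow up in the estimate.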
20160618&Delft Institute of Applied Mathematics)uuid:06966158ae0c45058995e1423145b9f2Dhttp://resolver.tudelft.nl/uuid:06966158ae0c45058995e1423145b9f2IUncertainties Analysis and Life Cycle Costs of Piping Mitigation Measures7Miranda, C.; Teixeira, A.; Huber, M.; Schweckendiek, T.The traditional Dutch way to deal with piping failure in river dikes is the implementation of piping berms. The disadvantage of such a measure is the required inland space. Relief wells, on the other hand, require little or no inland space, representing an attractive alternative solution. The aims of this paper are, first, to show how reliability analysis of relief-well systems can be carried out and, second, to examine the costs required to achieve a reliability target for piping failure, as set in the Netherlands. The outcomes of the analyses will help compare relief wells with piping berms in economic terms. Subsequently, a life cycle cost analysis is performed. A comparison of the net present value of the two mitigation measures is made. Finally, analyses of two case studies are performed to show the possible economic advantages of installing relief wells, resulting in relief wells as a cost-effective mitigation measure, outperforming piping berms.srelief wells; berms; probabilistic analyses; uncertainties; design; optimization; piping; uplift; costs; life cycle)uuid:fcd7ff9d4b824568b3ce9b0ca807fbb9Dhttp://resolver.tudelft.nl/uuid:fcd7ff9d4b824568b3ce9b0ca807fbb9ZParameter Identification of Consolidation Settlement Based on Multi-objective Optimization#Zheng, Y.F.; Zhang, L.L.; Zhang, J.Due to the complexity of natural ground conditions, consolidation settlement is usually difficult to predict. In this study, a Pareto multi-objective optimization-based back analysis method for consolidation settlement is presented. The model is a coupled flow and deformation model for an unsaturated soil foundation, implemented in the interactive multiphysics software environment COMSOL.
A multi-objective optimization algorithm, AMALGAM, is adopted to identify soil parameters based on multiple types of measurements. A case history of a highway trial embankment is used to demonstrate the proposed back analysis method. The observed displacement and pore-water pressures are utilized simultaneously to estimate the mechanical and hydraulic parameters of the soil. The results show that the bi-objective Pareto front exhibits a sharp rectangular pattern. When only displacement is used in back analysis, the numerical model with optimized soil parameters cannot simulate pore-water pressure very well, and vice versa. However, the back-analyzed soil parameters of the compromise solution from the bi-objective back analysis can reasonably simulate both the displacement and pore-water pressure and predict the settlement well.Gback-analysis; consolidation; settlement; multi-objective; optimization)uuid:5d098b7cc7d64d6aba6fb69c13b14169Dhttp://resolver.tudelft.nl/uuid:5d098b7cc7d64d6aba6fb69c13b14169Benefits of Coordinating Plug-In Electric Vehicles in Electric Power Systems: Through Market Prices and Use-of-System Network Charges
Momber, I.7Gómez San Román, T. (promotor); Herder, P.M. (promotor)> Both electric power systems and the transportation sector are essential constituents of modern life, enhancing social welfare, enabling economic prosperity and ultimately providing well-being to the people. However, to mitigate the adverse climatological effects of emitting greenhouse gases, a rigorous decarbonization of both industries has been set on the political agenda in many parts of the world. To this end, electrifying personal vehicles is believed to contribute to an affordable and reliable energy model with a tolerable environmental impact. Representing an inherently flexible electricity demand, plug-in electric vehicles (PEVs) promise to facilitate the integration of variable renewable energy sources. Yet, how should the PEVs' system usage be ideally coordinated to provide benefits to electric power systems in the presence of resource scarcity? The thesis develops a model of an aggregation agent as the interface to the wholesale electricity generators, which is envisaged to be in charge of procuring energy in electricity markets, exposed to uncertainty in prices, fleet availability and demand requirements. This aggregator could coordinate the PEV charging either with direct load control (DLC), i.e., sending power set points to the individual vehicles, or with indirect load control (ILC), i.e., by sending retail price signals. Contributing to the technical literature, this thesis has, on the one hand, proposed a two-stage stochastic linear program for the PEV aggregator's day-ahead and balancing decisions with DLC over a large fleet of PEVs, while accounting for conditional value at risk in the objective function.
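For the risk measure mentioned in this abstract: for equiprobable discrete cost scenarios, conditional value at risk (CVaR) at level alpha is the average of the worst (1 - alpha) share of outcomes. The thesis embeds CVaR inside a two-stage stochastic program; the sketch below, with invented scenario costs, only illustrates the measure itself in its simplest discrete form.

```python
# Hypothetical illustration of CVaR for equiprobable cost scenarios
# (higher cost = worse outcome). This simple "average of the worst
# tail scenarios" form ignores fractional-tail corrections.

def cvar(costs, alpha):
    """Average of the worst (1 - alpha) fraction of scenario costs."""
    tail_n = max(1, round(len(costs) * (1.0 - alpha)))
    worst = sorted(costs, reverse=True)[:tail_n]
    return sum(worst) / len(worst)

# Ten made-up procurement-cost scenarios for an aggregator
scenario_costs = [100.0, 120.0, 90.0, 200.0, 110.0,
                  300.0, 95.0, 105.0, 115.0, 125.0]
```

At alpha = 0 the measure is just the expected cost; as alpha rises it focuses on ever fewer of the most expensive scenarios, which is why penalizing CVaR in the objective makes the day-ahead decisions more conservative.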
On the other hand, it has put forward a formulation of ILC coordination as a bilevel optimization problem given by mathematical programming with equilibrium constraints, in which 1) the upper-level decisions on retail tariffs and optimal bidding in electricity markets are subject to 2) the lower-level client-side optimization of PEV charging schedules. These decisions may respect a potential discomfort that could arise when PEV users have to deviate from their preferred charging schedule. Set in an existing, real medium-voltage distribution network with urban characteristics and spatial PEV mobility, network UoS tariffs for capacity have been applied to both DLC and ILC scheduling by a PEV aggregator..electric vehicles; power systems; optimization!Technology, Policy and Management)uuid:eb5a59c03fcc49d790c63d8e88f3dcb2Dhttp://resolver.tudelft.nl/uuid:eb5a59c03fcc49d790c63d8e88f3dcb2iMulti-criteria optimization framework for road infrastructures under different scenarios of climate changemOrcesi, A.; Chemineau, H.; Van Gelder, P.H.A.J.M.; Van Erp, H.R.N.; Lin, P.H.; Obel Nielsen, K.; Pedersen, C.Soptimization; bridge management; Markov chains; IQOA scoring system; climate changeIABSE Values Technology and Innovation)uuid:86c8c0027d1e410fab2e36122120e797Dhttp://resolver.tudelft.nl/uuid:86c8c0027d1e410fab2e36122120e797RBuckling and first-ply failure optimization of stiffened variable angle tow panelsNJeliazkov, M.; Sardar Abadi, P.M.; Lopes, C.S.; Abdalla, M.M.; Peeters, D.M.J.A computationally efficient two-level design methodology is developed for the optimization of stiffened compression-loaded panels having variable stiffness panels as their skin. In the first step, extensive bay panel optimization is performed using a Rayleigh-Ritz energy method coupled with a specialized Genetic Algorithm. Results in agreement with lamination parameter optima are achieved by employing distinct steered-fiber configurations in different layers.
Additionally, a local equivalent laminate robustness constraint is applied, and it is shown to have a detrimental effect on the buckling performance of variable stiffness layups. The optimal results obtained are used to characterize the plate buckling response in laminate stiffness space. An approximate analytical model is developed to analyze the buckling-related failure modes of the stiffened panel. Panels are optimized for a variety of configurations and loads. Variable stiffness designs achieve up to 20% weight reduction compared to their straight-fiber counterparts, while 57% improvements are possible when a local 10% rule is enforced. Varying the fiber orientation is also shown to increase the weight-optimal stiffener spacing. The results indicate that the application of the concept is most promising for lightly loaded configurations, which are driven primarily by buckling and not material failure. Alternatively, high weight savings are achieved in cases where large stiffener spacing is enforced by non-performance-related requirements.Fbuckling; optimization; stiffened panels; variable stiffness laminatesICCMAerospace Engineering.Aerospace Structures & Computational Mechanics)uuid:8a736606a00c461899551ec44d87471eDhttp://resolver.tudelft.nl/uuid:8a736606a00c461899551ec44d87471e,Damage resistance of dispersed-ply laminates\Sardar Abadi, P.M.; Jeliazkov, M.; Sebaey, T.A.; Lopes, C.S.; Abdalla, M.M.; Peeters, D.M.J.*This paper presents the design procedure of a quasi-isotropic (QI) laminate employing dispersion of ply orientations. The goal is to improve the damage resistance of a laminate under low-velocity impact (LVI). The LVI is treated as a quasi-static loading and, instead of a plate, a laminated beam is considered. This simplifies the problem to an interlaminar shear (ILS) test.
Although the specimen might experience several failure mechanisms, only delamination, which drastically influences the load-carrying capability under compression after impact (CAI), is considered here. By studying the interlaminar shear stresses through the thickness of the laminate, crack initiation can be inspected in every layer using a quadratic initiation criterion (QIC). Finally, employing a modified ant colony optimization (ACO) algorithm (a two-pheromone ACO algorithm), a fully dispersed QI laminate is designed. The domain of the orientation angles is between −85° and 90° with a 5° interval. The results showed that the interface angles do not present a decisive influence on the crack onset. On the other hand, the dispersion tends to place the largest possible angles near the middle of the laminate, to minimize the maximum value of the QIC, and some small angles on the outside to provide enough bending stiffness.Ndamage resistance; dispersed-ply laminates; optimization; ant colony algorithm Aerospace Structures & Materials)uuid:0c2e04b608c143e7805118c5344c124fDhttp://resolver.tudelft.nl/uuid:0c2e04b608c143e7805118c5344c124fCOptimal rerouting of short-turning trains during track obstructions%Ghaemi, N.; Goverde, R.M.P.; Cats, O.Railway traffic controllers have only limited decision support to deal with disruptions. The aim of this research is to provide an algorithm to compute conflict-free routes for trains that cannot continue their planned operation due to a track obstruction. In case of a complete blockage, trains need to short-turn at the closest possible station to the disturbed area and provide services in the opposite direction. In such cases, a new timetable is needed to provide a plan during the disruption. In this paper, a rescheduling and rerouting model is used to find feasible route plans with a focus on short-turnings. The model is applied to a corridor of the Dutch railway network.
The results show that such an algorithm can provide real-time solutions for traffic controllers during a disruption. In addition, it is shown that rerouting the short-turned trains can significantly decrease the delay propagation to the neighboring stations..railway disruption; rescheduling; optimization!Civil Engineering and GeosciencesTransport & Planning)uuid:f0483aa32a1b4700b78e12dbab6c6a2bDhttp://resolver.tudelft.nl/uuid:f0483aa32a1b4700b78e12dbab6c6a2bTBuckling optimization of steering stiffeners for grid-stiffened composite structuresWang, D.; Abdalla, M.M.Grid-stiffened composite structures, where the skin is stiffened by a lattice of stiffeners, not only allow for a significant reduction in structural weight but are also competitive in terms of structural stability and damage tolerance compared with sandwich composite structures. As the development of Automated Fiber Placement (AFP) technology matures, integrated construction of skin and stiffeners is easily manufacturable. Optimization of grid-stiffened structures is needed to fully take advantage of the expanded design possibilities. In this paper, a steering/curved stiffener layout is optimized for grid-stiffened composite structures in order to enhance the structural buckling resistance. A homogenization method is used to calculate the equivalent material properties. Global and local buckling loads are determined by a global/local coupled strategy. A linear variation of stiffener angles is assumed, resulting in the formation of a locally rhombic lattice pattern by the stiffeners. Moreover, manufacturing constraints are considered in the optimization by setting a lower bound on the stiffener spacing. Since the calculation is implemented on an equivalent model with a fixed mesh, it is possible to use a gradient-based optimization algorithm.
A comparison between the performance of grid-stiffened composite structures with curved stiffeners, with straight stiffeners, and with variable-stiffness skins with curved fibers reveals the potential of curved stiffener configurations in improving structural efficiency._curved stiffeners; optimization; grid-stiffened composite structures; global and local buckling)uuid:37e9f5548996491fb15af6a30faabb9dDhttp://resolver.tudelft.nl/uuid:37e9f5548996491fb15af6a30faabb9dA 24-GHz Radar Receiver in CMOS
Kwok, K.C.Long, J.R. (promotor)$This thesis investigates the system design and circuit implementation of a 24-GHz-band short-range radar receiver in CMOS technology. The propagation and penetration properties of EM waves offer the possibility of non-contact remote sensing and through-the-wall imaging of distant stationary or moving objects. The feasibility of realizing these concepts in hardware with a small form factor could accelerate commercialization and initiate new product opportunities. Minimizing the receiver power consumption to the 15 mW range enables 4 hours of continuous operation from a 1.2-gram button-sized lithium battery. CMOS technology has the potential for realization of both the RF transceiver and the baseband processor in a single chip. An understanding of the functional requirements is a prerequisite for system optimization. The 15 mW power budget necessitates the continuous nature of the FMCW radar configuration, which obviates the requirement for a power-hungry transmitting amplifier. FMCW radar in short-range applications benefits from the phase noise correlation between transmitted and received waveforms, which may be exploited to lower the power consumption of the LO generation circuits. A choice for the heterodyne receiver architecture mitigates erroneous detection due to second-order intermodulation distortion caused by interfering radar transmitters nearby, accuracy degradation due to frequency pulling of the ultra-wideband VCO, and signal quality degradation due to flicker noise generated by CMOS transistors. The power dissipation and hardware overhead of a heterodyne receiver are relaxed by proper frequency planning and elimination of the image-reject filter due to the frequency-chirping property of the FMCW signal. A frequency downconverter for the radar receiver is realized by integrating an LNA, a Gilbert-type mixer, and a VCO running at the carrier frequency.
A varactorless frequency-tuning scheme is proposed for the VCO, which breaks through the conventional trade-offs seen in continuous and wideband mm-wave frequency generation between capacitance tuning ratio, quality factor, and operating frequency in CMOS design. Inductive frequency tuning is enabled by a transformer resonant tank which exploits the gyration (90 degrees) across the input/output terminal voltages of a transconductor. The parallel resonant frequency is controlled by sweeping the sign and magnitude of the transconductance. The VCO is frequency-agile, and is continuously tunable by altering the DC bias current of the transconductance cell. An adaptive trade-off between frequency tuning and power consumption is possible. Two VCO test circuits are reported in this thesis. (1) A proof of concept in 0.13-um RF-CMOS consumes 43 mW from a 1.2-V supply. The frequency coverage is from 23.2 GHz to 29.4 GHz (23.6% tuning range) and the phase noise is -92.6 dBc/Hz at 1-MHz frequency offset. (2) A miniaturized prototype is implemented in 90-nm CMOS for the radar receiver. It consumes 5.7 mW from a 1.0-V supply. Its maximum frequency range is from 18.6 GHz to 21.2 GHz (13.1% tuning range), with the phase noise measured at 1-MHz frequency offset. Operation of a CMOS LNA in the moderate-inversion region and at a frequency approaching the transistor's operational limit deteriorates its power gain and noise figure. A two-step LNA optimization algorithm is proposed in this thesis which addresses both the device and circuit levels. Transistor dimensions and biasing are set for optimal power gain, noise figure, linearity, bandwidth, and matching-network loss. Partitioning the limited power budget across multiple gain stages maximizes the overall power gain. Optimizing the transistor's interaction with bilateral power flows in a multi-stage amplifier is facilitated by Smith-chart-based visualization and a computer-aided design methodology. The advantages of this methodology are demonstrated by design examples.
Current feedback by a 3-port transformer in a cascode LNA is proposed in this thesis in order to increase the power gain and lower the noise figure under low-power conditions. The feedback modifies the relationship between the input-referred voltage and current noise sources of a common-gate MOS transistor, and thereby fulfills the internal interface impedance conditions in the cascode LNA for optimal power-gain and noise-figure matching. A two-stage, single-ended, current-feedback cascode LNA prototype is realized in 90-nm CMOS. Physical implementation with multiple magnetic components, signal integrity associated with the current return path, and circuit simulations employing an S-parameter model are addressed and emphasized in the LNA development. Consuming just 3 mW from a 1-V supply, the LNA achieves 14.5-dB peak power gain and a 3-dB gain bandwidth of 5.0 GHz. The noise figure varies from 4.9 dB to 5.6 dB across the 22-GHz to 26-GHz RF bandwidth, and the IIP3 is 6.0 dBm. The frequency downconverter is realized by integrating the inductively tuned VCO and the current-feedback LNA with a differential Gilbert-type mixer. Isolation of the LNA's single-ended current return path from the rest of the receiver is maintained by an 8-port transformer balun preceding the mixer. This receiver RF front-end draws 10.7 mW from a 1.0-V supply, and delivers 12.6-dB peak power gain and a 3-dB bandwidth of 1.25 GHz. The noise figure varies from 10.6 dB to 11.5 dB across the RF bandwidth, and the IIP3 of the downconverter is 12.1 dBm.
radar; receiver; low-noise amplifier; mixer; voltage-controlled oscillator; optimization
Microelectronics & Computer Engineering
uuid:753ec916-cfc1-4a02-a1ea-bf3f39d310a5
http://resolver.tudelft.nl/uuid:753ec916-cfc1-4a02-a1ea-bf3f39d310a5
On the nonlinear thermomechanical behavior and delamination of conductive adhesives
Öztürk, B.
Ernst, L.J. (promotor); Jansen, K.M.B. (promotor)
Adhesives based on thermoset polymers are used as thermal and electrical interfaces. These adhesives are filled with different particles in order to meet the heat-transfer and electrical requirements. Due to the reliability requirements of automotive applications, they are required to have excellent bulk and interface properties. Finite element analysis is used to locate stress and strain concentrations and to assess where the material is expected to fail. However, the accuracy of the design calculations depends on the validity of the material models used in the analysis. In this thesis, the limitations of the linear viscoelastic material model are discussed and a nonlinear viscoelastic material model is proposed. Although the experiments and the nonlinear viscoelastic modeling are illustrated for an adhesive, qualitatively similar results are also obtained for commercial molding compounds. For all test cases, the nonlinear viscoelastic material model is shown to improve the prediction of the experimental results compared to the linear viscoelastic material model. This will allow designers to perform quantitative FE simulations of adhesive joints. As adhesives join different materials together, the interface between the adjacent materials is where delamination-related failure is most likely to occur. Since delamination can also initiate other failure mechanisms, such as electrical, thermal or mechanical failure mechanisms, the assessment of the risk of delamination has become an integral part of the reliability approach. Only very few studies focus on delamination of adhesive bonds. In this thesis, a new methodology to produce delamination specimens from existing products is described.
Although the method is illustrated for two interfaces, wherein a single step of the production process is added, the approach makes it possible to examine different interfaces which have the same processing properties as in the real product. The LTCC/adhesive and Alloy 42/adhesive interfaces are experimentally investigated for (near) mode I loading conditions. Finite element analysis (e.g., the J-integral) is used to extract the energy release rate, and the established critical value is implemented for cohesive zone modeling. The presented approach will allow delamination studies of interfaces between brittle materials (such as LTCC). The increased design complexity and the demand for reduced product development times require fast pre-qualification methods to assess the reliability of an adhesive bond and to obtain qualitative comparisons between different adhesive choices (e.g., material changes, surface preparations, etc.). In this thesis, a novel lap shear specimen, obtained by optimizing the standard geometry, is tested under cyclic loading. The test results, their meaning and their reliability are discussed. Suggestions are made to further improve the approach.
The presented "lap-shear" test approach, on test samples from genuine products, can be used to assess the stability of adhesive joints (for example, a Cu / lamination foil / Cu connection structure (not yet published), which can be used in next-generation power modules).
delamination testing; finite element analysis; adhesives; conductive adhesives; epoxy; lap shear; nonlinear viscoelasticity; linear viscoelasticity; mode mixity; tensile testing; constitutive modeling; product-based experimentation; virtual DoE; optimization
Precision and Microsystems Engineering
uuid:4095d737-9316-49d0-a949-e28d597fe9f1
http://resolver.tudelft.nl/uuid:4095d737-9316-49d0-a949-e28d597fe9f1
Optical coherence tomography complemented by hyperspectral imaging for the study of protective wood coatings
Dingemans, L.M.; Papadakis, V.; Liu, P.; Adam, A.J.L.; Groves, R.M.
Optical coherence tomography (OCT) is a contactless and non-destructive testing (NDT) technique based on low-coherence interferometry. It has recently become a popular NDT tool for evaluating cultural heritage. In this study, protective coatings on wood and their penetration into the wood structure were measured with a customized infrared fiber-optic OCT instrument. In order to enhance the understanding of the OCT measurements of coatings on real wooden samples, the measuring and analysis methodology was optimized by developing an averaging approach and by post-processing the data. The collected information was complemented by data obtained with hyperspectral imaging, allowing data from local OCT A-scans to be used in mapping the coating thicknesses over larger areas.
optical coherence tomography; non-destructive testing; hyperspectral imaging; wood coatings; averaging; optimization; thickness mapping
uuid:7b1cdc6f-3fee-4ada-bd59-fb608bf0ca42
http://resolver.tudelft.nl/uuid:7b1cdc6f-3fee-4ada-bd59-fb608bf0ca42
Model-based Optimization of Oil Recovery: Robust Operational Strategies
Van Essen, G.M.
Jansen, J.D.
(promotor); Van den Hof, P.M.J. (promotor)
The process of depleting an oil reservoir can be cast as an optimal control problem with the objective of maximizing economic performance over the life of the field. Despite its large potential, life-cycle optimization has not yet found its way into operational environments. The objective of this thesis is to improve the operational applicability of model-based optimization of oil recovery. The reluctance of oil and gas companies to adopt this technology in their operational environments can mainly be attributed to the large uncertainties that come into play when optimizing production over the entire life of a field and, in effect, the lack of faith that exists in the available methods and models. These uncertainties are of varying nature and originate from different sources. This leads to the main research question of this thesis: can the performance of model-based life-cycle optimization of oil and gas production in realistic circumstances be improved by addressing uncertainty in the optimization problem? In this thesis, two approaches to address this research question are presented, related to the choice for a fixed or adaptive operational strategy. For a fixed strategy, three methods are described: hierarchical optimization, robust optimization, and integrated dynamic optimization and feedback control. For adaptive operational strategies, two aspects are investigated in a more exploratory setting: the combination of different data sources and the frequency of sequential model updating and re-optimization. The methods laid out in this thesis provide improved economic life-cycle performance under uncertainty in a number of examples. While presented as separate methods, they are not mutually exclusive and could be combined into a single workflow.
Although all the examples involve waterflooding as the recovery mechanism, the scope for life-cycle optimization may be larger for enhanced (tertiary) oil recovery methods because of the generally higher upside and downside potential of these techniques. Application of the methods to a real petroleum reservoir is still required to evaluate their merit in a truly realistic environment.
oil recovery; optimization; waterflooding; reservoir simulation
2015-08-05
Geoscience & Engineering
uuid:d8961a8a-d99d-47d5-8eab-e1f7fb486e63
http://resolver.tudelft.nl/uuid:d8961a8a-d99d-47d5-8eab-e1f7fb486e63
Early motor learning changes in upper-limb dynamics and shoulder complex loading during handrim wheelchair propulsion
Vegter, R.J.K.; Hartog, J.; De Groot, S.; Lamoth, C.J.; Bekker, M.J.; Van der Scheer, J.W.; Van der Woude, L.H.V.; Veeger, H.E.J.
Background: To propel in an energy-efficient manner, handrim wheelchair users must learn to control the bimanually applied forces onto the rims, preserving both speed and direction of locomotion. Previous studies have found an increase in mechanical efficiency due to motor learning associated with changes in propulsion technique, but it is unclear in what way the propulsion technique impacts the load on the shoulder complex. The purpose of this study was to evaluate mechanical efficiency, propulsion technique and load on the shoulder complex during the initial stage of motor learning. Methods: 15 naive able-bodied participants received 12 minutes of uninstructed wheelchair practice on a motor-driven treadmill, consisting of three 4-minute blocks separated by two minutes of rest. Practice was performed at a fixed belt speed (v = 1.1 m/s) and a constant low-intensity power output (0.2 W/kg). Energy consumption, kinematics and kinetics of propulsion technique were continuously measured. The Delft Shoulder Model was used to calculate net joint moments, muscle activity and glenohumeral reaction force. Results: With practice, mechanical efficiency increased and propulsion technique changed, reflected by a reduced push frequency and increased work per push, performed over a larger contact angle, with more tangentially applied force and reduced power losses before and after each push.
Contrary to our expectations, the above-mentioned propulsion technique changes were found together with an increased load on the shoulder complex, reflected by higher net moments, a higher total muscle power and higher peak and mean glenohumeral reaction forces. Conclusions: It appears that the early stages of motor learning in handrim wheelchair propulsion are indeed associated with improved technique and efficiency due to optimization of the kinematics and dynamics of the upper extremity. This process comes at the cost of increased muscular effort and mechanical loading of the shoulder complex. This seems to be associated with an unchanged, stable function of the trunk and could be due to the early learning phase, where participants still have to learn to effectively use the full movement amplitude available within the wheelchair-user combination. Apparently, whole-body energy efficiency has priority over mechanical loading in the early stages of learning to propel a handrim wheelchair.
(MeSH); biomechanics; motor learning; rehabilitation; optimization; wheeled mobility
BioMed Central
Biomechanical Engineering
uuid:050c1e0e-89f3-4ee3-a1d0-907bb3a31a3a
http://resolver.tudelft.nl/uuid:050c1e0e-89f3-4ee3-a1d0-907bb3a31a3a
Water-gas shift (WGS) Operation of Pre-combustion CO2 Capture Pilot Plant at the Buggenum IGCC
Van Dijk, H.A.J.; Damen, K.; Makkee, M.; Trapp, C.
In the Nuon/Vattenfall CO2 Catch-up project, a pre-combustion CO2 capture pilot plant was built and operated at the Buggenum IGCC power plant, the Netherlands. The pilot consists of sweet water-gas shift, physical CO2 absorption and CO2 compression. The technology performance was verified and validated models were obtained. This paper describes the validation of a WGS reactor model and the excellent catalyst resistance to carbiding during testing at steam/CO = 1.5 mol mol-1.
Model-based optimization shows that, compared to conventional operation at steam/CO = 2.65, applying steam/CO = 1.5 leads to a 10% lower CO2 capture penalty of 1155 MJ_electric/tCO2, albeit at a decreased optimum CO2 capture efficiency of 78.5% versus 87.5%.
pre-combustion CO2 capture; pilot plant; WGS section; reactor model; lowered steam/CO ratio; optimization
Elsevier
Applied Sciences
ChemE/Chemical Engineering
uuid:8eb074fa-3d47-4373-bf01-ffcee2a4612c
http://resolver.tudelft.nl/uuid:8eb074fa-3d47-4373-bf01-ffcee2a4612c
Optimal Trajectory Planning and Train Scheduling for Railway Systems
Wang, Y.
De Schutter, B. (promotor); Van den Boom, T.J.J. (promotor)
Safe, fast, punctual, energy-efficient, and comfortable rail traffic systems are important for rail operators, passengers, and the environment. Due to increasing energy prices and environmental concerns, the reduction of energy consumption has become one of the key objectives for railway systems. On the other hand, with the increase of passenger demand in urban rail transit systems of large cities, it is important to transport passengers safely and efficiently. The main focus of the research presented in this thesis is to determine and develop mathematical models and solution approaches to shorten the travel time of passengers and to reduce energy consumption in railway systems. More specifically, the travel time of passengers has been considered in train scheduling, where passenger demands of urban rail transit systems are included. Energy efficiency has been taken into account both in train scheduling and in the operation of trains. The main topics investigated in the thesis can be summarized as follows. Optimal trajectory planning for a single train: we have considered the optimal trajectory planning problem for a single train under various operational constraints, which include the varying line resistance, variable speed restrictions, and the varying maximum traction force.
The objective function of the optimization problem is a trade-off between energy consumption and riding comfort. We have proposed two approaches to solve this optimal control problem, namely a mixed-integer linear programming (MILP) approach and the pseudospectral method. Simulation results comparing the MILP approach, the pseudospectral method, and a discrete dynamic programming approach have shown that the pseudospectral method results in the best control performance, but that if the required computation time is also taken into consideration, the MILP approach yields the best overall performance. Optimal trajectory planning for multiple trains: the optimal trajectory planning problem for multiple trains under fixed-block and moving-block signaling systems has been investigated. Four solution approaches have been proposed: the greedy MILP approach, the simultaneous MILP approach, the greedy pseudospectral approach, and the simultaneous pseudospectral approach. Simulation results have shown that, compared to the greedy approach, the simultaneous approach yields better control performance but requires a higher computation time. In addition, the end-time violations of the MILP approach are slightly larger than those of the pseudospectral method, but the computation time of the MILP approach is one to two orders of magnitude smaller than that of the pseudospectral method. Train scheduling for a single line based on OD-independent passenger demands: the train scheduling problem for an urban rail transit line has been considered with the aim of minimizing the total travel time of passengers and the energy consumption of the operation of trains. The departure times, running times, and dwell times of the trains have been optimized based on origin-destination-independent (OD-independent) passenger demands. We have proposed a new iterative convex programming (ICP) approach to solve this train scheduling problem.
The performance of the ICP approach has been compared with alternative approaches, such as nonlinear programming approaches, a mixed-integer nonlinear programming (MINLP) approach, and an MILP approach. The ICP approach has been shown, via a case study, to provide the best trade-off between performance and computational complexity for the train scheduling problem. Train scheduling for a single line based on OD-dependent passenger demands: we have adopted a stop-skipping strategy to further reduce the passenger travel time and the energy consumption based on origin-destination-dependent (OD-dependent) passenger demands in an urban rail transit line. The train scheduling problem with stop-skipping results in a mixed-integer nonlinear programming problem, and we have proposed a bi-level optimization approach and an efficient bi-level optimization approach to solve it. Simulation results show that the stop-skipping strategy outperforms the all-stop strategy. Moreover, the bi-level approach yields better control performance than the efficient bi-level approach, but at the cost of a higher computation time. Train scheduling for networks with time-varying OD-dependent passenger demands: for the train scheduling of urban rail transit networks, we have developed an event-driven model in which the time-varying OD-dependent passenger demands, the splitting of passenger flows, and the passenger transfer behavior at transfer stations are included. The resulting train scheduling problem is a real-valued nonlinear non-convex problem, which can be solved by gradient-free nonlinear programming approaches (e.g., pattern search), gradient-based nonlinear programming approaches (e.g., sequential quadratic programming (SQP)), genetic algorithms, or an MILP approach.
We have applied an SQP method and a genetic algorithm to solve the train scheduling problem for a case study, the results of which have shown that the SQP method provides a better trade-off between control performance and computational complexity than the genetic algorithm.
trajectory planning; train scheduling; passenger demand; urban rail transit; optimization
uuid:9bb17c02-80c4-4e2a-917b-6b3eed182132
http://resolver.tudelft.nl/uuid:9bb17c02-80c4-4e2a-917b-6b3eed182132
How to achieve aircraft availability in the MRO&U triad
Kaelen, J.W.E.N.
Santema, S.C. (promotor)
The financial crisis and the introduction of low-budget companies have brought major changes to the air freight community. Competition became stronger and cost control became more important. This led to new ways of organizing aircraft Maintenance, Repair, Overhauls and Upgrades (MRO&U), which became performance-oriented. For the triad of participants in the maintenance process, the Original Equipment Manufacturer (OEM), the maintainers and the operator, this meant that they had to change their way of working, their processes and their culture. The focus shifted to delivering performance, i.e., aircraft availability. The objective of the current research is to contribute to the development of a theory on how to achieve the performance objective of aircraft availability as an outcome of the MRO&U triad collaboration. This qualifies the current research as theory-building research. The outcome of the present research is a model on how to improve and optimize aircraft availability as an outcome of collaboration in the aircraft MRO&U triad. This model on how to improve aircraft availability in the MRO&U process contributes to the increase of the turnover per aircraft, and hence the financial performance of airlines, as well as to the optimization of the MRO&U process.
This research is therefore of interest to airline operators, aircraft maintainers and aircraft OEMs.
aircraft; maintenance; optimization; MRO&U; collaboration; performance
Industrial Design Engineering
PIM
uuid:c3faec41-007f-4b2c-acbe-eb2eb89efd32
http://resolver.tudelft.nl/uuid:c3faec41-007f-4b2c-acbe-eb2eb89efd32
High-Level Power Estimation and Optimization of DRAMs
Chandrasekar, K.
Goossens, K.G.W. (promotor)
Embedded systems have become an integral part of our lives in the last few years in multifarious ways, be it in mobile phones, portable audio players, smart watches or even cars. Most embedded systems fall under the category of consumer electronics, such as televisions, mobile devices, and wearable electronics. With several players competing in this market, manufacturers of embedded systems continue to add more functionality to these devices to make them more user-friendly, and often equip them with a very high-resolution display and graphics support, and better computing and Internet capabilities. Unfortunately, they are often constrained by tight power/energy budgets, since battery capacity does not improve at the same rate as computing power. While there is clearly much progress to be made in harnessing all the possibilities of embedded systems, limitations in battery capacity, thermal constraints and power/energy budgets hinder this progress. Although technology scaling has traditionally addressed both the power minimization and high-performance requirements, with Moore's law nearing its limits, the development of energy-efficient system designs has become critically important. Thus, to be able to continue to provide new and improved features in embedded systems, design-time and run-time power management and minimization hold the key. As a consequence, power optimization has become one of the most defining aspects of designing modern embedded systems.
To design such high-performance and energy-efficient embedded systems, it is extremely important to address two basic issues: (1) accurate estimation of the power consumption of all system components during early design stages, and (2) deriving power optimization solutions that do not negatively impact system performance. In this thesis, we aim to address these two issues for one of the most important components in modern embedded systems: DRAM memories. Towards this, we propose a high-precision DRAM power model (DRAMPower) and a set of performance-neutral DRAM power-down strategies. DRAMPower is a high-level DRAM power model that performs high-precision modeling of the power consumption of different DRAM operations, state transitions and power-saving modes at the cycle-accurate level. To further improve the accuracy of DRAMPower's power/energy estimates, we derive better-than-worst-case and realistic measures for the JEDEC current metrics instead of vendor-provided worst-case measures from device datasheets. Towards this, we modify a SPICE-based circuit-level DRAM architecture and power model and derive better-than-worst-case current measures under nominal operating conditions, applicable to a majority of DRAM devices (>97%) with any given configuration (capacity, data width and frequency). Besides these better-than-worst-case current measures, we also propose a generic post-manufacturing power and performance characterization methodology for DRAMs that can help identify the realistic current estimates and an optimized set of timing measures for a given DRAM device, thereby further improving the accuracy of the power and energy estimates for that particular DRAM device.
To optimize DRAM power consumption, we propose a set of performance-neutral DRAM power-down strategies coupled with a power management policy that, for any given use-case (access granularity, page policy and memory type), achieves significant power savings without impacting its worst-case performance (bandwidth and latency) guarantees. We verify the pessimism in DRAM currents and four critical DRAM timing parameters as provided in the datasheets by experimentally evaluating 48 DDR3 devices of the same configuration. We further derive an optimal set of timings using the performance characterization algorithm, at which the DRAM can operate successfully under worst-case runtime conditions without increasing its energy consumption. We observed up to 33.3% and 25.9% reduction in DRAM read and write latencies, respectively, and 17.7% and 15.4% improvement in energy efficiency. We validate the DRAMPower model against a circuit-level DRAM power model and verify it against real power measurements from hardware for different DRAM operations. We observed differences of up to 18% in power estimates, with an average accuracy of 97%. We also evaluated the power-management policy and power-down strategies and observed significant energy savings (close to the theoretical optimum) at a very marginal average-case performance penalty, without impacting any of the original latency and bandwidth guarantees.
DRAM; power; energy; estimation; optimization; modeling; variation
uuid:3beba71b-7e19-4277-bdd7-752c43f867af
http://resolver.tudelft.nl/uuid:3beba71b-7e19-4277-bdd7-752c43f867af
Cost optimal river dike design using probabilistic methods
Bischiniotis, K.; Kanning, W.; Jonkman, S.N.
This research focuses on the optimization of river dikes using probabilistic methods. Its aim is to develop a generic method that automatically estimates the failure probabilities of many river dike cross-sections and gives the one with the least cost, taking into account the boundary conditions and the requirements that are set by the user.
Even though there are many mechanisms that may lead to dike failure, the literature study showed that the failure mechanisms that contribute most to the failure of typical Dutch river dikes are overflowing, piping and inner slope stability. Based on these, the most important design variables of the dike cross-section dimensions are set and, following probabilistic design methods, the probability of failure of many different dike cross-sections is estimated, taking into account the above-mentioned failure mechanisms. Different cross-section configurations may all comply with a set target probability of failure. Of these, the cross-section that results in the lowest cost is considered the optimal one. This approach is applied to several representative dikes, each of which gives a different optimal design, depending on the local boundary conditions. The method shows that the use of probabilistic optimization gives more cost-efficient designs than the traditional partial-safety-factor designs.
river dike; optimization; probabilistic design; cross-section; failure probability
Brazilian Water Resources Association and Acquacon Consultoria
Hydraulic Engineering
uuid:9dff055c-eb6d-4005-a052-fce8aaeea792
http://resolver.tudelft.nl/uuid:9dff055c-eb6d-4005-a052-fce8aaeea792
Numerical Methods for the Optimization of Nonlinear Residual-Based Subgrid-Scale Models Using the Variational Germano Identity
Maher, G.D.; Hulshoff, S.J.
The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations to obtain values for the parameters of subgrid-scale models that are nonlinear in their coefficients. Specifically, the Newton-Raphson method is employed.
A least-squares minimization formulation of the Germano Identity is developed to resolve issues that occur when the residual is positive and negative over different regions of the domain. In this case, a Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is used to solve the minimization problem. The developed method is applied to the one-dimensional unsteady forced Burgers equation and the two-dimensional steady Stokes equations. It is shown that the Newton-Raphson method and the BFGS algorithm generally solve, or minimize the residual of, the Germano relations in a relatively small number of iterations. The optimized subgrid-scale models are shown to outperform standard SGS models with respect to the L2 error. Additionally, the nonlinear SGS models tend to achieve lower L2 errors than the linear models.
subgrid-scale model; variational multiscale method; variational Germano identity; optimization; turbulence
CIMNE
Aerodynamics, Wind Energy & Propulsion
uuid:4f9ed7f0-05e1-4cbc-8992-d91dc6c914d7
http://resolver.tudelft.nl/uuid:4f9ed7f0-05e1-4cbc-8992-d91dc6c914d7
Validation and Optimization of a Design Formula for Stable Geometrically Open Filter Structures
Van de Sande, S.A.H.; Uijttewaal, W.S.J.; Verheij, H.J.
Granular filters are used for protection against scour and erosion of base material. For proper functioning, it is necessary that no material is transported across the interfaces between the filter structure, the subsoil and the water flowing above the filter structure. Different types of granular filters can be distinguished; this paper focuses on stable geometrically open filter structures under current attack. Hoffmans (2012) developed a design formula for stable geometrically open filters. This paper presents the validation and an optimization of the design formula based on performed model tests. It is shown that the current design formula is too conservative.
The proposed improvements allow for a wider range of applicability.
filter; granular filter; geometrically open filter; open filter; interface stability; bed protection; design formula; stability; optimization; ICCE 2014
Coastal Engineering Research Council
uuid:cb6544e8-02f9-403c-8540-698b7af9a185
http://resolver.tudelft.nl/uuid:cb6544e8-02f9-403c-8540-698b7af9a185
Rolling horizon predictions of bus trajectories
Oshyani, M.F.; Cats, O.
Bus travel times are subject to inherent and recurrent uncertainties. A real-time prediction scheme regarding how the transit system evolves will potentially facilitate more adaptive operations as well as more adaptive passenger decisions. This scheme should be tractable, sufficiently fast and reliable enough to be used in real-time applications. For this purpose, a heuristic hybrid scheme for departure time estimation is proposed in this study. The prediction generated by the proposed hybrid scheme consists of three travel time components: schedule, instantaneous and historical data sources. A genetic algorithm is applied in order to specify the contribution of each data source component to the prediction scheme. The proposed scheme was applied to a trunk bus line in Stockholm, Sweden. In addition, the currently deployed scheme was replicated in order to compare the performance of both schemes. The results suggest that the proposed scheme reduces the overall mean absolute error by almost 20%. Moreover, the proposed scheme provides better predictions except for very long-term predictions, where both schemes yield the same performance.
prediction; bus departure time; optimization; travel time; genetic algorithm
National Technical University of Athens (NTUA)
uuid:650ec0d0-4613-4dae-96b1-1f685dff0e60
http://resolver.tudelft.nl/uuid:650ec0d0-4613-4dae-96b1-1f685dff0e60
Automatic Hardware Generation for Reconfigurable Architectures
Nane, R.
Bertels, K.L.M.
(promotor)
Reconfigurable Architectures (RA) have been gaining popularity rapidly in the last decade for two reasons. First, processor clock frequencies reached threshold values past which power dissipation becomes a very difficult problem to solve. As a consequence, alternatives were sought to keep improving system performance. Second, because Field-Programmable Gate Array (FPGA) technology substantially improved (e.g., the increase in transistors per mm2), system designers were able to use them for an increasing number of (complex) applications. However, the adoption of reconfigurable devices brought with it a number of related problems, of which the complexity of programming can be considered an important one. One approach to programming an FPGA is to implement automatically generated Hardware Description Language (HDL) code from a High-Level Language (HLL) specification. This is called High-Level Synthesis (HLS). The availability of powerful HLS tools is critical to managing the ever-increasing complexity of emerging RA systems and to leveraging their tremendous performance potential. However, current hardware compilers are not able to generate designs that are comparable in terms of performance with manually written designs. Therefore, to reduce this performance gap, research on how to generate hardware modules efficiently is imperative. In this dissertation, we address the tool design, integration, and optimization of the DWARV 3.0 HLS compiler. Dissimilar to previous HLS compilers, DWARV 3.0 is based on the CoSy compiler framework. As a result, this allowed us to build a highly modular and extensible compiler in which standard or custom optimizations can be easily integrated. The compiler is designed to accept a large subset of C code as input and to generate synthesizable VHDL code for unrestricted application domains.
To enable DWARV 3.0 third-party toolchain integration, we propose several IP-XACT (i.e., an XML-based standard used for tool interoperability) extensions such that hardware-dependent software can be generated and integrated automatically. Furthermore, we propose two new algorithms to optimize performance for different input area constraints, respectively, to leverage the benefits of both jump and predication schemes from conventional processors adapted for hardware execution. Finally, we performed an evaluation against state-of-the-art HLS tools. Results show that, application execution time wise, DWARV 3.0 performs, on average, the best among the academic compilers.
high-level synthesis; hardware; reconfigurable; architecture; compiler; survey; dwarv; HLS; optimization
CPI Koninklijke Wohrmann
Computer Engineering

uuid:d063dfb96ec64c43b315fb98a576498a
http://resolver.tudelft.nl/uuid:d063dfb96ec64c43b315fb98a576498a
Model-based Feedforward Control for Inkjet Printheads
Khalate, A.A.; Babuska, R. (promotor); Bombois, X. (promotor)
In recent years, inkjet technology has emerged as a promising manufacturing tool. This technology has gained its popularity mainly due to the facts that it can handle diverse materials and that it is a non-contact and additive process. Moreover, inkjet technology offers low operational costs, easy scalability, digital control and low material waste. Thus, apart from conventional document printing, inkjet technology has been successfully applied as a micro-manufacturing tool in the areas of electronics, mechanical engineering, and life sciences. In this thesis, we investigate a piezo-based drop-on-demand (DoD) printhead, which is commonly used for industrial and commercial applications due to its ability to handle diverse materials. A typical DoD inkjet printhead consists of several ink channels in parallel. Each ink channel is provided with a piezo-actuator which, on the application of an actuation voltage pulse, generates pressure oscillations inside the ink channel. These pressure oscillations push the ink drop out of the nozzle. The print quality delivered by an inkjet printhead depends on the properties of the jetted drop, i.e., the drop velocity, the drop volume and the jetting direction. To meet the challenging performance requirements posed by new applications, these drop properties have to be tightly controlled. The performance of the inkjet printhead is limited by two factors. The first one is the residual pressure oscillations. The actuation pulses are designed to provide an ink drop of a specified volume and velocity under the assumption that the ink channel is in a steady state. Once the ink drop is jetted, the pressure oscillations inside the ink channel take several microseconds to decay. If the next ink drop is jetted before these residual pressure oscillations have decayed, the resulting drop properties will be different from those of the previous drop.
The second limiting factor is the crosstalk. The drop properties through an ink channel are affected when the neighboring channels are actuated simultaneously. Generally, drop consistency is improved by manual tuning of the piezo actuation pulse based on some physical insight or on exhaustive experimental studies on the printhead. However, these ad hoc procedures have proved to be insufficient in dealing with the above limitations. In this thesis, a model-based control approach is proposed to improve the performance of a DoD inkjet printhead. It offers a systematic and efficient means to improve the attainable performance of a DoD inkjet printhead by reducing the effect of the residual oscillations and the crosstalk. Furthermore, the models that have been developed for this purpose can also give new insights into the operation of the printhead. In order to achieve this goal, it is required to have a fairly accurate and simple model of an inkjet printhead. It is not easy to obtain a good physical model for an inkjet printhead due to insufficient knowledge of the complex interactions in the printhead. Therefore, in this thesis, we have used system identification, i.e., we use experimental measurements in order to develop a model. For this purpose, it is required that the piezo-actuator is also used as a sensor. Note that the crucial aspect in the model development is to obtain a model of the inkjet system close to its operating conditions. Therefore, we have collected measurements of the piezo sensor signal during the jetting of a series of drops at a given DoD frequency. For the printhead under investigation, we found that the dynamics of the ink channel are dependent on the DoD frequency. This phenomenon is caused by nonlinearities in the droplet formation. Consequently, we have modeled the ink channel dynamics for every DoD frequency.
In this thesis, it is shown that the set of local inkjet models obtained at different DoD frequencies can be encompassed by a polytopic uncertainty on the parameters of a nominal model. Using the same identification procedure, the crosstalk can also be modeled. In order to improve the printhead performance, the actuation pulse was redesigned. The new drive pulse is designed to provide good performance for all models in the area of uncertainty by means of robust feedforward control. The pulse also respects the pulse shape constraints posed by the driving electronics (ASICs). Besides the robust actuation pulse, our approach also introduces an optimal delay between the actuation of neighboring channels to reduce the crosstalk. The current driving electronics limits the possibilities of reshaping the actuation pulse. Since it is expected that this limitation will be relaxed in the future, we have also developed a procedure to design a robust pulse without pulse shape constraints. The performance improvement achieved with this unconstrained pulse has proved to be quite limited. The proposed method is also useful for inkjet practitioners who do not have any insight into the inkjet dynamics. The efficacy of our approach is demonstrated by our experimental results. The proposed method was verified in practice by jetting a series of ink drops at various DoD frequencies and also by jetting a bitmap image.
For the printhead under consideration, the drop consistency is improved by almost a factor of four with the proposed approach when compared to conventional methods.
inkjet printhead; identification; feedforward control; robust control; optimization

uuid:8d1abf3374d04042bae96e4468b7bb81
http://resolver.tudelft.nl/uuid:8d1abf3374d04042bae96e4468b7bb81
Averaging Level Control to Reduce Off-Spec Material in a Continuous Pharmaceutical Pilot Plant
Lakerveld, R.; Benyahia, B.; Heider, P.L.; Zhang, H.; Braatz, R.D.; Barton, P.I.
The judicious use of buffering capacity is important in the development of future continuous pharmaceutical manufacturing processes. The potential benefits are investigated of using optimal-averaging level control for tanks that have buffering capacity, for a section of a continuous pharmaceutical pilot plant involving two crystallizers, a combined filtration and washing stage and a buffer tank. A closed-loop dynamic model is utilized to represent the experimental operation, with the relevant model parameters and initial conditions estimated from experimental data that contained a significant disturbance and a change in setpoint of a concentration control loop. The performance of conventional proportional-integral (PI) level controllers is compared with optimal-averaging level controllers. The aim is to reduce the production of off-spec material in a tubular reactor by minimizing the variations in the outlet flow rate of its upstream buffer tank. The results show a distinct difference in behavior, with the optimal-averaging level controllers strongly outperforming the PI controllers.
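The contrast this abstract draws between tight PI level control and averaging level control can be sketched with a toy buffer tank. Everything here is an illustrative assumption, not the paper's plant model: a step inflow disturbance, invented gains, and a low-gain proportional law standing in for the optimal-averaging controller. The point the sketch makes is the same as the abstract's: the averaging policy absorbs the disturbance in the tank level and keeps the outflow smooth, while the PI loop passes the disturbance straight into the outflow.

```python
# Toy buffer tank: level integrates (inflow - outflow); compare the outflow
# variance produced by a tight PI level loop vs. an "averaging" low-gain law.
def simulate(controller, steps=400, dt=0.1):
    level, integ, outs = 5.0, 0.0, []
    for k in range(steps):
        inflow = 1.5 if 100 <= k < 250 else 1.0      # step inflow disturbance
        out, integ = controller(level, integ, dt)
        outs.append(out)
        level += (inflow - out) * dt                 # tank mass balance
    mean = sum(outs) / len(outs)
    return sum((u - mean) ** 2 for u in outs) / len(outs)   # outflow variance

def pi_control(level, integ, dt, sp=5.0, kp=2.0, ki=0.5):
    e = level - sp
    integ += e * dt
    return 1.0 + kp * e + ki * integ, integ          # tracks the level tightly

def averaging_control(level, integ, dt, sp=5.0, kp=0.05):
    # low-gain law: let the level drift inside the buffer, keep outflow smooth
    return 1.0 + kp * (level - sp), integ

var_pi = simulate(pi_control)
var_avg = simulate(averaging_control)                # markedly smaller variance
```

The smoother outflow comes at the price of larger level excursions, which is exactly the trade-off that motivates using buffering capacity deliberately.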
In general, the results stress the importance of dynamic process modeling for the design of future continuous pharmaceutical processes.
control; process modeling; process simulation; parameter estimation; dynamic modeling; optimization; crystallization; continuous pharmaceutical manufacturing
MDPI
Process and Energy

uuid:f30bd41b4b444459ab68d913fffdb8e9
http://resolver.tudelft.nl/uuid:f30bd41b4b444459ab68d913fffdb8e9
Estimation of primaries by sparse inversion including the ghost
Verschuur, D.J.
Today, the problem of surface-related multiples, especially in shallow water, is not fully solved. Although the surface-related multiple elimination (SRME) method has proved to be successful on a large number of data cases, the involved adaptive subtraction acts as a weak link in this methodology, as primaries can be distorted due to their interference with multiples. Therefore, SRME has recently been redefined as a large-scale inversion process, called estimation of primaries by sparse inversion (EPSI). In this process the multidimensional primary impulse responses are considered as the unknowns in a large-scale inversion process. By parameterizing these impulse responses as spikes in the space-time domain, and using a sparsity constraint in the update step, the algorithm looks for those primaries that, together with their associated multiples, explain the total input data. As the objective function in this minimization process truly goes to zero, the tendency to distort primaries is greatly reduced. An additional advantage is that imperfections in the data, such as the missing near offsets, can be included in the forward model and resolved simultaneously.
In this paper it is demonstrated that the ghost effect can also be included in the EPSI formulation, after which a ghost-free primary estimate can be obtained, even in the case where the ghost notch is within the desired spectrum.
acquisition; inversion; multiples; optimization; wave equation
Society of Exploration Geophysicists
IST/Imaging Science and Technology

uuid:5ede00e1910149ea9a2f81b99291b110
http://resolver.tudelft.nl/uuid:5ede00e1910149ea9a2f81b99291b110
Risk approach to land reclamation: Feasibility of a polder terminal
Lendering, K.T.; Jonkman, S.N.; Peters, D.J.
New ports are mostly constructed on low-lying coastal areas or shallow coastal waters. The quay wall and terminal yard are raised to a level well above mean sea level to assure flood safety. The resulting conventional terminal requires large volumes of fill material, often dredged from the sea, which is costly. The terminal yard of a polder terminal lies below the outside water level and is surrounded by a quay wall flood defense structure. This saves large amounts of reclamation cost but introduces a higher damage potential during flooding and thus an increased flood risk. A risk-based framework is made to determine the optimal quay wall and polder level, which is an optimization (cost-benefit analysis) under two variables. Overtopping failure proves to be the dominant failure mechanism for flooding. The reclamation savings prove to be larger than the increased flood risk, demonstrating that the polder terminal could be an attractive alternative to the conventional terminal.
container terminals; flood risks; optimization; polder terminals; probabilistic design
CRC Press/Balkema - Taylor & Francis Group

uuid:6bf9ad22c4a54f5f8006fce525935f04
http://resolver.tudelft.nl/uuid:6bf9ad22c4a54f5f8006fce525935f04
Cloud-Based Design Analysis and Optimization Framework
Mueller, V.; Strobbe, T.
Integration of analysis into early design phases in support of improved building performance has become increasingly important.
It is considered a required response to the demands on contemporary building design to meet environmental concerns. The goal is to assist designers in their decision making throughout the design of a building, with a growing focus on the earlier phases of design, during which design changes consume less effort than similar changes would in later design phases or during construction and occupation. Multidisciplinary optimization has the potential of providing design teams with information about the potential trade-offs between various goals, some of which may be in conflict with each other. A commonly used class of optimization algorithms is the class of genetic algorithms, which mimic the evolutionary process. For effective parallelization of the cascading processes occurring in the application of genetic algorithms in multidisciplinary optimization, we propose a cloud implementation and describe its architecture, designed to handle the cascading tasks as efficiently as possible.
cloud computing; design analysis; optimization; generative design; building performance

uuid:7d81abadfcbe4094871a54755ee0f03e
http://resolver.tudelft.nl/uuid:7d81abadfcbe4094871a54755ee0f03e
Packing Optimization for Digital Fabrication
Dritsas, S.; Kalvo, R.; Sevtsuk, A.
We present a design-computation method of design-to-production automation and optimization in digital fabrication; an algorithmic process minimizing material use, reducing fabrication time and improving production costs of complex architectural form. Our system compacts structural elements of variable dimensions within fixed-size sheets of stock material, revisiting a classical challenge known as the two-dimensional bin-packing problem.
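A minimal shelf-based first-fit-decreasing heuristic for the two-dimensional bin-packing problem mentioned in the packing record above might look as follows. The sheet size, the shelf discipline and the part lists are illustrative assumptions, not the authors' algorithm: parts are sorted by decreasing height and placed left to right on horizontal shelves, opening a new shelf or a new sheet only when nothing fits.

```python
# Shelf-based first-fit-decreasing for 2D bin packing into fixed-size sheets.
SHEET_W, SHEET_H = 100.0, 100.0

def pack(parts):
    """parts: list of (width, height) rectangles; returns number of sheets used."""
    sheets = []                                   # each sheet: list of shelves [y, shelf_height, x_used]
    for w, h in sorted(parts, key=lambda p: p[1], reverse=True):
        placed = False
        for shelves in sheets:
            for shelf in shelves:                 # first fit on an existing shelf
                if shelf[2] + w <= SHEET_W and h <= shelf[1]:
                    shelf[2] += w
                    placed = True
                    break
            if placed:
                break
            y_used = sum(s[1] for s in shelves)   # try opening a new shelf on this sheet
            if y_used + h <= SHEET_H:
                shelves.append([y_used, h, w])
                placed = True
                break
        if not placed:                            # open a new sheet
            sheets.append([[0.0, h, w]])
    return len(sheets)
```

For example, eight 50x50 parts pack onto two 100x100 sheets, four per sheet. Sorting by height first keeps shelves dense, which is the usual first-fit-decreasing refinement for this problem class.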
We demonstrate improvements in performance using our heuristic metric, an approach with potential for a wider range of architectural and engineering design-build digital fabrication applications, and discuss the challenges of constructing free-form designs efficiently using operational research methodologies.
design computation; digital fabrication; automation; optimization

uuid:76b9b6db926c479e9031ed4abf2324df
http://resolver.tudelft.nl/uuid:76b9b6db926c479e9031ed4abf2324df
A Computational Method for Integrating Parametric Origami Design and Acoustic Engineering
Takenaka, T.; Okabe, A.
This paper proposes a computational form-finding method for integrating parametric origami design and acoustic engineering to find the best geometric form of a concert hall. The paper describes an application of this method to a concert hall design project in Japan. The method consists of three interactive subprograms: a parametric origami program, an acoustic simulation program, and an optimization program. The advantages of the proposed method are as follows. First, it is easy to visualize engineering results obtained from the acoustic simulation program. Second, it can deal with acoustic parameters as one of the primary design materials, as well as origami parameters and design intentions. Third, it provides a final optimized geometric form satisfying both architectural design and acoustic conditions.
The method is valuable for generating new possibilities of architectural form by shifting from a traditional form-making process to a form-finding process.
interactive design method; parametric origami; acoustic simulation; optimization; quadrat count method

uuid:241873a0ad1443f8a135e2c133622c2f
http://resolver.tudelft.nl/uuid:241873a0ad1443f8a135e2c133622c2f
Biological Computation for Digital Design and Fabrication: A biologically-informed finite element approach to structural performance and material optimization of robotically deposited fibre structures
Oxman, N.; Laucks, J.; Kayser, M.; Uribe, C.D.G.; Duro-Royo, J.
The formation of non-woven fibre structures generated by the Bombyx mori silkworm is explored as a computational approach for shape and material optimization. Biological case studies are presented, and a design approach for the use of silkworms as entities that can compute fibrous material organization is given in the context of an architectural design installation. We demonstrate that in the absence of vertical axes the silkworm can spin flat silk patches of variable shape and density. We present experiments suggesting sufficient correlation between topographical surface features, spinning geometry and fibre density. The research represents a scalable approach for optimization-driven fibre-based structural design and suggests a biology-driven strategy for material computation.
biologically computed digital fabrication; robotic fabrication; finite element analysis; optimization; CNC weaving

uuid:38379080da964acda86df3b8f492dd1b
http://resolver.tudelft.nl/uuid:38379080da964acda86df3b8f492dd1b
Algorithmic Engineering in Public Space
Hulin, J.; Pavlicek, J.
The paper reflects on the relationship between an algorithmic and a standard (intuitive) approach to the design of public space. A realized project of a plaza renovation in the Czech town of Vsetin is described as a case study.
The paper offers an overview of the benefits and drawbacks of the algorithmic approach in the described case study and outlines more general conclusions.
algorithm; public space; circle packing; optimization; pavement

uuid:25459ba0fe3a444c847a34ad5c41ab9f
http://resolver.tudelft.nl/uuid:25459ba0fe3a444c847a34ad5c41ab9f
Integrating Computational and Building Performance Simulation Techniques for Optimized Facade Designs
Gadelhak, M.
This paper investigates the integration of Building Performance Simulation (BPS) and optimization tools to provide high-performance solutions. An office room in Cairo, Egypt was chosen as a base testing case, where a Genetic Algorithm (GA) was used for optimizing the annual daylighting performance of two parametrically modeled daylighting systems. In the first case, a combination of a redirecting system (light shelf) and a shading system (solar screen) was studied; in the second, a free-form gills surface was optimized to provide acceptable daylighting performance. Results highlight the promising future of using computational techniques along with simulation tools, and provide a methodology for integrating optimization and performance simulation techniques at early design stages.
high-performance facade; daylighting simulation; optimization; form finding; genetic algorithm

uuid:3bfab3e0d82644c581daf06c33ee0299
http://resolver.tudelft.nl/uuid:3bfab3e0d82644c581daf06c33ee0299
A Case Study in Teaching Construction of Building Design Spaces
Nicknam, M.; Bernal, M.; Haymaker, J.
Until recently, design teams were constrained by tools and schedule to only be able to generate a few alternatives, and to analyze these from just a few perspectives. The rapid emergence of performance-based design, analysis, and optimization tools gives design teams the ability to construct and analyze far larger design spaces more quickly. This creates new opportunities and challenges in the ways we teach and design.
Students and professionals now need to learn to formulate and explore design spaces in efficient and effective ways. This paper describes a curriculum taught by the authors in the course 8803, Multidisciplinary Analysis and Optimization, at the Schools of Architecture and Building Construction at Georgia Tech in spring 2013. We approach design as a multidisciplinary design-space formulation and search process that seeks maximum value. To explore design spaces, student designers need to execute several iterative processes: formulating the problem, generating alternatives, analyzing them, visualizing the trade space, and supporting decision making. The paper first describes the students' design-space exploration experiences, and concludes with our observations of the current challenges and opportunities.
design space exploration; teaching; multidisciplinary; optimization; analysis

uuid:1d9c4022dbd6445298424649c1fdd432
http://resolver.tudelft.nl/uuid:1d9c4022dbd6445298424649c1fdd432
A Freight Transport Model for Integrated Network, Service, and Policy Design
Zhang, M.; Tavasszy, L.A. (promotor)
The goal of the European Transport Policy is to establish a sustainable transport system that meets society's economic, social and environmental needs (CEC, 2009). This statement indicates the challenges that European transport policy makers are faced with when facilitating an increasing freight transport demand with limited transport infrastructure. The development of an interconnected intermodal transport system has been recognized by the European Commission as an important, strategic task that will contribute to solving the dilemma between the accommodation of an increased freight flow and the need for a sustainable living environment. This thesis focuses on model-based, quantitative analysis of infrastructure network design decisions for large-scale intermodal transport systems.
The involvement of public concerns, as represented by governmental objectives on sustainability, brings additional complexity into infrastructure network design. Governments are often concerned with network design on a regional or national scale. The enlargement of the network scale to an international level further increases the heterogeneity of the network, among other factors in terms of the number of actors involved, the diversity of transport demand and the variety of transport service supply. These new objectives and dimensions pose new challenges to freight transport infrastructure network design. This thesis proposes a new model to support policy making for an intermodal freight transport network. The model is able to simultaneously incorporate large-scale, multi-modal, multi-commodity and multi-actor perspectives. It can be used for integrated policy, infrastructure and service design. Results can be visualized per transport mode and per commodity value group on a geographic information system at the segment, terminal, corridor, regional, national, and network levels. Implementation of the model for a realistic-scale network design is another contribution of this thesis. To this end, we calibrated the model using two approaches: a Genetic Algorithm-based method and a feedback-based method. The model was validated by comparing the modelled link flows with observations, testing the cross-elasticities of the costs to demand and comparing the catchment areas of the terminals with areas observed in practice. The calibration results indicate that the model adequately captures the network usage decisions on an aggregated level. The model was applied to Dutch container transport network design problems.
Databases of Dutch container transport demand, features of the European multimodal freight transport infrastructure network, information about selected inland waterway transport services, and information about transport and transhipment costs, emissions and external costs were embedded in the model. After completing the theoretical and empirical specification, the model was applied to policy decisions on Dutch container transport. The thesis extensively discusses the integrated infrastructure, service, and policy design that may contribute to managing the costs of the freight flows while ensuring a sustainable living environment. The main findings from the application are as follows.
- A higher CO2 price can result in lower total transport costs, despite extra handling costs in intermodal transhipments. The costs saved by bundling freight and using intermodal transport can compensate for the additional handling costs. As these cannot compensate for the internalized CO2 emission costs, the total operational costs borne by transport operators will increase.
- Network efficiency can be increased by closing terminals that are not able to attract sufficient volumes of demand. However, this is not likely to happen in practice, because the private terminal operators and the local governments have local interests to protect in those small terminals, which may conflict with the objective of minimizing total network costs.
- The hub network services assumed and tested in this study cannot compete with road transport or shuttle barge transport services in the base scenario, due to the extra transhipment costs, low load factor, and low demand for IWW container transport. In a future scenario, these services are only feasible under very high traffic growth.
- There is not one single optimal future infrastructure network. Instead, a good infrastructure network design mainly depends on the future demand, transport prices, and the development of new transport technology.
Based on the conclusions drawn in this thesis, implementing the combination of CO2 pricing and terminal network configuration is more effective than solely implementing CO2 pricing with regard to total network CO2 emissions. A range of efficient networks, forming a frontier of minimal total network costs and total network CO2 emissions, is presented in the thesis, instead of one single optimal solution. The frontier provides more options in terminal network optimization in terms of the target network performance. Which network is optimal will depend on the relative value placed on CO2 emissions. The thesis ends with a vision on future freight transport network design models. A potential research direction is to incorporate the dimension of time into the model. This extension will enable the model to capture dynamic demand; to be applicable for scheduling synchronized intermodal transport services; to provide more realistic estimations of transport emissions; and to analyse network reliability, including network robustness and service robustness. Reference: CEC (2009) 'Communication from the Commission: A sustainable future for transport: Towards an integrated, technology-led and user-friendly system', Commission of the European Communities, Brussels.
freight; transport; network design; optimization; GIS; service network; transport policy
TRAIL Research School

uuid:0feb1f5032ae4e5487ea3b551497389e
http://resolver.tudelft.nl/uuid:0feb1f5032ae4e5487ea3b551497389e
Risk-based design of land reclamation and the feasibility of the polder terminal
Lendering, K.; Jonkman, S.N.; Peters, D.J.
New ports are mostly constructed on low-lying coastal areas or in shallow coastal waters. The quay wall and terminal yard are raised to a level well above mean sea level to assure flood safety. The resulting conventional terminal requires large volumes of good-quality fill material, often dredged from the sea, which is costly.
The alternative concept of a polder terminal has a terminal yard which lies below the outside water level and is surrounded by a quay wall flood defence structure. This saves large amounts of reclamation investment but introduces a higher damage potential in case of flooding and a corresponding flood risk. Important conditions for the feasibility of a polder terminal are a subsoil of low perviousness and high reclamation costs. Further, a polder terminal requires a water storage and drainage system, at additional cost. A risk-based analysis of the optimal quay wall height and polder level is performed, which is an optimization (cost-benefit analysis) under two variables. The overtopping failure mechanism proves to be the dominant failure mechanism for flooding. During overtopping, the water depth in the polder terminal is larger than on the conventional terminal, resulting in a higher damage potential and corresponding flood risk for the polder terminal. However, the reclamation savings prove to be larger than the increased flood risk: the polder terminal could save 10 to 30% of the total cost (investment and risk), demonstrating it to be an economically attractive alternative to a conventional terminal.
Institute for Research and Community Service

uuid:56a648000dde42fda2f105ed7c357b0b
http://resolver.tudelft.nl/uuid:56a648000dde42fda2f105ed7c357b0b
An Optimization Model for Simultaneous Periodic Timetable Generation and Stability Analysis
Sparing, D.; Goverde, R.M.P.; Hansen, I.A.
We present an optimization model which is able to generate feasible periodic timetables for networks, given the line structure and the requested line frequencies, taking into account infrastructure constraints and train overtake locations. As the model uses the minimum cycle time as the objective function, the stability of the timetable is also simultaneously expressed. Dimension reduction techniques are presented, taking advantage of the symmetries of periodic timetables.
The model is applied to a case study of a dense corridor with heterogeneous traffic.
timetable design; timetable stability; optimization
International Association of Railway Operations Research (IAROR)

uuid:3e2cb6d73ba24b45af712fa106b5d189
http://resolver.tudelft.nl/uuid:3e2cb6d73ba24b45af712fa106b5d189
Optimal Usage of Multiple Energy Carriers in Residential Systems: Unit Scheduling and Power Control
Ramirez-Elizondo, L.M.; Van der Sluis, L. (promotor)
The world's increasing energy demand and growing environmental concerns have motivated scientists to develop new technologies and methods to make better use of the remaining resources of our planet. The main objective of this dissertation is to develop a scheduling and control tool at the district level for small-scale systems with multiple energy carriers, and to apply exergy-related concepts for the optimization of these systems. The tool is based on the energy hub approach and provides insights and techniques that can be used to evaluate new district energy scenarios. The topics that are presented include the multi-carrier unit commitment framework, the multi-carrier exergy hub approach, a hierarchical multi-carrier control architecture, a comparison of multi-carrier power applications and the implementation of a multi-carrier energy management system in a real infrastructure.
optimization; multiple energy carriers; renewables; sustainable energy

uuid:fcc290f8cf6044a4be68189f29a2fb82
http://resolver.tudelft.nl/uuid:fcc290f8cf6044a4be68189f29a2fb82
Estimates of extremes in the best of all possible worlds
Van Nooyen, R.R.P.; Kolechkina, A.G.
In applied hydrology the question of the probability of exceeding a certain value occurs regularly. Often it arises in a context where extrapolation from a relatively short time series is needed. It is well known that, in its simplest form, extreme value theory applies to independent identically distributed random variables.
It is also well known that more advanced theory allows for some degree of correlation, and that techniques for coping with trends are available. However, the problem of extrapolation remains. To isolate the effect of extrapolation, we generate synthetic time series of lengths 20, 50 and 100 from known distributions to derive empirical distributions for the 1:100 and 1:1000 exceedance levels.
extremes; estimators; optimization; statistical distributions
STAHY
Water Management

uuid:93af17490b97416aba27907ae4921a7f
http://resolver.tudelft.nl/uuid:93af17490b97416aba27907ae4921a7f
Using particle packing technology for sustainable concrete mixture design
Fennis, S.A.A.M.; Walraven, J.C.
The annual production of Portland cement, estimated at 3.4 billion tons in 2011, is responsible for about 7% of the total worldwide CO2 emission. To reduce this environmental impact it is important to use innovative technologies for the design of concrete structures and mixtures. In this paper, it is shown how particle packing technology can be used to reduce the amount of cement in concrete through concrete mixture optimization, resulting in more sustainable concrete. First, three different methods to determine the particle distribution of a mixture are presented: optimization curves, particle packing models and discrete element modelling. The advantage of using analytical particle packing models is presented based on the relations between packing density, water demand and strength. Experiments on ecological concrete demonstrate how effectively particle packing technology can be used to reduce the cement content in concrete. Three concrete mixtures with low cement content were developed, and the compressive strength, tensile strength, modulus of elasticity, shrinkage, creep and electrical resistance were determined.
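The synthetic-series experiment described in the extremes record above ("Estimates of extremes in the best of all possible worlds") can be sketched as follows. The distribution (Gumbel), the moment estimator and all numbers are illustrative assumptions, not the authors' setup; the sketch only demonstrates the effect the record isolates, namely that return-level estimates from short series scatter more widely than those from long series.

```python
# Draw synthetic series from a known Gumbel distribution, re-estimate the
# 1:100 return level by the method of moments, and compare the spread of the
# estimates for short vs. long series.
import math
import random
import statistics

random.seed(1)
MU, BETA = 10.0, 2.0                          # "true" Gumbel location and scale

def gumbel():
    # inverse-CDF sampling: x = mu - beta * ln(-ln(u))
    return MU - BETA * math.log(-math.log(random.random()))

def estimate_x100(n):
    sample = [gumbel() for _ in range(n)]
    beta = statistics.stdev(sample) * math.sqrt(6) / math.pi   # moment estimator
    mu = statistics.mean(sample) - 0.5772 * beta
    return mu - beta * math.log(-math.log(1 - 1 / 100))        # 1:100 level

def spread(n, reps=500):
    return statistics.stdev([estimate_x100(n) for _ in range(reps)])

spread_20, spread_100 = spread(20), spread(100)   # short series scatter more
```

Because the parent distribution is known exactly, any scatter in the estimates is attributable to extrapolation from a finite record, which is the effect the abstract sets out to isolate.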
By using particle packing technology in concrete mixture optimization, it is possible to design concrete in which the cement content is reduced by more than 50% and the CO2 emission of concrete is reduced by 25%. aggregate; cement spacing; concrete; flowability; particle packing; optimization Heron Structural Engineering uuid:3dacc24dcf414c138e1e10f11a1b6f23 http://resolver.tudelft.nl/uuid:3dacc24dcf414c138e1e10f11a1b6f23 Sequential robust optimization of a V-bending process using numerical simulations Wiebenga, J.H.; Van den Boorgaard, A.H.; Klaseboer, G. The coupling of finite element simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a generally applicable strategy for modeling and efficiently solving robust optimization problems based on time-consuming simulations. Noise variables and their effect on the responses are taken into account explicitly. The robust optimization strategy consists of four main stages: modeling, sensitivity analysis, robust optimization and sequential robust optimization. Use is made of a metamodel-based optimization approach to couple the computationally expensive finite element simulations with the robust optimization procedure. The initial metamodel approximation will only serve to find a first estimate of the robust optimum. Sequential optimization steps are subsequently applied to efficiently increase the accuracy of the response prediction at regions of interest containing the optimal robust design. The applicability of the proposed robust optimization strategy is demonstrated by the sequential robust optimization of an analytical test function and an industrial V-bending process.
For the industrial application, several production trial runs have been performed to investigate and validate the robustness of the production process. For both applications, it is shown that the robust optimization strategy accounts for the effect of different sources of uncertainty on the process responses in a very efficient manner. Moreover, application of the methodology to the industrial V-bending process results in valuable process insights and an improved robust process design. metal forming processes; finite element method; optimization; uncertainty; robustness; sequential optimization Springer-Verlag Materials Innovation Institute uuid:aa419ba53d314d73adf3c79870deccc7 http://resolver.tudelft.nl/uuid:aa419ba53d314d73adf3c79870deccc7 Optimal Adaptive Policymaking under Deep Uncertainty? Yes we can! Hamarat, C.; Kwakkel, J.H.; Pruyt, E. Uncertainty manifests itself in almost every aspect of decision making. Adaptive and flexible policy design becomes crucial under uncertainty. An adaptive policy is designed to be flexible and can be adapted over time to changing circumstances and unforeseeable surprises. A crucial part of an adaptive policy is the monitoring system and associated pre-specified actions to be taken in response to how the future unfolds. However, the adaptive policymaking literature remains silent on how to design this monitoring system and how to specify appropriate values that will trigger the pre-specified responses. These trigger values have to be chosen such that the resulting adaptive plan is robust and flexible to surprises in the future. Actions should be triggered neither too early nor too late. One possible family of techniques for specifying triggers is optimization. Trigger values would then be the values that maximize the extent of goal achievement across a large ensemble of scenarios. This ensemble of scenarios is generated using Exploratory Modeling and Analysis.
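The idea of searching for a trigger value that maximizes average goal achievement over a scenario ensemble can be caricatured with a toy genetic algorithm. The quadratic payoff, the scenario values, and all GA settings below are invented for illustration and are not the policy model from the paper:

```python
import random

def evolve_trigger(scenarios, generations=60, pop_size=30, seed=1):
    """Toy genetic search for a single trigger value that maximizes mean
    payoff across an ensemble of scenarios; here the payoff simply penalizes
    acting too early or too late relative to each scenario's ideal point."""
    rng = random.Random(seed)

    def fitness(trigger):
        return -sum((trigger - ideal) ** 2 for ideal in scenarios) / len(scenarios)

    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            children.append(0.5 * (a + b) + rng.gauss(0.0, 0.2))  # crossover + mutation
        pop = parents + children
    return max(pop, key=fitness)
```

With this made-up payoff the best trigger is the mean of the scenario ensemble, so `evolve_trigger([2.0, 4.0, 6.0])` converges near 4.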
In this paper, we show how optimization can be useful for the specification of trigger values. A Genetic Algorithm is used because of its flexibility and efficiency in complex and irregular solution spaces. The proposed approach is illustrated for the transition of the energy system towards more sustainable functioning, which requires effective dynamic adaptive policy design. The main aim of this paper is to show the contribution of optimization to adaptive policy design. adaptive policymaking; exploratory modeling and analysis; optimization Multi Actor Systems uuid:a53f5bbd264041cb982db05a6fff9166 http://resolver.tudelft.nl/uuid:a53f5bbd264041cb982db05a6fff9166 Manifold mapping optimization with or without true gradients Delinchant, B.; Lahaye, D.; Wurtz, F.; Coulomb, J.L. This paper deals with Space Mapping optimization algorithms in general and with the Manifold Mapping technique in particular. The idea of such algorithms is to optimize a model with a minimum number of objective function evaluations by using a less accurate but faster model. In this optimization procedure, fine and coarse models interact at each iteration in order to adjust themselves and converge to the real optimum. The Manifold Mapping technique mathematically guarantees this convergence but requires gradients of both the fine and coarse models. Approximate gradients can be used in some cases but are subject to divergence. True gradients can be obtained for many numerical models using adjoint techniques, symbolic or automatic differentiation.
In this context, we have tested several Manifold Mapping variants and compared their convergence in the case of real magnetic device optimization. space mapping; manifold mapping; optimization; surrogate model; gradients; symbolic derivation; automatic differentiation Delft University of Technology, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft Institute of Applied Mathematics uuid:9a018e13f29e459788706f8ab2fa9787 http://resolver.tudelft.nl/uuid:9a018e13f29e459788706f8ab2fa9787 Multi-Objective Optimization for Urban Drainage Rehabilitation Barreto Cordero, W.J.; Price, R.K. (promotor); Solomatine, D.P. (promotor) Flooding in urbanized areas has become a very important issue around the world. The level of service (or performance) of urban drainage systems (UDS) degrades over time for a number of reasons. In order to maintain an acceptable performance of UDS, early rehabilitation plans must be developed and implemented. In developing countries the situation is serious: little is invested, and the funds available for rehabilitation shrink each year. The allocation of such funds must be optimal in providing value for money. However, this task is not easy to achieve due to the multi-criteria nature of the rehabilitation process, which must take into account technical, environmental and social interests. Most of the time these are conflicting, which makes it a highly demanding task. The present book introduces a framework for multi-criteria decision making for the rehabilitation of urban drainage systems, and focuses on several aspects such as improving the performance of the multi-criteria optimization through the inclusion of new features in the algorithms and the proper selection of performance criteria.
The use of Genetic Algorithms, parallelization and application in countries like Brazil, Colombia and Venezuela are treated in this book. multi-objective; urban drainage; optimization; parallel computing; genetic algorithms CRC Press/Balkema uuid:b4aee571048942ffab55d74e980f724a http://resolver.tudelft.nl/uuid:b4aee571048942ffab55d74e980f724a Shape Parameterization in Aircraft Design: A Novel Method, Based on B-Splines Straathof, M.H.; Van Tooren, M.J.L. (promotor) This thesis introduces a new parameterization technique based on the Class-Shape-Transformation (CST) method. The new technique consists of an extension to the CST method in the form of a refinement function based on B-splines. This Class-Shape-Refinement-Transformation (CSRT) method has the same advantages as the original CST method, while also allowing for local deformations in a shape. A number of test cases were performed using two different design frameworks of low and high fidelity. The low-fidelity framework was based on a commercial panel method code and coupled to various optimization algorithms. The high-fidelity framework used an in-house Euler code and employed adjoint optimization. shape; parameterization; aircraft; design; B-splines; Class-Shape-Refinement-Transformation; adjoint; euler; optimization
20120203 FPP uuid:65db30d9206c4661abd2c645482a8e2d http://resolver.tudelft.nl/uuid:65db30d9206c4661abd2c645482a8e2d Binaural Model-Based Speech Intelligibility Enhancement and Assessment in Hearing Aids Schlesinger, A.; Gisolf, D. (promotor); Boone, M.M. (promotor) The enhancement of speech intelligibility in noise is still the main subject in hearing aid research. Based on the advanced results obtained with the hearing glasses, in the present research the speech intelligibility is further improved by the application of binaural postfilters. The functionalities of these filters are related to the principles of auditory scene analysis. A statistical analysis of binaural cues in noise at the output of different hearing aids, the utilization of a Bayesian classifier in the source separation process and an evolutionary optimization against binaural models of speech intelligibility provide a comprehensive understanding of the utilization of binaural postfilters in adverse environments. As listening ease and a fair amount of speech quality are mandatory in speech enhancement, trade-offs between speech intelligibility and quality were studied in terms of the preservation of natural binaural cues and the suppression of musical noise. CASA; STI; SII; binaural; genetic algorithm; optimization; Bayesian classification TU Delft
20111223 Imaging Science and Technology uuid:dfaae28fc2dd4bdc82d6a1c1aa98fa26 http://resolver.tudelft.nl/uuid:dfaae28fc2dd4bdc82d6a1c1aa98fa26 Predicting Storm Surges: Chaos, Computational Intelligence, Data Assimilation, Ensembles Siek, M.B.L.A.; Solomatine, D.P. (promotor) Accurate predictions of storm surge are of importance in many coastal areas. This book focuses on data-driven modelling using methods of nonlinear dynamics and chaos theory for predicting storm surges. A number of new enhancements are presented: phase space dimensionality reduction, incomplete time series, phase error correction, finding true neighbours, optimization of the chaotic model, data assimilation and multi-model ensembles. These were tested on case studies in the North Sea and Caribbean Sea. Chaotic models appear to be accurate and reliable short- and mid-term predictors of storm surges aimed at supporting decision-makers for flood prediction and ship navigation. ocean wave prediction; nonlinear dynamics and chaos theory; neural networks; optimization; dimensionality reduction; phase error correction; incomplete time series; multi-model ensemble prediction; data-driven modelling; computational intelligence; hydroinformatics uuid:e8f7fdb9d20945be9e0313da46e386bc http://resolver.tudelft.nl/uuid:e8f7fdb9d20945be9e0313da46e386bc Event-based progression detection strategies using scanning laser polarimetry images of the human retina Vermeer, K.A.; Lo, B.; Zhou, Q.; Vos, F.M.; Vossepoel, A.M.; Lemij, H.G. Monitoring glaucoma patients and ensuring optimal treatment requires accurate and precise detection of progression. Many glaucomatous progression detection strategies may be formulated for Scanning Laser Polarimetry (SLP) data of the local nerve fiber thickness. In this paper, several strategies, all based on repeated GDx VCC SLP measurements, are tested to identify the optimal one for clinical use. The parameters of the methods were adapted to yield a set specificity of 97.5% on real image series.
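Detection strategies of this kind often require k positive results out of n repeated tests, and the binomial tail shows why such a rule suppresses isolated false positives. A small sketch; the per-test rates below are invented and are not figures from the paper:

```python
from math import comb

def k_of_n_positive(p, k, n):
    """Probability that at least k of n independent tests come out positive,
    given per-test positive probability p (binomial upper tail)."""
    return sum(comb(n, i) * p**i * (1.0 - p) ** (n - i) for i in range(k, n + 1))

# With a hypothetical 10% per-visit false-positive rate, requiring at least
# two positives out of four visits drops the series-level false-positive
# rate to about 5%.
series_level = k_of_n_positive(0.10, 2, 4)
```

The same function applied to the per-test detection probability of a true loss gives the corresponding sensitivity of the rule, which is the trade-off such strategies tune.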
For a fixed sensitivity of 90%, the minimally detectable loss was subsequently determined for both localized and diffuse loss. Due to the large size of the required data set, a previously described simulation method was used for assessing the minimally detectable loss. The optimal strategy was identified and was based on two baseline visits and two follow-up visits, requiring two-out-of-four positive tests. Its associated minimally detectable loss was 5–12 µm, depending on the reproducibility of the measurements. progression detection; simulation; glaucoma; polarimetry; optimization; image processing uuid:be0f5746ff0542a3805af4a72fef4cc6 http://resolver.tudelft.nl/uuid:be0f5746ff0542a3805af4a72fef4cc6 Applying the shuffled frog-leaping algorithm to improve scheduling of construction projects with activity splitting allowed Tavakolan, M.T.; Ashuri, B.; Chiara, N. In situations where contractors compete to finish a given project with the least duration and cost, the ability to improve project quality properties is essential for project managers. Evolutionary Algorithms (EAs) have been applied as suitable algorithms to develop multi-objective Time-Cost trade-off Optimization (TCO) and Time-Cost-Resource Optimization (TCRO) in the past few decades; however, by improving EAs, the Shuffled Frog Leaping Algorithm (SFLA) has been introduced as an algorithm capable of achieving a better solution with faster convergence. Furthermore, allowing activities to be split in execution brings the models closer to real projects. One example has been used to demonstrate the impact of SFLA and splitting on the results of the model and to compare with previous algorithms.
Current research has elucidated that SFLA improves final results and that splitting allows the model to find suitable solutions. optimization; multi-objective SFLA; splitting; leveling; construction management uuid:8d7290d3a9034cfe8c120387b94a192e http://resolver.tudelft.nl/uuid:8d7290d3a9034cfe8c120387b94a192e Information Theory for Risk-based Water System Operation Weijs, S.V.; Van de Giesen, N.C. (promotor) Operational management of water resources needs predictions of the future behavior of water systems, to anticipate shortage or excess of water in a timely manner. Because the natural systems that are part of the hydrological cycle are complex, the predictions are inevitably subject to considerable uncertainty. Still, definitive decisions about e.g. hydropower reservoir releases or polder pump flows have to be made looking ahead into the uncertain future. This demands a risk-based approach, in which, ideally, all possible future events should be considered, along with the probabilities that represent the information and uncertainty available at the time of decision. The thesis deals with water, but the flows studied are mostly those of information. Like the flow of water, information flows obey certain fundamental laws. These are the laws of Information Theory, which also provide guidelines for developing models, handling data, and designing statistical procedures to make predictions and decisions. The information-theoretical perspective used in the thesis leads to the conclusion that predictions should necessarily be probabilistic and should be evaluated using a relative entropy measure, of which an intuitive decomposition into three components is presented.
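The relative entropy measure referred to here is, in its basic form, the Kullback-Leibler divergence. A minimal sketch of that basic measure only; the thesis' three-component decomposition is not reproduced, and the example distributions are invented:

```python
from math import log2

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p||q) in bits: the expected extra
    surprise incurred by using forecast distribution q when p is the truth."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

# A forecast identical to the truth scores 0 bits; a vague 50/50 forecast
# of an event that is actually certain costs exactly one bit.
no_loss = relative_entropy([0.5, 0.5], [0.5, 0.5])
one_bit = relative_entropy([1.0, 0.0], [0.5, 0.5])
```

The score is zero exactly when forecast and truth coincide and grows as the forecast wastes information, which is why it suits the probabilistic evaluation argued for above.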
Other chapters in the thesis deal with the use of model predictive control and stochastic dynamic programming for operational water management, the time-dynamics of information, the generation of weighted ensemble forecasts that balance uncertainty and information, and a perspective on data compression as philosophy of science. Recommendations for practice and further research indicate that entropy has a bright future, not only as an ever-increasing thermodynamic measure, but also as an information-theoretical measure of uncertainty that is useful in any field where predictions and decisions have to be made in a context of complex and largely unobservable systems. information theory; operational water management; risk; probabilistic forecasts; optimization; entropy; control; water; hydrology; water resources management VSSD
20110329 Watermanagement uuid:58f4d3c30a384640aded51d7bca2396e http://resolver.tudelft.nl/uuid:58f4d3c30a384640aded51d7bca2396e Analysis of near-optimal evacuation instructions Huibregtse, O.L.; Bliemer, M.C.J.; Hoogendoorn, S.P. In this paper, approximations of optimal evacuation instructions are analyzed. The instructions, consisting of a departure time, a destination, and a route, are for the evacuation by car of the population of a region threatened by a hazard. An optimization method presented in earlier research is applied to three different hazard scenarios, resulting in an instruction set for each scenario. These instruction sets are different because of the network degeneration caused by the different hazard scenarios. Analysis of the network occupancy during the evacuations as a consequence of the instruction sets shows that the capacity is used for at least 87%, 90%, and 87% in the respective scenarios for the period wherein the effect of the network degeneration is relatively small. Although the results are logical, no clear patterns are perceptible in the instructions leading to this network occupancy. This endorses the viewpoint from the earlier paper, namely, that it is useful to apply an optimization method to create evacuation instructions instead of applying instructions set up by straightforward rules (like evacuating to the nearest destination). Furthermore, it shows the efficiency of this specific optimization method. evacuation; instructions; optimization Transport and Planning uuid:ccc6e7f33b214f05a0cadf8cad6d0ca0 http://resolver.tudelft.nl/uuid:ccc6e7f33b214f05a0cadf8cad6d0ca0 Optimization of sandwich composites fuselages under flight loads Yan, C.; Bergsma, O.; Koussios, S.; Zu, L.; Beukers, A. Sandwich composite fuselages appear to be a promising choice for future aircraft because of their structural efficiency and functional integration advantages.
However, the design of sandwich composites is more complex than that of other structures because of the many variables involved. In this paper, the fuselage is designed as a sandwich composite cylinder, and its structural optimization using the finite element method (FEM) is outlined to obtain the minimum weight. The constraints include structural stability and the composite failure criteria. In order to get a verification baseline for the FEM analysis, the stability of sandwich structures is studied and the optimal design is performed based on the analytical formulae. Then, the predicted buckling loads and the optimization results obtained from a FEM model are compared with those from the analytical formulas, and a good agreement is achieved. A detailed parametric optimal design for the sandwich composite cylinder is conducted. The optimization method used here includes two steps: the minimization of the layer thickness followed by tailoring of the fiber orientation. The factors comprise layer number, fiber orientation, core thickness, frame dimension and spacing. Results show that the two-step optimization is an effective method for sandwich composites, and the foam sandwich cylinder with a core thickness of 5 mm and a frame pitch of 0.5 m exhibits the minimum weight. sandwich; composites; stability; optimization; ANOVA Springer Aerospace Materials and Manufacturing uuid:c2a93de021e4490ba18c09f319c2da17 http://resolver.tudelft.nl/uuid:c2a93de021e4490ba18c09f319c2da17 Rigorous simulations of emitting and non-emitting nano-optical structures Janssen, O.T.A.; Urbach, H.P. (promotor) In the next decade, several applications of nanotechnology will change our lives. LED lighting is about to replace the common light bulb. Its main advantages are its energy efficiency and long lifetime. LEDs could be much more efficient if the part of the emitted light that is currently trapped in the device could be radiated out of it.
Other devices such as photovoltaic solar cells and biosensors can also be made more efficient and cheaper. LEDs, solar cells and biosensors have in common that they consist of small structures on the order of the wavelength of light. With such small structures, light can be manipulated in a special way. In this thesis, we describe a method to calculate the interaction of light with these small structures. It is shown that an efficient LED, which radiates light, can be treated as a solar cell that absorbs as much of the incoming light as possible. On this so-called reciprocity principle, which was discovered by Hendrik Antoon Lorentz, a very efficient computational optimization method can be based. With this method, existing designs of, for example, LEDs can be made more efficient iteratively. This thesis shows optimized designs of LEDs, solar cells and biosensors. FDTD; LED; plasmonics; optimization; reciprocity; biosensors Optics Research Group
20101109 Imaging Science & Technology uuid:f34c2606dbae4182873b8c1a99714297 http://resolver.tudelft.nl/uuid:f34c2606dbae4182873b8c1a99714297 Interval Analysis: Contributions to static and dynamic optimization
De Weerdt, E.; Mulder, J.A. (promotor) The field of global optimization has been an active one for many years. By far the most applied methods are gradient-based and evolutionary algorithms. The most apparent drawback of those types of methods is that one cannot guarantee that the global solution is found within finite time. Moreover, if the global solution is found (by chance), the methods cannot provide guaranteed feedback to the user stating that the provided solution is the global one. Therefore, no natural stopping conditions are available for most of the existing optimization algorithms. There are, however, other tools available, which do provide the guarantee that the global solution is found and that have natural stopping conditions. Interval analysis in combination with interval arithmetic is such a tool. Interval arithmetic was initially developed to cope with rounding errors in digital computers. Using interval arithmetic, one can perform reliable computing such that catastrophic numeric errors can be prevented (the explosion of the Ariane 5 rocket on June 4, 1996 was caused by a simple numeric overflow). It was soon found that interval arithmetic could be used to form guaranteed bounds on any type of function or numeric algorithm for any domain. These bounds provide the crucial information needed to perform global optimization. Interval analysis is the collective name for all methods that use the information obtained from guaranteed bounds to solve global optimization problems. Developed in the 1960s, interval analysis gained popularity during the 1990s when digital computers became increasingly powerful. Nowadays, interval analysis has been widely applied in the field of static optimization, i.e. optimization that does not involve differential algebraic equations, and verified integration. However, interval analysis has not been applied often in the field of dynamic optimization.
The goal of the research is to investigate whether interval analysis, in combination with interval arithmetic, can be used to solve nonlinear, constrained, dynamic optimization problems. Moreover, the possibility of extending existing theory in the field of static optimization is investigated. The focus of the research lies on trajectory optimization (a specific case of dynamic optimization). The most important condition on the designed solvers is that the dynamic constraints, formed by the equations of motion, must be satisfied for all time instances. To reach the research objectives, the theory and application of both interval arithmetic and interval analysis have been thoroughly investigated. The work is divided into two parts. The first part is on static optimization, which includes the discussion of interval arithmetic and describes the basics of interval analysis. The existing theory of inclusion functions, formed via interval arithmetic, has been evaluated and extended. The development of the Polynomial Inclusion Function, a new type of inclusion function, shows that significant improvements are possible in this field. During the review of interval analysis, its main virtues and limitations were demonstrated. The most important advantages are the guarantee that all optimal solutions are found to any degree of accuracy and that the user knows when the solution set has been found. The main limitation is the curse of dimensionality: the computational load grows, for most problems, exponentially with a linear increase in problem dimension. The author believes that this curse is mainly caused by two aspects of the current implementation of interval analysis. The first aspect is the widening of the inclusion function due to dependency effects. The dependency effects can be partially prevented by efficient implementation of function evaluations and through the application of advanced inclusion functions.
However, a generic efficient method for preventing dependency effects is still not available. The other aspect causing the curse of dimensionality is the current inefficient handling of available information. The optimization algorithms within interval analysis are commonly based on branch-and-bound algorithms. Through a process of elimination, one is left with a list of domains in which the optimal solution set must lie. Current methods for eliminating (part of) the domain, such as the Newton step, do not use the gathered/available information efficiently. This is mainly due to the definition of the domain and the storage of the information, i.e. keeping track of infeasible regions. It is the author's opinion that this is the reason that the application of interval analysis is limited to solving lower-dimensional problems. Despite the curse of dimensionality, interval analysis based solvers can solve complicated, nonlinear, constrained problems. This has been shown in multiple chapters in the first part. Complicated problems, such as neural network output optimization and the problem of integer ambiguity resolution in the field of Global Navigation Satellite Systems, are solved rigorously by interval analysis based solvers. The applications show that equality and inequality constraints are efficiently handled using interval analysis. Moreover, they show that interval analysis can be used to solve real-life problems and demonstrate that interval analysis is a strong global optimization tool. The second part of the research is on dynamic optimization, focusing on trajectory optimization. The trajectory optimization problem is infinite-dimensional with begin- and endpoint constraints, dynamic constraints (the equations of motion), and possibly additional equality and inequality constraints. The problem is infinite-dimensional since the states and controls need to be specified for each time instance.
In the field of trajectory optimization one can identify two classes of methods: indirect methods and direct methods. Disregarding the optimization problems for which an analytic solution is present, both classes require a transformation to make the problem solvable. Three transformation methods have been considered: control parameterization, state parameterization, and control and state parameterization. With control parameterization, the control is defined for each time step using a polynomial and the states are computed using explicit integration. For state parameterization, the states are defined and the controls are deduced via the equations of motion (implicit integration). The last method applies parameterization of both the states and controls with respect to time. Trajectories are sought that satisfy the dynamic constraints at given time instances. The nature of the transformation methods implies that the first two methods can be used to find trajectories that satisfy the dynamic constraints at all time instances, while the latter cannot be used for this purpose. Therefore, only the first two methods have been thoroughly investigated. The last method was only briefly reviewed. The main conclusion regarding the control parameterization approach is that it suffers greatly from the required explicit integration. Although verified integration is possible and sharp bounds on the trajectories can be provided, the problem is to prove the existence of a solution within a given domain of the search space. Without the ability to update the estimate of the minimal cost function value early in the optimization process, the computational load becomes very high. Despite the drawback of control parameterization, it has been demonstrated that this approach can be used to find the global solution, although, currently, only very low-dimensional problems can be solved. Higher-dimensional problems can be solved using the state parameterization approach.
By using simplex splines, the begin- and endpoint constraints can be implicitly satisfied, which significantly reduces the problem complexity. The limitation is that the approach is only suitable for fully controllable systems. For systems that are not fully controllable, one needs to apply explicit integration for all dependent states. This will increase the computational load significantly and would eliminate most of the benefits of the state parameterization approach. An interval analysis based solver has been applied to solve the problem of satellite trajectory planning for formation flying. Although still suffering from the curse of dimensionality, the results demonstrate that interval analysis can be used to solve the problem rigorously. Moreover, it has been shown that the performance of the solver is superior to gradient-based solvers when constraints are imposed. The main conclusion of the research is that it is possible to apply interval analysis to dynamic optimization. The current status of the solvers (in this thesis and in the literature) allows one to solve only lower-dimensional problems. Radical changes in the way information is handled and infeasible regions are tracked must be made to make interval analysis applicable to higher-dimensional problems. Despite the limitations of interval analysis, the presented results clearly demonstrate the virtues of interval analysis based solvers in the field of global optimization. Several new exciting research opportunities have been identified, such as nonlinear stability analysis using interval analysis, the combination of interval analysis and evolutionary algorithms, and a new way of forming inclusion functions to boost the efficiency of interval analysis based solvers.
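The core mechanism described here, branch and bound driven by guaranteed interval bounds, can be sketched in a few lines. This toy one-dimensional minimizer and its quadratic test function are illustrative only, not the thesis' solvers; note that the naive interval product for x*x even exhibits the dependency widening discussed above:

```python
class Interval:
    """Minimal interval arithmetic: every operation returns a guaranteed
    enclosure of the true range (a crude inclusion function)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

def global_min(f, lo, hi, tol=1e-6):
    """Branch and bound: bisect boxes and discard any box whose guaranteed
    lower bound exceeds the best verified upper bound found so far."""
    best_ub = f(Interval(lo, lo)).hi   # f at a point is a valid upper bound
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        if f(Interval(a, b)).lo > best_ub:
            continue                   # box cannot contain the global minimum
        mid = 0.5 * (a + b)
        best_ub = min(best_ub, f(Interval(mid, mid)).hi)
        if b - a > tol:
            boxes.append((a, mid))
            boxes.append((mid, b))
    return best_ub

# f(x) = x*x - 2x has its global minimum -1 at x = 1:
f = lambda x: x * x - Interval(2.0, 2.0) * x
```

On the box [-4, 4] the returned bound converges to -1. Writing x*x as a generic interval product overestimates the range of x squared on boxes straddling zero, which is exactly the dependency effect that slows the pruning and motivates better inclusion functions.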
Overall, the potential of interval analysis is very large and the author believes that interval analysis will become one of the most important tools in the field of global optimization in the near future. interval analysis; optimization; dynamic
20100914 Control and Simulation Division uuid:fdc2dbdab419450fa30564825a43a0c8 http://resolver.tudelft.nl/uuid:fdc2dbdab419450fa30564825a43a0c8 Global Optimization using Interval Analysis: Interval Optimization for Aerospace Applications Van Kampen, E. Optimization is an important element in aerospace-related research. It is encountered, for example, in trajectory optimization problems, such as satellite formation flying, spacecraft re-entry optimization and airport approach and departure optimization; in control optimization, for example in adaptive control algorithms; and in system identification problems, such as online aircraft model identification or human perception modeling. The main goal of this thesis is to investigate how Interval Analysis (IA) can be used as a tool for aerospace-related optimization problems; to examine its theoretical and practical limitations, and to explore the ways in which optimization algorithms can benefit from interval analysis. A subset of goals is to improve the solutions for a number of aerospace-related optimization problems. The scientific contribution of this thesis consists of the design and implementation of interval optimization algorithms for four important aerospace problems. The first contribution concerns finding the trim points for a nonlinear aircraft model. Trim points, defined as the combination of control settings for which all linear and rotational accelerations on the aircraft are zero, are important for flight control system design, since they provide information about the flight envelope and stability properties of the aircraft. Unlike other trim algorithms, the interval-based method can guarantee that all trim points are found. In the second application, an interval optimization algorithm is developed for fitting pilot input/output data from an experiment in the SIMONA Research Simulator to a multimodal human perception model.
Perception models improve the understanding of how humans perceive motion and are an essential tool in the design of flight simulators. Results show that the minimum of the cost function found by the interval method is lower than the one previously found, resulting in an improved human perception model. This second application particularly demonstrates the capabilities of IA optimization as a parameter identification tool. The third contribution is an interval-based algorithm for solving the integer ambiguity problem related to Global Navigation Satellite Systems (GNSS). Phase measurements of the carrier wave of a GNSS signal are used to estimate the length and orientation of baselines between two or more antennas. This estimation procedure contains an optimization problem in which the integer number of carrier wavelengths between antennas has to be determined. The new interval method provides guarantees that correct solutions are found when the measurement noise is encapsulated by an interval number. The final contribution is an interval optimization algorithm that minimizes fuel consumption during rendezvous and docking procedures of satellites in circular orbits. To avoid integration of interval functions, an analytical solution to the system of differential equations that describes the relative motion of the satellites is used to generate trajectories resulting from a set of thruster pulses of varying amplitudes. Introduction of obstacles, in the form of forbidden areas in the path between the two satellites, makes the problem nonlinear, such that gradient-based optimization algorithms can fail to obtain the globally optimal solution. The interval algorithm always converges to the trajectory that avoids all obstacles and results in minimum fuel consumption. It can be concluded that IA is an excellent tool for solving nonlinear optimization problems, providing guarantees on obtaining the global minimum of the cost function.optimization; interval analysis
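The interval branch-and-bound principle underlying these contributions can be illustrated with a minimal sketch. Everything below is hypothetical: the sample objective, its hand-derived interval extension and the bisection loop merely stand in for the thesis's far more elaborate algorithms.

```python
def f(x):
    """Sample objective with two global minima at x = +/- sqrt(3/2)."""
    return x ** 4 - 3 * x ** 2 + 1

def f_interval(lo, hi):
    """Hand-derived interval extension of f on [lo, hi]: with u = x**2,
    f = u**2 - 3*u + 1, and the range of u over the box is easy to bound."""
    u_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    u_hi = max(lo * lo, hi * hi)
    return u_lo ** 2 - 3.0 * u_hi + 1.0, u_hi ** 2 - 3.0 * u_lo + 1.0

def interval_minimize(lo, hi, tol=1e-6):
    """Branch-and-bound: discard boxes whose interval lower bound exceeds
    the best point evaluation seen so far; bisect the rest down to width tol."""
    best_ub = f(lo)                 # any point value bounds the minimum above
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        if f_interval(a, b)[0] > best_ub:
            continue                # box provably contains no global minimum
        mid = 0.5 * (a + b)
        best_ub = min(best_ub, f(mid))
        if b - a > tol:
            boxes += [(a, mid), (mid, b)]
    return best_ub
```

The pruning step is what gives interval methods their guarantee: a box is only discarded when its rigorous lower bound proves it cannot contain the global minimum.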
20100924Control and Simulation)uuid:f272117ce1b54ae696cbaa86fe62a015Dhttp://resolver.tudelft.nl/uuid:f272117ce1b54ae696cbaa86fe62a015JOverview of Methods for Multi-Level and/or Multi-Disciplinary OptimizationDe Wit, A.J.; Van Keulen, A.Multilevel optimization and multidisciplinary optimization are areas of research that are concerned with developing efficient analysis and optimization techniques for complex systems that are made up of coupled elements (components). Within the field of multilevel and multidisciplinary optimization, a large number of techniques have been developed for the efficient analysis and optimization of complex systems. This paper presents a unified overview of mainstream approaches found in the literature. Four general steps are distinguished in both multilevel optimization and multidisciplinary optimization: physical coupling, optimization problem coupling, coordination and solution sequence. Via these four steps, approaches are classified and possibilities for combining aspects of different methods are given. Finally, advantages and disadvantages of approaches applied to engineering problems are discussed and directions for further research are given.Tmultilevel; multidisciplinary; optimization; decomposition; coordination; overview9American Institute of Aeronautics and Astronautics (AIAA))uuid:319dffb83bbc49dea6c568d8972f3888Dhttp://resolver.tudelft.nl/uuid:319dffb83bbc49dea6c568d8972f3888HA generic method to optimize instructions for the control of evacuations?Huibregtse, O.L.; Hoogendoorn, S.P.; Pel, A.J.; Bliemer, M.C.J.A method is described to develop a set of optimal instructions for evacuating, by car, the population of a region threatened by a hazard. By giving these instructions to the evacuees, traffic conditions, and therefore the evacuation efficiency, can be optimized. The instructions, containing a departure time, a destination, and a route, are created using an optimization method based on ant colony optimization. 
The method iteratively searches for an approximation of the optimal evacuation instructions. The advantage of this optimization method over other optimization methods is the simultaneous optimization of the departure time, destination, and route instructions, instead of the optimization of only one or two of these variables, for a dynamic instead of a static evacuation problem. In a case study, the functioning of the method is illustrated. The relatively high fitness, in the case study, of the set of instructions following from the optimization method, compared with the fitness of a set of instructions set up by straightforward rules (like evacuating to the nearest destination), also shows the usefulness of applying an optimization method to create a set of evacuation instructions.Hevacuation; instructions; control; optimization; ant colony optimizationIFAC)uuid:1137ebe33dcb43ca84f789bbbbc2d635Dhttp://resolver.tudelft.nl/uuid:1137ebe33dcb43ca84f789bbbbc2d635eEfficient particle-based estimation of marginal costs in a first-order macroscopic traffic flow model,Zuurbier, F.S.; Hegyi, A.; Hoogendoorn, S.P.Marginal costs in traffic networks are the extra costs imposed on the system as the result of extra traffic. Marginal costs are frequently required, e.g. when considering system optimal traffic assignment or tolling problems. When explicitly considering spillback in a traffic flow model, one can use a numerical derivative or resort to heuristics to calculate the marginal costs. Numerical derivatives are computationally demanding, restricting their use to simple networks. Heuristic approaches in most cases approximate the marginal costs by only considering the extra costs on the links which are traveled by the extra traffic, excluding the possibly external costs incurred on other links due to spillback. This paper proposes a novel way to estimate the true marginal costs of traffic in a dynamic discrete LWR model which correctly deals with congestion onset, spillback and dissolution. 
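The ant-colony search over discrete instruction components (departure time, destination, route) described in the evacuation record above can be sketched roughly as follows. This is a toy, not the authors' formulation: the option lists and the cost function are made-up placeholders for a traffic simulation, and costs are assumed non-negative.

```python
import random

def aco_minimize(options, cost, n_ants=20, n_iter=50, rho=0.1, seed=0):
    """Toy ant colony optimization over independent discrete choices.

    options: one list of candidate values per decision variable
             (e.g. departure slots, destinations, routes).
    cost:    function mapping a full choice vector to a non-negative cost
             (a stand-in for a traffic simulation)."""
    rng = random.Random(seed)
    tau = [[1.0] * len(opts) for opts in options]   # pheromone per choice
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ant in range(n_ants):
            idx = [rng.choices(range(len(opts)), weights=t)[0]
                   for opts, t in zip(options, tau)]
            sol = [opts[i] for opts, i in zip(options, idx)]
            c = cost(sol)
            if c < best_cost:
                best, best_cost = sol, c
            for var, i in enumerate(idx):           # deposit: cheaper -> more
                tau[var][i] += 1.0 / (1.0 + c)
        tau = [[(1.0 - rho) * t for t in row] for row in tau]  # evaporation
    return best, best_cost
```

Pheromone accumulates on choices that appear in low-cost solutions, biasing later ants toward them, while evaporation keeps the search from freezing on early finds.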
The proposed methodology tracks virtual changes in density through the network by means of particles which travel along with the characteristics of traffic. By using density-based cost functions, the virtual changes in density can be directly related to the marginal costs. The computational efficiency of the methodology stems from the fact that only local conditions are considered when propagating the virtual change in density. The paper discusses the methodology and necessary model extensions, provides a numerical validation experiment illustrating the exact detail of the solution by comparison to a numerical derivative, and discusses some generalizations.Woptimization; dynamic traffic assignment; system optimal; LWR; marginal costs; particle)uuid:d8f58668ba49441dbbf0aa8c7114da4aDhttp://resolver.tudelft.nl/uuid:d8f58668ba49441dbbf0aa8c7114da4aVA Unified Approach towards Decomposition and Coordination for Multilevel OptimizationDe Wit, A.J.Van Keulen, A. (promotor)RComplex systems, such as those encountered in aerospace engineering, can typically be considered as a hierarchy of individual coupled elements. This hierarchy is reflected in the analysis techniques that are used to analyze the physical characteristics of the system. Consequently, a hierarchy of coupled models is to be used, accounting for different physical scales, components and/or disciplines. Numerical optimization of complex systems with embedded hierarchy is accomplished via multilevel optimization methods. Multilevel optimization methods utilize the hierarchical nature of complex systems to distribute the optimization process into smaller, less complex coupled optimization problems located at the individual elements of the hierarchy. The present thesis presents a generalized approach towards decomposition and coordination for the numerical optimization of complex systems with embedded hierarchy. 
The developed methods are applied to numerically maximizing the range of a supersonic business jet via multilevel optimization, considering coupling between multiple engineering disciplines.Jmultilevel; multidisciplinary; optimization; decomposition; coordination
20091130)uuid:25c85feb7ef147529810e70f49e88802Dhttp://resolver.tudelft.nl/uuid:25c85feb7ef147529810e70f49e888028On maximum field components in the focal point of a lens(Urbach, H.P.; Pereira, S.F.; Broer, D.J.We determine field distributions in the pupil of a high-NA lens that give, for a given power incident on the lens, the maximum electric field amplitude in focus in a specific direction. We consider in particular the cases of maximum longitudinal and maximum transverse components. The distribution of the maximum longitudinal component in the focal plane is narrower than that of the focused Airy spot and hence can give higher resolution in imaging.>High NA; beam shaping; optimization; longitudinal polarization)uuid:dc5b1158be5442d6a4d3b0a19462f507Dhttp://resolver.tudelft.nl/uuid:dc5b1158be5442d6a4d3b0a19462f507Robustness of networksWang, H.Van Mieghem, P. (promotor) Our society depends more strongly than ever on large networks such as transportation networks, the Internet and power grids. Engineers are confronted with fundamental questions, such as how to evaluate the robustness of networks for a given service and how to design a robust network, because networks always affect the functioning of a service. Robustness is an important issue for many complex networks, on which various dynamic processes or services take place. In this work, we define robustness as follows: a network is more robust if the service on the network performs better, where the performance of the service is assessed when the network is either (a) in a conventional state or (b) under perturbations, e.g. failures, virus spreading, etc. In this thesis, we survey a particular line of network robustness research within our general framework: robustness quantification, optimization and the interplay between service and network. 
Significant progress has been made in understanding the relationship between the structural properties of networks and the performance of the dynamics or services taking place on these networks. We assume that network robustness can be quantified by a topological measure of the network. A brief overview of the topological measures is presented. Each measure may represent the robustness of a network with respect to a certain performance aspect of a service. We focus on the measure known as algebraic connectivity. Evidence collected from the literature shows that the algebraic connectivity characterizes network robustness with respect to synchronization of dynamic processes at nodes, random walks on graphs and the connectivity of a network. Moreover, we illustrate that, for a given diameter, graphs with large algebraic connectivity tend to be dense in the core and sparse at the border. Such structures distribute traffic homogeneously and are thus robust in terms of traffic engineering. How do we design a robust network with respect to the metric of algebraic connectivity? First, the complete graph has the maximal algebraic connectivity, but its high link density makes it impractical to use due to the cost of constructing links. Constraints on other network features are usually set up to incorporate realistic requirements. For example, a constraint on the diameter may guarantee certain end-to-end quality of service levels such as the delay. We propose a class of clique chain structures which optimize the algebraic connectivity and many other robustness features among all graphs with diameter D and size N. The optimal graph within the class can be determined either analytically or numerically. Second, complete replacement of an existing infrastructure is expensive. Thus, we design strategies for robustness optimization using minor topological modifications. These strategies are evaluated in various classes of graphs. 
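The algebraic connectivity discussed in the robustness record above is simply the second-smallest eigenvalue of the graph Laplacian L = D - A. A minimal sketch, using a dense eigensolver and therefore intended for small illustrative graphs only:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the Laplacian L = D - A of an
    undirected graph given by its adjacency matrix."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A      # degree matrix minus adjacency
    return np.sort(np.linalg.eigvalsh(L))[1]
```

For example, adding the missing link to a three-node path turns it into a triangle and raises the algebraic connectivity from 1 to 3, the kind of link-addition effect studied in the "Algebraic Connectivity Optimization via Link Addition" record elsewhere in these results.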
The robustness quantification, or equivalently, the association of the performance of a service with a topological measure, may be implicit. In this case, we explore the interplay between topology and service in determining the overall performance. Many services on communications and transportation networks are based on shortest path routing. The weight of a link, such as delay or bandwidth, is generally a metric optimized via shortest path routing. Thus, link weight tuning, a mechanism to control traffic, is also considered as part of the service. The interplay between service (shortest path routing and link weight tuning) and topology is investigated for the following performance aspects: (a) the structure of the transport overlay network, which is the union of shortest paths between all node pairs, and (b) the traffic distribution in the overlay network. Important new findings are (i) the universal phase transition in overlay structures as we tune the link weight structure over different classes of networks and (ii) the power-law traffic distribution in the overlay networks when link weights vary strongly in various classes of networks. Furthermore, we consider the service that measures a network topology as the union of shortest paths among a set of test boxes (nodes). The measured topology is a subgraph of the overlay network, which is again a subgraph of the actual network. The performance in terms of the sampling bias of measuring a network topology is investigated. Our work contributes substantially to a better understanding of the effect of the service (test-box selection) and the actual network structure on the performance with respect to sampling bias. Our investigations on the interplay between service and network again reveal the association between the performance of a service and certain topological features, and thus contribute to the quantification of network robustness. 
The multidisciplinary nature of this research lies not only in the presence of robustness issues in many complex networks, but also in the fact that advances in other disciplines such as graph theory, combinatorics, linear algebra and statistical physics are widely applied throughout the thesis to study optimization problems and the performance of large networks.3robustness; network topology; service; optimizationTelecommunications)uuid:c58b5999da124a62876f95d7784edf91Dhttp://resolver.tudelft.nl/uuid:c58b5999da124a62876f95d7784edf91jModel-Based Control and Optimization of Large-Scale Physical Systems - Challenges in Reservoir Engineering@Van den Hof, P.M.J.; Jansen, J.D.; Van Essen, G.M.; Bosgra, O.H.fDue to the urgent need to increase the efficiency of oil recovery from subsurface reservoirs, new technology is being developed that allows more detailed sensing and actuation of multiphase flow properties in oil reservoirs. One example is the controlled injection of water through injection wells with the purpose of displacing the oil in an appropriate direction. This technology enables the application of model-based optimization and control techniques to optimize production over the entire production period of a reservoir, which can be around 25 years. Large-scale reservoir flow models are used for optimizing production settings, but suffer from high levels of uncertainty and limited validation options. One of the challenges is the development of reduced-complexity models that deliver accurate long-term predictions, and at the same time are not more complex than can be warranted by the amount of data that is available. 
In this paper an overview will be given of the problems and opportunities for model-based control and optimization in this field, aiming at the development of a closed-loop reservoir management system."petroleum; reservoir; optimization)uuid:cb3de0cfa5064490b988f4d1bf00ae55Dhttp://resolver.tudelft.nl/uuid:cb3de0cfa5064490b988f4d1bf00ae55FModel-based predictive control applied to multi-carrier energy systems;Arnold, M.; Negenborn, R.R.; Andersson, G.; De Schutter, B.The optimal operation of an integrated electricity and natural gas infrastructure is investigated. The couplings between the electricity system and the gas system are modeled by so-called energy hubs, which represent the interface between the loads on the one hand and the transmission infrastructures on the other. To increase reliability and efficiency, storage devices are present in the multi-carrier energy system. In order to optimally incorporate these storage devices in the operation of the infrastructure, their capacity constraints and dynamics have to be taken into account explicitly. Therefore, we propose a model predictive control approach for controlling the system. This controller takes into account the present constraints and dynamics, and in addition adapts to expected changes of loads and/or energy prices. Simulations in which the proposed scheme is applied to a three-hub benchmark system are presented.goptimal power flow; electric power systems; model predictive control; natural gas systems; optimization)uuid:ff8e44db72e249fabd7fbde923758e68Dhttp://resolver.tudelft.nl/uuid:ff8e44db72e249fabd7fbde923758e68qAn efficient method for reducing the sound-speed-induced errors in multibeam echosounder bathymetric measurements%Snellen, M.; Siemes, K.; Simons, D.G.Nowadays, extensive use is made of multibeam echosounders (MBES) for mapping the bathymetry of sea and river floors. 
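The receding-horizon idea in the model predictive control record above can be illustrated with a deliberately tiny sketch: a single storage device, discrete charge/discharge actions, and brute-force enumeration over the horizon instead of the optimization solvers such controllers actually use. The prices, capacity and cost model below are invented for illustration only.

```python
import itertools

def mpc_step(soc, prices, capacity=4.0, actions=(-1.0, 0.0, 1.0)):
    """Return the first action of the cheapest feasible action sequence.

    soc: current state of charge; prices: forecast price per step.
    Buying energy (action > 0) costs price * action; discharging is free
    here, and leftover stored energy is credited at the final price."""
    horizon = len(prices)
    best_cost, best_first = float("inf"), 0.0
    for seq in itertools.product(actions, repeat=horizon):
        s, cost, feasible = soc, 0.0, True
        for price, a in zip(prices, seq):
            s += a
            if not 0.0 <= s <= capacity:
                feasible = False        # storage limits violated
                break
            cost += price * max(a, 0.0)
        if feasible:
            cost -= prices[-1] * s      # credit the remaining charge
            if cost < best_cost:
                best_cost, best_first = cost, seq[0]
    return best_first
```

As in any MPC scheme, only the first action of the optimal sequence is applied; at the next step the problem is re-solved with updated forecasts.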
The MBES is capable of covering large areas in limited time by emitting an acoustic pulse along a wide swathe perpendicular to the sailing direction. The angle and the corresponding two-way travel time of the received signals are determined through beam steering at reception. Water depths along the swathe can be derived from this angle and travel-time combination. In general, two sets of sound speed measurements are taken when conducting MBES measurements. The first set is used for the beam steering and consists of the sound speeds at the MBES transducer. The second set is used for determining the propagation of the sound through the water column, needed for correctly converting the measured travel times to a depth. In general, this set of sound speed measurements consists of the complete sound speed profiles (SSPs). The quality of the sound speed measurements at the transducer position is sometimes degraded, resulting in beam steering angles that differ from those aimed for. Also, the SSPs used for converting the beam travel times to depths sometimes deviate from the true prevailing SSPs due to the generally limited number of SSP measurements taken during a survey. Both above-mentioned effects result in an erroneous bathymetry. Here, we present a method for eliminating these errors, without the need for additional sound speed information.8multibeam echosounder; sound speed profile; optimizationRemote Sensing)uuid:fbc64a39931e4b408803486466f20703Dhttp://resolver.tudelft.nl/uuid:fbc64a39931e4b408803486466f20703sThe potential of inverting geotechnical and geoacoustic sediment parameters from single-beam echo sounder returns%Simons, D.G.; Snellen, M.; Siemes, K.Seafloor characterization is important in many fields including hydrography, marine geology, coastal engineering and habitat mapping. 
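For the multibeam record above: with a single constant sound speed (a flat SSP), the conversion from beam angle and two-way travel time to depth reduces to simple trigonometry. Real processing ray-traces through the measured sound speed profile instead; this hypothetical sketch only shows the geometry.

```python
import math

def beam_to_depth(c, theta_deg, t_two_way):
    """Map a steered beam (angle from nadir, two-way travel time) to
    (depth, across-track distance) under a constant sound speed c."""
    slant = 0.5 * c * t_two_way          # one-way slant range
    theta = math.radians(theta_deg)
    return slant * math.cos(theta), slant * math.sin(theta)
```

The sketch also shows why transducer sound speed errors matter: an error in c shifts both the slant range and, through beam steering, the effective angle, so the depth error grows toward the outer beams.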
The advantage of non-invasive acoustic methods for sediment characterization over conventional bottom grabbing is the nearly continuous versus sparse sensing and the enormous reduction in survey time and costs. Among the various acoustic systems for seafloor characterization, the single-beam echo sounder is of particular interest due to its simplicity and versatility. Seafloor characterization algorithms can be roughly divided into two categories: model-based and empirical, where the latter simply relies on the observation that certain echo features, such as amplitude, duration and skewness of the echo, are correlated with sediment type. Here we apply the model-based approach, where we compare the measured echo signal with theoretically modeled echo envelopes in the time domain. For modeling the received echo sounder signals, use is made of a physical backscatter model that fully accounts for water-sediment interface roughness and sediment volume scattering. We use differential evolution, a fast variant of a genetic algorithm, as the global optimization method to invert the model input parameters: mean grain size, spectral strength of the interface roughness and volume scattering cross section. In the model, grain size determines geoacoustic parameters like sediment sound speed, density and attenuation. The analysis is applied to simulated data.>single-beam echosounder; seafloor classification; optimization)uuid:6c6197bd5757428a9d3de94af148ce90Dhttp://resolver.tudelft.nl/uuid:6c6197bd5757428a9d3de94af148ce90vA systematic analysis of the optical merit function landscape: Towards improved optimization methods in optical designVan Turnhout, M./Urbach, H.P. (promotor); Bociort, F. (promotor)TA major problem in optical system design is that the optical merit function landscape is usually very complicated, especially for complex design problems where many minima are present. Finding good new local minima is then a difficult task. 
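Differential evolution, the global optimizer in the single-beam inversion record above, is easy to sketch in its basic rand/1/bin form. The placeholder sphere objective in the test stands in for the echo-envelope misfit that is actually inverted; parameter values are conventional defaults, not the authors' settings.

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.6, CR=0.9,
                           n_gen=100, seed=1):
    """Minimal rand/1/bin differential evolution over box constraints."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(n_gen):
        for i in range(pop_size):
            # three distinct donors, none equal to the target vector i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)      # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = cost(trial)
            if tc <= costs[i]:              # greedy one-to-one selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]
```

The difference vector F * (b - c) self-scales with the population spread, which is what makes DE effective on the correlated parameters typical of geoacoustic inversion.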
We show, however, that a certain degree of order is present in the optical design space, which is best observed when we consider not only local minima, but saddle points as well. With a special method, which we call Saddle-Point Construction (SPC), saddle points can be constructed in a simple way. Via saddle points, new local minima can be obtained very rapidly. When using a local optimization method, the final design after optimization depends strongly on the starting configuration. We can group the initial configurations that lead to a given local minimum after local optimization into a graphical region, whose shape depends on the optimization method used. However, saddle points are critical points in the merit function landscape that always remain on the boundaries, independent of the optimization method used. When the local optimization process is not chaotic, the geometric decomposition of the space of initial configurations into discrete regions has boundaries given by simple curves. But when the optimization is chaotic, the curves separating the different regions are very complicated objects termed fractals. In such cases, starting configurations that are very close to each other lead to different local minima after optimization. A better understanding of these instabilities can be obtained by using low damping values in a damped least-squares method.Aoptical system design; saddle point; optimization; fractal; chaos)uuid:4f491cc5cdc749b48b80700dae2cf57cDhttp://resolver.tudelft.nl/uuid:4f491cc5cdc749b48b80700dae2cf57ceValidity improvement of evolutionary topology optimization: Procedure with element replaceable method Zhu, J.; Zhang, W.; Bassir, D.H.The aim of this paper is to enhance the validity of existing evolutionary topology optimization procedures. 
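A saddle point, central to the optical-design record above, is a critical point whose Hessian has both positive and negative eigenvalues. A minimal classification sketch; the example Hessians in the usage below are hypothetical and unrelated to any actual lens merit function.

```python
import numpy as np

def classify_critical_point(hessian):
    """Classify a critical point by the signs of its Hessian eigenvalues
    (assumes all eigenvalues are nonzero, i.e. a non-degenerate point)."""
    eig = np.linalg.eigvalsh(np.asarray(hessian, dtype=float))
    if all(e > 0 for e in eig):
        return "minimum"
    if all(e < 0 for e in eig):
        return "maximum"
    return "saddle"          # mixed curvature directions
```

For instance, the function f(x, y) = x**2 - y**2 has Hessian diag(2, -2) at the origin, which this sketch classifies as a saddle.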
Because the hard-killing scheme based on element sensitivity values may lead to incorrect predictions of which inefficient elements to remove, and the value of the objective function may deteriorate sharply during the iterations, a check position (CP) control is proposed to prevent erroneous topology designs generated by the rejection criteria of evolutionary methods. For this purpose, we introduce a sort of orthotropic cellular microstructure (OCM) element with moderate pseudo-density that acts as a compromise between a solid element and a void OCM element. In this way, all inefficient elements removed previously are automatically replaced with the moderate OCM elements depending upon the deterioration of the objective function. Erroneously removed elements are then identified in the updated finite element model through direct sensitivity computation of the moderate OCM elements and are finally recovered by bidirectional element replacement. In addition, detailed structures with checkerboard patterns are eliminated by controlling the local structural bandwidth with the so-called threshold method. Typical optimization examples of structural compliance and natural frequency that were difficult to tackle are solved by the proposed design procedure. Satisfactory numerical results are obtained.doptimization; evolutionary method; erroneous design; check position control; moderate microstructureEDP sciencesAerospace Structures)uuid:ff66e490db594e3cb6e2926da4f074dfDhttp://resolver.tudelft.nl/uuid:ff66e490db594e3cb6e2926da4f074df5Algebraic Connectivity Optimization via Link AdditionWang, H.; Van Mieghem, P.Dalgebraic connectivity; synchronization; optimization; link additionICST)uuid:a8ec762b8e2a422f9978a6e85673df40Dhttp://resolver.tudelft.nl/uuid:a8ec762b8e2a422f9978a6e85673df40CUnderstanding catchment behaviour through model concept improvementFenicia, F.Savenije, H.H.G. 
(promotor)This thesis describes an approach to model development based on the concept of iterative model improvement, a process where, by trial and error, different hypotheses of catchment behaviour are progressively tested, and the understanding of the system proceeds through a combined process of modelling and experimenting. We show a number of case studies in which we demonstrate the need to combine the power of physical laws and established scientific theories with qualitative understanding of natural phenomena, which requires creativity and intuition. We emphasize the importance of the 'Art' of modelling, which is often a neglected aspect of scientific research. We address topical research issues such as reducing model structural uncertainty through progressive understanding of catchment behaviour, incorporating process knowledge in the different stages of model development, linking modelling and experimentation, and understanding the contribution of data to process understanding.Ohydrological modelling; calibration; optimization; uncertainty; model structure)uuid:7cd0b27cf95b47c3969b36c4b7affa0dDhttp://resolver.tudelft.nl/uuid:7cd0b27cf95b47c3969b36c4b7affa0dWSaddle-point construction in the design of lithographic objectives, part 2: ApplicationMarinescu, O.; Bociort, F.Hsaddle point; lithography; optimization; optical system design; EUV; DUV)uuid:f16b0c66bef346f9a84c174c0e0bc449Dhttp://resolver.tudelft.nl/uuid:f16b0c66bef346f9a84c174c0e0bc449RSaddle-point construction in the design of lithographic objectives, part 1: Method)uuid:324e0e8a527e43bb87c08e131654acc9Dhttp://resolver.tudelft.nl/uuid:324e0e8a527e43bb87c08e131654acc94Performance Enhancement of Abrasive Waterjet CuttingKarpuschewski, B. (promotor)Abrasive Waterjet (AWJ) Machining is a recent non-traditional machining process. This technology is widely used in industry for cutting difficult-to-machine materials, milling slots, polishing hard materials, etc. AWJ machining has many advantages, e.g. 
it can cut net-shape parts, no heat is generated during the cutting process, and it is particularly environmentally friendly as it is clean and does not create dust. Although AWJ machining has many advantages, a big disadvantage of this technology is its relatively high cutting cost. Consequently, the reduction of the machining cost and the increase of the profit rate are big challenges in AWJ technology. To reduce the total cutting cost as well as to increase the profit rate, this research focuses on performance enhancement of AWJ cutting with two possible solutions: optimization of the cutting process and abrasive recycling. The first solution to enhance the AWJ cutting performance is the optimization of the AWJ cutting process. As a precondition, it is necessary to have a cutting process model for optimization. In order to use such a model for this purpose, several important requirements are given. The most important requirement for such a model is that it can describe the optimum relation between the optimum abrasive mass flow rate and the maximum depth of cut. To develop a cutting process model which can be used for AWJ optimization, many available models have been analyzed. Since the most important requirement for a process model (see above) can be obtained from Hoogstrate's model, an extension of this model is carried out. The extended model consists of three sub-models: a pure waterjet model, an abrasive waterjet model and an abrasive-work material interaction model. The extended cutting process model is more accurate than the original one and is capable of optimizing AWJ systems. The influence of many process parameters, the work materials, and the abrasive type and size has been taken into account. Up to now, there has not been a model for the prediction of AWJ nozzle wear. Therefore, modeling of the nozzle wear rate has been carried out and a model for the wear rate of nozzles made from composite carbide has been proposed. 
Based on the extended cutting process model, two types of optimization applications have been carried out, related to technical problems and economical problems. From the results of these problems, regression models for determining the optimum nozzle exchange diameter and the optimum abrasive mass flow rate for various objectives have been proposed. The other solution to enhance the cutting performance is abrasive recycling. In this study, GMA garnet, the most popular abrasive for blast cleaning and waterjet cutting, has been chosen for the investigation. The recycling of GMA abrasives has been investigated on both the technical and the economical side. On the technical side, the reusability and the cutting performance of the recycled and recharged abrasives have been analysed. The influence of the recycled and recharged abrasives on the cutting quality was studied. On the economical side, first, the cost of recycled and recharged abrasives was predicted. Then, economic comparisons for selecting abrasives were carried out. In addition, the economics of cutting with recycled and recharged abrasives have been studied. Several suggestions for an abrasive recycling process that promises a more effective use of the grains have been proposed. By optimization of the cutting process and by abrasive recycling, the cutting performance can be increased, the total cutting cost can be reduced, and the profit rate can be enlarged considerably. Consequently, the performance of AWJ cutting can be enhanced significantly.Gabrasive waterjet; waterjet; optimization; abrasive recycling; modeling)uuid:20b5a4b564194593a66848074982bcb3Dhttp://resolver.tudelft.nl/uuid:20b5a4b564194593a66848074982bcb3dModel-based life-cycle optimization of well locations and production settings in petroleum reservoirsZandvliet, M.J.0Bosgra, O.H. (promotor); Jansen, J.D. 
(promotor)In the coming years there is a need to increase production from petroleum reservoirs, and there is enormous potential to do so by increasing the recovery factor. This is possible by making better use of recent technological developments, such as horizontal wells, downhole valves and sensors. However, actually making better use of these improved capabilities is difficult because of many open problems in reservoir management and production operations processes. Consequently, there is significant scope to increase the recovery factor of oil and gas fields by tailoring tools from the systems and control community to efficiently perform dynamic optimization of wells (e.g. number, locations) and their production settings (e.g. bottom-hole pressures, flow rates, valve settings) based on uncertain reservoir models, in the sense that they lead to good decisions while requiring limited time from the user. This thesis aims at developing these tools, and the main contributions are as follows. Many production setting optimization problems can be written as optimal control problems that are linear in the control. If the only constraints are upper and lower bounds on the control, these problems can be expected to have pure bang-bang optimal solutions. The adjoint method to derive gradients of a cost function with respect to production settings can be combined with robust optimization to efficiently compute settings that are robust against uncertainty in reservoir models. The gradients used in production setting optimization can be used to efficiently compute directions in which to iteratively improve upon an initial well configuration by surrounding the to-be-placed wells with pseudo wells (i.e. wells that operate at a negligible rate). The controllability and observability properties of a single-phase flow reservoir model are analyzed. 
It is shown that pressures near wells in which we can control the flow rate or bottomhole pressure are controllable, whereas pressures near wells in which we can measure the flow rate or bottomhole pressure are observable. Finally, a new method of regularization in history matching is presented, based on this controllability and observability analysis.Cpetroleum; reservoir engineering; systems and control; optimizationMechanical Maritime and Materials Engineering)uuid:4f4b7fb14a7746bb9c14ff5e4bb6477cDhttp://resolver.tudelft.nl/uuid:4f4b7fb14a7746bb9c14ff5e4bb6477cZOptimization of extreme ultraviolet mirror systems comprising highorder aspheric surfacesSmirror systems; aspheres; extreme ultraviolet lithography; optimization; relaxation)uuid:5feb9aa6< d1bc482b85707e892bdf3bc5Dhttp://resolver.tudelft.nl/uuid:5feb9aa6d1bc482b85707e892bdf3bc5GOptimization based image registration in the presence of moving objectsBKarimi Nejadasl, F.; Gorte, B.G.H.; Hoogendoorn, S.P.; Snellen, M.Mregistration; optimization; Differential Evolution; NelderMead; 3D Euclidean)uuid:d50848b4cd084482a8247d51700be44eDhttp://resolver.tudelft.nl/uuid:d50848b4cd084482a8247d51700be44eMIntegrated modeling of ozonation for optimization of drinking water treatmentvan der Helm, A.W.C.van Dijk, J.C. (promotor)Drinking water treatment plants automation becomes more sophisticated, more online monitoring systems become available and integration of modeling environments with control systems becomes easier. This gives possibilities for modelbased optimization. In operation of drinking water treatment plants, the processes are usually optimized individually on the basis of "rules of thumb" and operator knowledge and experience. However, changes in operational conditions of individual processes can affect subsequent processes and an optimal operation, which can include a number of water quality parameters, costs and environmental impact is different for every operator. 
Improvement of the operation of a drinking water treatment plant is possible by using an integrated model of the entire water treatment plant as an instrument for operational support and for process control. For this purpose, it is important that explicit objectives are defined for the operation. From the research it is concluded that the objective for integrated optimization of the operation of drinking water treatment should be the improvement of water quality and not a priori reduction of environmental impact or costs. In the research an integrated model for ozonation, including ozone decay, bromate formation, assimilable organic carbon (AOC) formation, E. coli disinfection, CT and decrease in UV absorbance at 254 nm (UVA254) is developed. With the model, different control strategies for ozonation are assessed. The research also describes a newly developed design for ozone installations, the dissolved ozone plug flow reactor, (DOPFR) and the effect of character and removal of natural organic matter (NOM) prior to ozonation. The research was carried out as part of the project Promicit, a cooperation of Waternet, Delft University of Technology, DHV B.V. and ABB B.V. and was subsidized by SenterNovem, agency of the Dutch Ministry of Economic Affairs. Part of the experiments was performed in cooperation with Kiwa Water Research.modeling; modelling; integrated; ozonation; optimization; drinking water; drinking water treatment; bromate; natural organic matter; nom; disinfection; assimilable organic carbon; aoc; life cycle assessment; lca; bottled waterWater Management Academic Press)uuid:28b2169c2dc04258b5728c2320cf81d1Dhttp://resolver.tudelft.nl/uuid:28b2169c2dc04258b5728c2320cf81d1;Practical guide to saddlepoint construction in lens design,Bociort, F.; Van Turnhout, M.; Marinescu, O.Saddlepoint construction (SPC) is a new method to insert lenses into an existing design. 
With SPC, by inserting and extracting lenses new system shapes can be obtained very rapidly, and we believe that, if added to the optical designer s arsenal, this new tool can significantly increase design productivity in certain situations. Despite the fact that the theory behind SPC contains mathematical concepts that are still unfamiliar to many optical designers, the practical implementation of the method is actually very easy and the method can be fully integrated with all other traditional design tools. In this work we will illustrate the use of SPC with examples that are very simple and illustrate the essence of the method. The method can be used essentially in the same way even for very complex systems with a large number of variables, in situations where other methods for obtaining new system shapes do not work so well.2optical system design; optimization; saddle points)uuid:c05ad7d655044fa4a14f496e9bb20928Dhttp://resolver.tudelft.nl/uuid:c0< 5ad7d655044fa4a14f496e9bb20928BPredictability and unpredictability in optical system optimizationVan Turnhout, M.; Bociort, F.GLocal optimization algorithms, when they are optimized only for speed, have in certain situations an unpredictable behavior: starting points very close to each other lead after optimization to different minima. In these cases, the sets of points, which, when chosen as starting points for local optimization, lead to the same minimum (the socalled basins of attraction), have a fractallike shape. Before it finally converges to a local minimum, optimization started in a fractal region first displays chaotic transients. The sensitivity to changes in the initial conditions that leads to fractal basin borders is caused by the discontinuous evolution path (i.e. the jumps) of local optimization algorithms such as the dampedleastsquares method with insufficient damping. At the cost of some speed, the fractal character of the regions can be made to vanish, and the downward paths become more predictable. 
The borders of the basins depend on the implementation details of the local optimization algorithm, but the saddle points in the merit function landscape always remain on these borders.Roptimization; optical system design; saddle points; fractals; basins of attraction)uuid:703cd3c28cf448f7babc8b33cdd38949Dhttp://resolver.tudelft.nl/uuid:703cd3c28cf448f7babc8b33cdd38949 Optimization technique for ED&PEKumar, P.; Bauer, P.optimization; BLDC driveTulip)uuid:8eff9ef1b5094f3db1f77d1357c53ff8Dhttp://resolver.tudelft.nl/uuid:8eff9ef1b5094f3db1f77d1357c53ff8qStructured controller synthesis for mechanical servosystems: Algorithms, relaxations and optimality certificatesHol, C.W.J.1Scherer, C.W. (promotor); Bosgra, O.H. (promotor)In many application areas of mechanical servosystems the high demands on the performance often imply a tightly tuned feedback controller, that takes dynamical interaction into account. Modelbased Hoptimal controller synthesis is a wellsuited technique for this purpose. However, the stateoftheart synthesis approach yields controllers with high McMillan degree that can not be implemented in realtime at high samplingrates, because of the limited computational capacity. This motivates to constrain the McMillan degree of the controller. The aim of this thesis is to provide numerical tools for Hoptimal degree constrained (or otherwise structured) controller synthesis. For this problem we have developed relaxations that are based on SumOfSquares polynomials. Their optimal values are lower bounds on the globally optimal structured controller synthesis problem and can be computed by solving LMI problems. It is guaranteed, that the bounds converge to best achievable performance as we improve our relaxations. To make this technique feasible for plants with high McMillan degree, we proposed a computationally less demanding scheme based on partial dualization. The SumOfSquares relaxations have also been applied to robust polynomial SemiDefinite Programs (SDPs). 
Also for this case a sequence of relaxations has been developed, whose optimal values converge from below to the optimal value of the robust SDP. Furthermore for the structured controller synthesis problem an Interior Point algorithm has been developed. It is shown how this algorithm can be made more efficient, by exploiting the controltheoretic characteristics of the problem. Conditions have been derived to verify local optimality of the optimized controller. Finally, it has been illustrated by realtime experiments that the algorithms described in this thesis can be used to synthesize highperforming fixedorder controllers for a new prototype of a wafer stage.xcontroller synthesis; static output feedback; optimization; sumofsquares; matrix inequalities; bmi; lmi; interior point)uuid:11464f49b10b48ed90759e281514618aDhttp://resolver.tudelft.nl/uuid:11464f49b10b48ed90759e281514618aXAnalytical and Numerical Developments in Optimal Shape Design for Aerospace: An overview
Pironneau, O.Loptimization; < optimal shape design; gradient methods; finite element methods)uuid:63a75aa9c71e44399d0b864fe8c2915dDhttp://resolver.tudelft.nl/uuid:63a75aa9c71e44399d0b864fe8c2915dYA continuous adjoint formulation with emphasis to aerodynamicturbomachinery optimization'Papadimitriou, D.I.; Giannakoglou, K.C.PThis paper summarizes progress, recently made in the Lab. of Thermal Turbomachines of NTUA, on the formulation and use of the continuous adjoint methods in aerodynamic shape optimization problems. The basic features of state of the art adjoint methods and tools which are capable of handling arbitrary objective functions, cast in the form of either boundary or field integrals, are presented. Starting point of the presentation is the formulation of the continuous adjoint method for arbitrary integral objective functionals in problems governed by arbitrary, linear or nonlinear, first or second order state pde's; the scope of this section is to demonstrate that the proposed formulation is general without being restricted to aerodynamics. It is noticeable that, regardless of the type of functional (field of boundary integral) the expressions of its gradient with respect to the design variables include boundary integrals only. Thus, the derived adjoints can be used with either structured or unstructured grids and there is no need for repetitive remeshing or computation of field integrals which increase the CPU cost and deteriorate the computational accuracy. Then, the presentation focuses on aerodynamic shape optimization problems governed by the compressible fluid flow equations, numerically solved through a timemarching formulation and an upwind discretization scheme for the convection terms. 
Two design problems, namely the inverse design of a 2D cascade at inviscid flow conditions (used as a test bed for the assessment of three descent algorithms based on the same gradient information) and the design optimization of a 3D peripheral compressor cascade for minimum viscous losses are presented. For the latter, the flow is turbulent and the field integral of entropy generation, recently proposed by the same authors, is used as objective function.Tcontinuous adjoint; inverse design; optimization; losses minimization; turbomachines)uuid:cdc345d1a0b54b7098fbbc2235c818a6Dhttp://resolver.tudelft.nl/uuid:cdc345d1a0b54b7098fbbc2235c818a6DApplication of sonic boom optimization to supersonic aircraft design/Daumas, L.; Dinh, Q.V.; Kleinveld, S.; Rog, G.@Preliminary results on shape optimization of a wingbody configuration aiming at reducing sonic boom overpressure will be discussed. The optimization process uses a CAD modeler and an Euler CFD code with adjoint. Thickness, scale, twist and camber at section level were used to obtain gains in ground pressure signature.Kadjoint; CAD modeller; optimization; sonic boom; supersonic aircraft design)uuid:8b3c60a54e174680b7c6252fb4ae87caDhttp://resolver.tudelft.nl/uuid:8b3c60a54e174680b7c6252fb4ae87ca*VIVACE: Multidisciplinary Decision Support Homsi, P.collaboration; multidisciplinary; optimization; decision; knowledge; data management; virtual enterprise; aeronautic; aircraft; engine)uuid:197e6db7921d4786958db0c06079f1fcDhttp://resolver.tudelft.nl/uuid:197e6db7921d4786958db0c06079f1fcSRealistic highlift design of transport aircraft by applying numerical optimization\Wild, J.; Brezillon, J.; Mertins, R.; Quagliarella, D.; Germain, E.; Amoignon, O.; Moens, F.The design activity within the EUROLIFT II project is targeted towards an improvement of the takeoff performance of a generic transport aircraft configuration by a redesign of the trailing edge flap. 
The involved partners applied different optimization strategies as well as different types of flow solvers in order to cover a wide range of possible approaches for aerodynamic design optimization. The optimization results obtained by the different partners have been crosschecked in order to eliminate solver dependencies and to identify the best obtained design. The final selected design has been applied to the wind tun< nel model and the test in the European Transonic Wind Tunnel (ETW) at high Reynolds number confirms the predicted improvements.>optimization; highlift; application; CFD; wind tunnel testing)uuid:8abc533db86046c188685eabdb33e415Dhttp://resolver.tudelft.nl/uuid:8abc533db86046c188685eabdb33e415.Partitioned strategies for optimization in FSI8Bletzinger, K.U.; Gallinger, T.; Kupzok, A.; Wchner, R.In this paper the possibility of the optimization of coupled problems in partitioned approaches is discussed. As a special focus, surface coupled problems of fluidstructure interaction are considered. Well established methods of optimization are analyzed for usage in the context of coupled problems and in particular for a solution through partitioned approaches. The main benefits expected from choosing a partitioned solution strategy as basis for the optimization are: a high flexibility in the usage of different solvers and therefore different approaches for the singlefield problems as well as the possibility to apply well tested and sophisticated methods for the modeling of complex problems.Qoptimization; coupled problems; fluidstructure interaction; partitioned approach)uuid:fc98242638af4ba7bc57c3e44f14c4c6Dhttp://resolver.tudelft.nl/uuid:fc98242638af4ba7bc57c3e44f14c4c6BAerodynamic optimization of an airfoil using gradient based method/Mirzaei, M.; Roshanian, J.; Nasrin Hosseini, S.A gradient based method is presented for optimization of an airfoil configuration. The flow is governed by two dimensional, compressible Euler equations. 
A finite volume code based on unstructured grid is developed to solve the equations. The procedure is carried out for optimizing an airfoil with initial configuration of NACA 0012. The advantage of this technique over the other gradient based methods is its speed of converging.ACFD; optimization; gradient; objective function; design variables)uuid:ea7af067bd4648c8a147fe4cddc936ecDhttp://resolver.tudelft.nl/uuid:ea7af067bd4648c8a147fe4cddc936ec1Looking for order in the optical design landscapeBociort, F.; Van Turnhout, M.In presentday optical system design, it is tacitly assumed that local minima are points in the merit function landscapewithout relationships between them. We will show however that there is a certain degree of order in the design landscapeand that this order is best observed when we change the dimensionality of the optimization problem and when weconsider not only local minima, but saddle points as well. We have developed earlier a computational method fordetecting saddle points numerically, and a method, then applicable only in a special case, for constructing saddle points by adding lenses to systems that are local minima. The saddle point construction method will be generalized here and wewill show how, by performing a succession of onedimensional calculations, many local minima of a given global searchcan be systematically obtained from the set of local minima corresponding to systems with fewer lenses. As a simpleexample, the results of the Cooke triplet global search will be analyzed. 
In this case, the vast majority of the saddlepoints found by our saddle point detection software can in fact be obtained in a much simpler way by saddle point construction, starting from doublet local minima.>saddle point; optimization; optical system design; lithographyOptics Research Groep)uuid:cdd281b20bc74f57a9fb3ddbe49c1082Dhttp://resolver.tudelft.nl/uuid:cdd281b20bc74f57a9fb3ddbe49c1082?Designing lithographic objectives by constructing saddle pointsOptical designers often insert or split lenses in existing designs. Here, we present, with examples from Deep and Extreme UV lithography, an alternative method that consists of constructing saddle points and obtaining new local minima from them. The method is remarkable simple and can therefore be easily integrated with the traditional design techniques. It has significantly improved the productivity of the design process in all cases in which it has been applied so far.Hsaddle point; lithography; optical sys< tem design; optimization; DUV; EUV)uuid:b842a4d007084c37b3e7e86f91c72dd4Dhttp://resolver.tudelft.nl/uuid:b842a4d007084c37b3e7e86f91c72dd4QChallenges for process system engineering in infrastructure operation and controlGLukszo, Z.; Weijnen, M.P.C.; Negenborn, R.R.; De Schutter, B.; Ilic, M.CThe need for improving the operation and control of infrastructure systems has created a demand on optimization methods applicable in the area of complex sociotechnical systems operated by a multitude of actors in a setting of decentralized decision making. This paper briefly presents main classes of optimization models applied in PSE system operation, explores their applicability in infrastructure system operation and stresses the importance of multilevel optimization and multiagent model predictive control. If you want to cite this report, please use the following reference instead: Z. Lukszo, M.P.C. Weijnen, R.R. Negenborn, B. De Schutter, and M. 
Ilic, Challenges for process system engineering in infrastructure operation and control, in 16th European Symposium on Computer Aided Process Engineering and 9th International Symposium on Process Systems Engineering (GarmischPartenkirchen, Germany, July 2006) (W. Marquardt and C. Pantelides, eds.), vol. 21 of ComputerAided Chemical Engineering, Amsterdam, The Netherlands: Elsevier, ISBN 9780444529695, pp. 95 100, 2006.Linfrastructures; optimization; multiagent systems; model predictive control)uuid:37f7ee079bb84b13be8fdc4d27417b0fDhttp://resolver.tudelft.nl/uuid:37f7ee079bb84b13be8fdc4d27417b0fHModel reduction for dynamic realtime optimization of chemical processesVan den Berg, J.Bosgra, O.H. (promotor)The value of models in process industries becomes apparent in practice and literature where numerous successful applications are reported. Process models are being used for optimal plant design, simulation studies, for offline and online process optimization. For online optimization applications the computational load is a limiting factor. The focus of this thesis is on nonlinear model approximation techniques aiming at reduction of computational load of a dynamic realtime optimization problem. Two types of model approximation methods were selected from literature and assessed within a dynamic optimization case study: model reduction by projection and physicsbased model reduction. Model order reduction by projection is partially successful. Even with a strongly reduced number of transformed differential equations it is possible to compute acceptable approximate solutions. Projection does not provide predictable results in terms of simulation error and stability and does not reduce the computational load of simulation. 
On the other hand, physicsbased model reduction appeared to be very successful in reducing the computational load of the sequential dynamic optimization problem.1chemical processes; model reduction; optimization"Design, Engineering and Production)uuid:a29ca0b4c17d4a1499c09672b805021eDhttp://resolver.tudelft.nl/uuid:a29ca0b4c17d4a1499c09672b805021eZUncertaintybased Design Optimization of Structures with BoundedButUnknown UncertaintiesGurav, S.P.van Keulen, A. (promotor)Euncertainty; optimization; response surface; parallel computing; MEMSDelft University Press)uuid:7bf2a037c8eb44be96ef411529c4be0bDhttp://resolver.tudelft.nl/uuid:7bf2a037c8eb44be96ef411529c4be0bDTopology Optimization using a Topology Description Function Approachde Ruiter, M.J.van Keulen, F. (promotor)#During the last two decades, computational structural optimization methods have emerged, as computational power increased tremendously. Designers now have topological optimization routines at their disposal. These routines are able to generate the entire geometry of structures, provided only with information on loads, supports, and space to work in. The most common way to do this is to partition the available space in elements, and to determine the material content of each of the elements separately. This thesis presents a different approach, namely the \emp< h{Topological Description Function} (TDF) approach. The TDF is a function parametrized by design variables. The function determines a geometry using a levelset approach. A finite element representation of the geometry then is used to determine how well the geometry performs with respect to objective and constraints. This information is given to an optimization program, which has the purpose of finding an optimal combination of values for the design variables. 
This approach decouples the geometry description of the design from the evaluation, allowing the designer to tune the detailedness of the geometry and the computational grid separately as wished. In this thesis, the concept of a TDF is explained in detail. Using a genetic algorithm for the optimization turns out to be too computationally expensive, however, it shows the validity of the TDF as a geometry description method. A method based on an intuitive updating scheme shows that the TDF approach can be used to do topology optimization.level set method; topology; optimization; tdf; topology description function; genetic algorithm; optimality criteria method; structural optimization)uuid:33282f5fe0934a9a88e8819ccfb40114Dhttp://resolver.tudelft.nl/uuid:33282f5fe0934a9a88e8819ccfb40114EModelbased optimization of the operation procedure of emulsification Stork, M.Emulsions are widely encountered in the food and cosmetic industry. The first food we consume is an emulsion, namely breast milk. Other common emulsions are mayonnaise, dressings, skin creams and lotions. Equipment often used for the production of oilinwater emulsions in the food industry consists of a stirred vessel in combination with a colloid mill and a circulation pipe. Within this setup there are two main variations: i) Configuration I where the colloid mill acts like a shearing device and at the same time as a pump. This configuration is used in the majority of the production facilities, and ii) Configuration II where the shearing and pumping action are not coupled. The operation procedure for obtaining a certain predefined emulsion quality is often established based on experience (best practice). This is most probably timeconsuming (e.g. large experimental efforts for new developed products) and it is also unclear if the process is operated at its optimum (e.g. in minimum time). An other drawback is that there is no feedback during the production process. 
Hence, it is not possible to deal with disturbances acting on the process. A possible consequence is that, at the end of the production process, the product quality specifications are not met and the product has to be classified as offspec. In order to be able to enlarge the efficiency of the production processes and to shorten the time to market of new products  and therewith create an advantage over competition  it is necessary to overcome these limitations of the current operation procedure. In the work reported a first step is set into this direction. A model describing the droplet size distribution (DSD) and the emulsion viscosity as function of the time was developed and several offline optimization studies were performed. The model comprises several fit parameters and experiments were performed in order to estimate the values of these parameters. A number of additional experiments were performed to compare the simulated results with the measurements (model validation). The results of the parameter estimation and the model validation show that the simulated results are qualitatively in good agreement with the measurement data. Given the overall performance of the model it is expected that the model quality is sufficient to render practical relevant optimization results. Although the optimization studies have been performed for a model emulsion, small scale equipment and are not yet experimentally validated, the results of this work strongly suggest that it is indeed possible to minimize the production times and to shorten the product development times for new products. This overall conclusion is based on the following observations: 1) The< optimization results show that it is beneficial to produce emulsions with Configuration II:  Configuration II allows the production of emulsions with a bimodal DSD. No operation procedure was found for the production of such an emulsion in Configuration I.  
The production of emulsions in Configuration II is always at least as fast as in Configuration I. 2) The followed approach allows to calculate: * If an emulsion with a certain, predefined, DSD and emulsion viscosity can be produced. * How the process should be controlled in order to produce such an emulsion. * How the process should be controlled to produce this emulsion in minimal time. 3) The optimization results show that it is possible to produce emulsions with: * A bimodal DSD. * Less oil while maintaining a similar DSD and value of the emulsion viscosity evaluated at a shear rate of 10 1/s by adapting only the operation procedure. Hence, the addition of extra stabilizers is not considered. This offers possibilities for the production of a broader range of emulsion products and could direct product development in a new direction. Based on this, it is worthwhile and therefore recommended to expand this research work in the direction of industrial emulsions.modeling; emulsions; emulsification; optimization; milp; parameter estimation; frymadelmix; colloid mill; population balance equations; droplet size distribution; mayonnaise)uuid:e15f936a94394247b0f9051619b34cd4Dhttp://resolver.tudelft.nl/uuid:e15f936a94394247b0f9051619b34cd4TFinding new local minima by switching merit functions in optical system optimization'Serebriakov, A.; Bocoirt, F.; Braat, J.Moptical design; geometrical optics; optimization; merit function; aberrations)uuid:43fb3a2f0c02406aad7d374ec5f71d63Dhttp://resolver.tudelft.nl/uuid:43fb3a2f0c02406aad7d374ec5f71d634Optimization and analysis of deepUV imaging systemsSerebriakov, A.G.Braat, J.J.M. 
(promotor)This thesis has been devoted to two main subjects: the compensation of birefringence induced by spatial dispersion (BISD) in DeepUV lithographic objectives and the optimization of optical systems in general.!optimization; lithography; optics)uuid:05dfafdccd7c4b17a92f8420e5bb78a0Dhttp://resolver.tudelft.nl/uuid:05dfafdccd7c4b17a92f8420e5bb78a0KGenerating saddle points in the merit function landscape of optical systemsFinding multiple local minima in the merit function landscape of optical system optimization is a difficult task, especially for complex designs that have a large number of variables. We discuss here a method that enables a rapid generation of new local minima for optical systems of arbitrary complexity. We have recently shown that saddle points known in mathematics as Morse index 1 saddle points can be useful for global optical system optimization. In this work we show that by inserting a thin meniscus lens (or two mirror surfaces) into an optical design with N surfaces that is a local minimum, we obtain a system with N+2 surfaces that is a Morse index 1 saddle point. A simple method to compute the required meniscus curvatures will be discussed. Then, letting the optimization roll down on both sides of the saddle leads to two different local minima. Often, one of them has interesting special properties.)uuid:ab738b03b9064dc79e9c6ac16446af10Dhttp://resolver.tudelft.nl/uuid:ab738b03b9064dc79e9c6ac16446af10HSaddle points in the merit function landscape of lithographic objectivesThe multidimensional merit function space of complex optical systems contains a large number of local minima that are connected via links that contain saddle points. In this work, we illustrate a method to construct such saddle points with examples of deep UV objectives and extreme UV mirror systems for lithography. 
The central idea of our method is that, at certain positions in a system with N surfaces that is a local minimum, a thin meniscus lens or two mirror surfaces can be introduced to construct a system with N+2 surfaces that is a saddle point. When the optimization goes down on the two sides of the saddle point, two mi< nima are obtained. We show that often one of these two minima can be reached from several other saddle points constructed in the same way. The practical advantage of saddlepoint construction is that we can produce new designs from the existing ones in a simple, efficient and systematic manner.Csaddle point; lithography; optimization; optical system design; EUV)uuid:1e3ce36df1f64fbd934942ba2352d668Dhttp://resolver.tudelft.nl/uuid:1e3ce36df1f64fbd934942ba2352d668GThe network structure of the merit function space of EUV mirror systemsbThe merit function space of mirror systems for EUV lithography is studied. Local minima situated in a multidimensional merit function space are connected via links that contain saddle points and form a network. In this work we present the first networks for EUV lithographic objectives and discuss how these networks change when control parameters, such as aperture and field are varied and constraints are used to limit the variation domain of the variables. 
A good solution in a network obtained with a limited number of variables has been locally optimized with all variables to meet practical requirements.Knetwork; saddle point; optical system design; EUV lithography; optimization)uuid:a4d313dc81f64f5fa83a404f539aa838Dhttp://resolver.tudelft.nl/uuid:a4d313dc81f64f5fa83a404f539aa838IOptimization of multilayer reflectors for extreme ultraviolet lithography#Bal, M.F.; Singh, M.; Braat, J.J.M.Vmultilayer; optimization; extreme ultraviolet lithography; graded multilayers; imaging)uuid:c253f0faa879422b8027b3de1f91775aDhttp://resolver.tudelft.nl/uuid:c253f0faa879422b8027b3de1f91775akAvoiding unstable regions in the design space of EUV mirror systems comprising highorder aspheric surfaces%Marinescu, O.; Bociort, F.; Braat, J.UWhen Extreme Ultraviolet mirror systems having several highorder aspheric surfaces are optimized, the configurations often enter into highly unstable regions of the parameter space. Small changes of system parameters lead then to large changes in ray paths, and therefore optimization algorithms crash because certain sssumptions upon which they are based become invalid. We describe a technique that keeps the configuration away from the unstable regions. The central component of our technique is a finiteaberration quantity, the socalled quasionvariant, which has been originally introduced by H. A. Buchdahl. The quasiinvariant is computed for several rays in the system, and its average change per surface is determined for all surfaces. Small values of these average changes indicate stability. The stabilization technique consists of two steps: First, we obtain a stable initial configuration for subsequent optimization by choosing the system parameters such that the quasiinvariant change per surface is minimal. Then, if the average changes per surfaces of the quasiinvariant remain small during optimization, the configuration is kept in the safe region of the parameter space. 
This technique is applicable to arbitrary rotationally symmetric optical systems. Examples from the design of aspheric mirror systems for EUV lithography will be given.
mirror systems; aspheres; EUV lithography; optimization; relaxation

uuid:b73b1b5be1d84151a9206cd5d44af136
http://resolver.tudelft.nl/uuid:b73b1b5be1d84151a9206cd5d44af136
Dynamic Optimization in Business-wide Process Control
Tousain, R.L.
Bosgra, O.H. (promotor); Backx, A.C.P.M. (promotor)
The chemical marketplace is a global one with strong competition between manufacturers. To continuously meet customer demands regarding product quality and delivery conditions without the need to maintain very large storage levels, chemical manufacturers need to strive for production on demand. In this thesis we research how market-oriented production can be realized for the particular class of multi-grade continuous processes. For this class of processes, production on demand is particularly challenging due to the complex trade-off between performing costly and time-consuming changeovers and maintaining high storage levels. The first requirement for market-oriented production is that production management cooperates with purchasing and sales management. We propose the use of a scheduler as a decision support system in a cooperative organization constituted by these players. In such a scheduler, decision making is represented using decision variables, and their effect on the company-wide objective, which is chosen to be the added value of the company, is modeled. The scheduler then selects a decision strategy that is optimal with respect to the objective and presents this strategy to the decision makers, who use it as a basis for their actual decisions. The company-market interaction is modeled using a transaction-based modeling framework. Therein, not the actual market behavior is modeled, but the expected effect of the interaction of the company with the market. Two types of transactions can be modeled in this framework: orders, which result from contracts with suppliers and customers, and opportunities, which express the expected sales and purchases. Two different approaches to the modeling of production decisions are taken, the choice of which depends largely on the implementation of the process control hierarchy that is assumed.
In the first approach, production management and control is performed by a single-level controller, and the control decisions are the minute-to-minute manipulation of the valves. This approach is academically interesting, though practically intractable due to the combination of long horizons and fast sampling times. In the second approach, the process control hierarchy consists of a scheduling layer, at which it is determined what products will be produced when, and a process control layer, which determines how this production is realized. This approach is taken in the rest of the thesis.
chemical processes; optimization; supply chain

uuid:e7367a122b864e56931c0e3bbcb93211
http://resolver.tudelft.nl/uuid:e7367a122b864e56931c0e3bbcb93211
Water Demand Management: Approaches, Experiences and Application to Egypt
Mohamed, A.S.
Van Beek, E. (promotor); Savenije, H.G. (promotor)
Egypt; demand management; conservation; reuse; new lands; framework for analysis; strategies; criteria; optimization; financial incentives; water resources management

uuid:0bc0134ec5e84062956d979d049352a8
http://resolver.tudelft.nl/uuid:0bc0134ec5e84062956d979d049352a8
Dynamic Water-System Control: Design and Operation of Regional Water-Resources Systems
Lobbrecht, A.H.
Segeren, W.A. (promotor); Lootsma, F.A. (promotor)
water management; water resources; control system; real-time control; dynamic control; optimization; successive linear programming; interests; strategy; design

uuid:6b34b76a72e749229a6ab2f389b53877
http://resolver.tudelft.nl/uuid:6b34b76a72e749229a6ab2f389b53877
Verkenning genetische algorithmen, een hulpmiddel bij de inrichting van een Rijntak [Exploration of genetic algorithms, a tool for the layout of a Rhine branch]
Goossens, J.G.C.M.; Boogaard, H.F.P. van den
Waal; optimalisering; optimization
nl
Deltares (WL)

uuid:d1f186a566014bfba72f9e007977d6e9
http://resolver.tudelft.nl/uuid:d1f186a566014bfba72f9e007977d6e9
Interior point techniques in optimization: Complementarity, sensitivity and algorithms
Jansen, B.
Lootsma, F.A. (promotor); Boender, C.G.E. (promotor)
optimization; sensitivity analysis; interior point algorithms

uuid:e80f3094dbf54df2b9e573e0937e26ec
http://resolver.tudelft.nl/uuid:e80f3094dbf54df2b9e573e0937e26ec
Fuzzy predictive control based on human reasoning
Babuska, R.; Sousa, J.; Verbruggen, H.B.
predictive control; fuzzy decision making; optimization; learning
Delft University of Technology

uuid:717630e4194c4d2ab4d1d7f3929b5608
http://resolver.tudelft.nl/uuid:717630e4194c4d2ab4d1d7f3929b5608
User's manual for the computer program CUFUS: Quick design procedure for a CUtout in a FUSelage, version 1.0
Heerschap, M.E.
Structural design procedures; cutouts; pressurized fuselages; finite elements; optimization; sensitivity analysis; NASTRAN; PATRAN

uuid:afd31d182efe4149afbea8f946c7c2c7
http://resolver.tudelft.nl/uuid:afd31d182efe4149afbea8f946c7c2c7
Optimization of design of IMS racing yachts
van Oossanen, P.
optimization; yachts
other

uuid:a65dcff750054a969b250789d7ea095a
http://resolver.tudelft.nl/uuid:a65dcff750054a969b250789d7ea095a
Lokatiekeuze monsternamestation in de Nieuwe Waterweg: Optimalisatiestudie meetlokatie(s) en methodiek [Site selection for a sampling station in the Nieuwe Waterweg: optimization study of measurement location(s) and methodology]
Bleeker, F.J.; Bons, C.A.
waterkwaliteitsmeting; water quality measurement; Nieuwe Waterweg; optimalisering; optimization

uuid:f381200a8c9547b7911e963241f5d4fc
http://resolver.tudelft.nl/uuid:f381200a8c9547b7911e963241f5d4fc
Computer aided optimum design of rubble-mound breakwater cross-sections: Manual of the RUMBA computer package, release 1
De Haan, W.
The computation of the optimum rubble-mound breakwater cross-section is executed on a microcomputer. The RUMBA computer package consists of two main parts: the optimization process is executed by a Turbo Pascal programme, while the second part consists of editing functions written in AutoLISP, the programming language within AutoCAD.
The quarry production, divided into a number of categories, and long-term distributions of deep-water wave heights and water levels form the basis of the computation. Concrete armour units have been excluded from the computation. Deep-water wave heights are converted to wave heights at the site. A set of alternative cross-sections is computed based on both functional performance criteria and Van der Meer's stability formulae for statically stable structures. Construction costs and maintenance costs are determined for each alternative, and the optimum is derived by minimizing their sum. Moreover, the programme provides means to economize the use of the quarry. At this stage the computer programme is useful for feasibility studies of harbour protection or coastal protection in regions where use can be made of a quarry in the neighbourhood of the project site and the use of concrete armour units is excluded in advance. A method to extend the computer programme to the use of concrete armour units is described briefly.
breakwater; armour units; optimization

uuid:3a4a1ebcf64a4fba8d46b62dd47ca290
http://resolver.tudelft.nl/uuid:3a4a1ebcf64a4fba8d46b62dd47ca290
Illustrative examples of optimization techniques for quantitative and qualitative water management: Report on investigation
Verhaeghe, R.J.; Tholen, N.
waterbeheer; water resources management; waterkwaliteit; water quality; optimalisering; optimization

uuid:4d4806a83c2d4e3eabe2f0a40476ef72
http://resolver.tudelft.nl/uuid:4d4806a83c2d4e3eabe2f0a40476ef72
Optimalisatie op basis van lineair programmeren (LP) en dynamisch programmeren (DP): Mogelijkheden en beperkingen [Optimization based on linear programming (LP) and dynamic programming (DP): possibilities and limitations]
Abraham, G.; Beek, E. van
beslissingsondersteunende systemen (BOS); decision support systems (DSS); waterbeheer; water resources management; programmering; programming; optimalisering; optimization

uuid:09369434a25545f4a816baa09f830394
http://resolver.tudelft.nl/uuid:09369434a25545f4a816baa09f830394
Optimalisatietechnieken in kwantitatief waterbeheer: Ontwerp van beheerstrategieën in PAWN [Optimization techniques in quantitative water management: design of management strategies in PAWN]
Samson, J.; Dijkman, J.P.M.
beslissingsondersteunende systemen (BOS); decision support systems (DSS); waterbeheer; water resources management; optimalisering; optimization

uuid:d42a86c7b46c471fad182e74cc461b74
http://resolver.tudelft.nl/uuid:d42a86c7b46c471fad182e74cc461b74
Optimalisatietechnieken in kwantitatief en kwalitatief waterbeheer [Optimization techniques in quantitative and qualitative water management]
Verhaeghe, R.J.
waterbeheer; water resources management; waterkwaliteit; water quality; grondwaterbeheer; groundwater management; watervoorziening; water supply; optimalisering; optimization

uuid:3bfeced07f7b4cda82a3be291e9d8ffe
http://resolver.tudelft.nl/uuid:3bfeced07f7b4cda82a3be291e9d8ffe
Conception de réseau iBGP [iBGP network design]
Buob, M.O.; Uhlig, S.; Meulle, M.
BGP is used today by all Autonomous Systems (ASs) in the Internet. Inside each AS, iBGP sessions distribute the external routes among the routers. In large ASs, relying on a full mesh of iBGP sessions between routers is not scalable, so route reflection is commonly used. The scalability of route reflection compared to an iBGP full mesh comes at the cost of opacity in the choice of best routes by the routers inside the AS. This opacity induces problems such as suboptimal route choices in terms of IGP cost, deflection and forwarding loops. In this work, we propose a solution to design iBGP route-reflection topologies that lead to the same routing as an iBGP full mesh while having a minimal number of iBGP sessions. Moreover, we compute a topology that remains robust even if a single node or link failure occurs. We apply our methodology to the network of a tier-1 ISP. Twice as many iBGP sessions are required to ensure robustness to a single IGP failure.
The number of required iBGP sessions in our robust topology is, however, not much larger than in the current iBGP topology used in the tier-1 ISP network.
BGP; route reflection; iBGP topology design; optimization
CFIP
Network Architectures and Services