Quasi-steady aerodynamic models play an important role in evaluating aerodynamic performance and in designing and optimizing flapping wings. In Chapter 2, we present a predictive quasi-steady model that includes four aerodynamic loading terms. The loads result from the wing's translation, its rotation, their coupling, and the added-mass effect. We demonstrate that all four terms are necessary in a quasi-steady model to predict both the aerodynamic force and the torque. Validations indicate good accuracy in predicting the center of pressure, the aerodynamic loads and the passive pitching motion over a range of Reynolds numbers. Moreover, in contrast to existing quasi-steady models, the proposed model does not rely on any empirical parameters and is therefore more predictive, which enables its application to the shape and kinematics optimization of flapping wings.

For flapping wings with passive pitching motion, a shift in the pitching axis location alters the aerodynamic loads, which in turn change the passive pitching motion and the flight efficiency. Therefore, in Chapter 3, we investigate the pitching axis location that maximizes the power efficiency of flapping wings during hovering flight. Optimization results show that the optimal pitching axis is located between the leading edge and the mid-chord line, closely resembling insect wings. An optimal pitching axis can save up to 33% of the power during hovering flight when compared to optimized traditional wings, as used by most flapping wing micro air vehicles (FWMAVs); traditional wings typically use the straight leading edge as the pitching axis. In addition, the optimized pitching axis enables the drive system to recycle more energy during the deceleration phases than its traditional counterpart. This observation underlines the particular importance of the wing pitching axis location for energy-efficient FWMAVs that use kinetic energy recovery drive systems.

The presence of wing twist can alter the aerodynamic performance and power efficiency of flapping wings by changing the angle of attack. To study the optimal twist of flapping wings for hovering flight, we propose a computationally efficient fluid-structure interaction (FSI) model in Chapter 4. The model uses an analytical twist model for the structural analysis and the quasi-steady aerodynamic model introduced in Chapter 2 for the aerodynamic analysis. Based on the FSI model, we optimize the twist of a rectangular wing by minimizing the power consumption during hovering flight. The power efficiency of the optimized twistable wings is compared with that of correspondingly optimized rigid wings. It is shown that the optimized twistable wings cannot dramatically outperform the optimized rigid wings in terms of power efficiency, unless the pitching amplitude at the wing root is limited. When this amplitude decreases, the optimized twistable wings can still maintain high power efficiency by introducing a certain amount of twist, while the optimized rigid wings need more power for hovering.

Considering the strong influence of the root stiffness on flapping kinematics and power consumption, in Chapter 5 we present an active hinge design that uses electrostatic force to change the hinge stiffness. The hinge is realized by stacking three conducting spring steel layers separated by dielectric Mylar films. A theoretical model shows that the stacked layers switch from slipping with respect to each other to sticking together when the resultant electrostatic force between the layers, which can be controlled by the applied voltage, exceeds a threshold value. The switch from slipping to sticking results in a dramatic increase of the hinge stiffness (about 9x). Therefore, even a short sticking duration can lead to a considerable change in the passive pitching motion. Experimental results confirm that the pitching amplitude decreases as the applied voltage increases. Flight control based on electrostatic force can be very power-efficient since, ideally, the control operations themselves consume no power.

In Chapter 6, we revisit and discuss the most important aspects related to the modeling, design and optimization of flapping wings for efficient hovering flight. In Chapter 7, the overall conclusions are drawn and recommendations for further study are provided.","flapping wing; passive pitching; pitching axis; aerodynamic model; power efficiency; optimization","en","doctoral thesis","","978-94-92516-57-2","","","","","","","","","","","","" "uuid:7f63baf4-98e4-4b79-9307-577299d843e6","http://resolver.tudelft.nl/uuid:7f63baf4-98e4-4b79-9307-577299d843e6","Local Alternative for Energy Supply: Performance Assessment of Integrated Community Energy Systems","Koirala, B.P. (TU Delft Energy & Industry); Chaves Avila, J.P. (Comillas Pontifical University); Gomez, T. (Comillas Pontifical University); Hakvoort, R.A. (TU Delft Energy & Industry); Herder, P.M. (TU Delft Engineering, Systems and Services)","","2016","Integrated community energy systems (ICESs) are emerging as a modern development to re-organize local energy systems, allowing the simultaneous integration of distributed energy resources (DERs) and the engagement of local communities. Although local energy initiatives such as ICESs are rapidly emerging, driven by community objectives such as cost and emission reductions as well as resiliency, an assessment of the value that these systems can provide both to local communities and to the whole energy system is still lacking. In this paper, we present a model-based framework to assess the value of ICESs for local communities. An ICES model based on the distributed energy resources-consumer adoption model (DER-CAM) is used to assess the value of an ICES in the Netherlands.
For the considered community size and local conditions, grid-connected ICESs are already beneficial compared to the alternative of being supplied solely from the grid, both in terms of total energy costs and CO2 emissions, whereas grid-defected systems, although performing very well in terms of CO2 emission reduction, are still rather expensive.","distributed energy resources (DERs); energy communities; smart grids; multi-carrier energy systems; optimization; OA-Fund TU Delft","en","journal article","","","","","","","","","Engineering, Systems and Services","Energy & Industry","","","" "uuid:9b46e18b-1fa3-4517-a666-660e4a50f18e","http://resolver.tudelft.nl/uuid:9b46e18b-1fa3-4517-a666-660e4a50f18e","Computationally efficient analysis & design of optimally compact gear pairs and assessment of gear compliance","Amani, A. (TU Delft Emerging Materials)","Spitas, C. (promotor); Spitas, Vasilios (promotor)","2016","","gear design; spur gear; design parameters; pitch compatibility; interference; corner contact; pointed tip; undercutting; non-standard; non-dimensional; design guidelines; highest point of single tooth contact (HPSTC); finite element analysis; stress analysis; bending strength; compact gears; optimization; centre distance; deviation; tolerance zone; computational modelling; compact gear drive; compliance; bending compliance; foundational compliance; Hertzian compliance; non-dimensional modelling; Saint-Venant's Principle; cubic Hermitian interpolation","en","doctoral thesis","","978-94-6186-739-1","","","","","","2018-11-15","","","","","","" "uuid:e8dbb294-dd57-4c10-b733-b4aded62607c","http://resolver.tudelft.nl/uuid:e8dbb294-dd57-4c10-b733-b4aded62607c","Strategies, Methods and Tools for Solving Long-term Transmission Expansion Planning in Large-scale Power Systems","Fitiwi, D.Z. (TU Delft Energy & Industry)","Herder, P.M. (promotor); Rivier Abbad, M. 
(promotor)","2016","","transmission expansion planning; uncertainty and variability; optimization; stochastic programming; moments technique; clustering","en","doctoral thesis","","978-84-608-9955-6","","","","","","","","","","","","" "uuid:0010fdac-32ec-459b-bb9b-3e6327a85496","http://resolver.tudelft.nl/uuid:0010fdac-32ec-459b-bb9b-3e6327a85496","Gradient-based optimization of flow through porous media: Version 3","Jansen, J.D. (TU Delft Geoscience and Engineering)","","2016","These notes form part of the course material for the MSc course AES1490 ""Advanced Reservoir Simulation"" which has been taught at TU Delft over the past decade as part of the track ""Petroleum Engineering and Geosciences"" in the two-year MSc program ""Applied Earth Sciences"".

The notes cover the gradient-based optimization of subsurface flow. In particular they treat optimization methods in which the gradient information is obtained with the aid of the adjoint method, which is, in essence, an efficient numerical implementation of implicit differentiation in a multivariate setting.

Chapter 1 reviews the basic concepts of multivariate optimization and demonstrates the equivalence of the Lagrange multiplier method for constrained optimization and the use of implicit differentiation to obtain gradients in the presence of constraints.

Chapter 2 introduces the use of Lagrange multipliers and implicit differentiation for the optimization of large-scale numerical systems with the adjoint method. In particular it addresses the optimization of oil recovery from subsurface reservoirs represented as reservoir simulation models, i.e. space- and time-discretized numerical representations of the nonlinear partial differential equations that govern multi-phase flow through porous media. It also covers the use of robust adjoint-based optimization to cope with the inherent uncertainty in subsurface flow models and addresses some numerical implementation aspects.

Chapter 3 gives a brief overview of various further topics related to gradient-based optimization of subsurface flow, such as closed-loop reservoir management and hierarchical optimization of short-term and long-term reservoir performance.

97%) with any given configuration (capacity, data width and frequency). Besides these better-than-worst-case current measures, we also propose a generic post-manufacturing power and performance characterization methodology for DRAMs that can help identify realistic current estimates and an optimized set of timing measures for a given DRAM device, thereby further improving the accuracy of the power and energy estimates for that particular DRAM device. To optimize DRAM power consumption, we propose a set of performance-neutral DRAM power-down strategies coupled with a power management policy that, for any given use-case (access granularity, page policy and memory type), achieves significant power savings without impacting its worst-case performance (bandwidth and latency) guarantees. We verify the pessimism in the DRAM currents and four critical DRAM timing parameters as provided in the datasheets by experimentally evaluating 48 DDR3 devices of the same configuration. We further derive an optimal set of timings, using the performance characterization algorithm, at which the DRAM can operate successfully under worst-case run-time conditions without increasing its energy consumption. We observed up to 33.3% and 25.9% reduction in DRAM read and write latencies, respectively, and 17.7% and 15.4% improvement in energy efficiency. We validate the DRAMPower model against a circuit-level DRAM power model and verify it against real power measurements from hardware for different DRAM operations. We observed a 1-8% difference in power estimates, with an average of 97% accuracy. 
We also evaluated the power-management policy and power-down strategies and observed significant energy savings (close to the theoretical optimum) at a very marginal average-case performance penalty, without impacting any of the original latency and bandwidth guarantees.","DRAM; power; energy; estimation; optimization; modeling; variation","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","","","" "uuid:3beba71b-7e19-4277-bdd7-752c43f867af","http://resolver.tudelft.nl/uuid:3beba71b-7e19-4277-bdd7-752c43f867af","Cost optimal river dike design using probabilistic methods","Bischiniotis, K.; Kanning, W.; Jonkman, S.N.","","2014","This research focuses on the optimization of river dikes using probabilistic methods. Its aim is to develop a generic method that automatically estimates the failure probabilities of many river dike cross-sections and returns the one with the least cost, taking into account the boundary conditions and the requirements set by the user. Even though a dike may fail in many ways, the literature study showed that the failure mechanisms that contribute most to the failure of typical Dutch river dikes are overflowing, piping and inner slope stability. Based on these, the most important design variables of the dike cross-section dimensions are set and, following probabilistic design methods, the probability of failure of many different dike cross-sections is estimated, taking into account the abovementioned failure mechanisms. Different cross-section configurations may all comply with a set target probability of failure. Of these, the cross-section that results in the lowest cost is considered optimal. This approach is applied to several representative dikes, each of which gives a different optimal design, depending on the local boundary conditions. 
The method shows that the use of probabilistic optimization gives more cost-efficient designs than traditional partial safety factor designs.","river dike; optimization; probabilistic design; cross-section; failure probability","en","conference paper","Brazilian Water Resources Association and Acquacon Consultoria.","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","","" "uuid:9dff055c-eb6d-4005-a052-fce8aaeea792","http://resolver.tudelft.nl/uuid:9dff055c-eb6d-4005-a052-fce8aaeea792","Numerical Methods for the Optimization of Nonlinear Residual-Based Subgrid-Scale Models Using the Variational Germano Identity","Maher, G.D.; Hulshoff, S.J.","","2014","The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations and obtain values for the parameters of subgrid-scale models that are nonlinear in their coefficients. Specifically, the Newton-Raphson method is employed. A least-squares minimization formulation of the Germano Identity is developed to resolve issues that occur when the residual is positive and negative over different regions of the domain. In this case a Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is used to solve the minimization problem. The developed method is applied to the one-dimensional unsteady forced Burgers’ equation and the two-dimensional steady Stokes’ equations. It is shown that the Newton-Raphson method and the BFGS algorithm generally solve, or minimize the residual of, the Germano relations in a relatively small number of iterations. The optimized subgrid-scale models are shown to outperform standard SGS models with respect to the L2 error. 
Additionally, the nonlinear SGS models tend to achieve lower L2 errors than the linear models.","subgrid-scale model; variational multiscale method; variational Germano identity; optimization; turbulence","en","conference paper","CIMNE","","","","","","","","Aerospace Engineering","Aerodynamics, Wind Energy & Propulsion","","","","" "uuid:4f9ed7f0-05e1-4cbc-8992-d91dc6c914d7","http://resolver.tudelft.nl/uuid:4f9ed7f0-05e1-4cbc-8992-d91dc6c914d7","Validation and Optimization of a Design Formula for Stable Geometrically Open Filter Structures","Van de Sande, S.A.H.; Uijttewaal, W.S.J.; Verheij, H.J.","","2014","Granular filters are used for protection against scour and erosion of base material. For proper functioning, it is necessary that no material be transported at the interfaces between the filter structure, the subsoil and the water flowing above the filter structure. Different types of granular filters can be distinguished; this paper focuses on stable geometrically open filter structures under current attack. Hoffmans (2012) developed a design formula for stable geometrically open filters. This paper presents the validation and an optimization of the design formula based on model tests. It is shown that the current design formula is too conservative. The proposed improvements allow for a wider range of applicability.","filter; granular filter; geometrically open filter; open filter; interface stability; bed protection; design formula; stability; optimization; ICCE 2014","en","conference paper","Coastal Engineering Research Council","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","","" "uuid:cb6544e8-02f9-403c-8540-698b7af9a185","http://resolver.tudelft.nl/uuid:cb6544e8-02f9-403c-8540-698b7af9a185","Rolling horizon predictions of bus trajectories","Oshyani, M.F.; Cats, O.","","2014","Bus travel times are subject to inherent and recurrent uncertainties. 
A real-time prediction scheme regarding how the transit system evolves will potentially facilitate more adaptive operations as well as more adaptive passenger decisions. This scheme should be tractable, sufficiently fast and reliable to be used in real-time applications. For this purpose, a heuristic hybrid scheme for departure time estimation is proposed in this study. The prediction generated by the proposed hybrid scheme consists of three travel time components: schedule, instantaneous and historical data sources. A genetic algorithm is applied in order to specify the contribution of each data source component to the prediction scheme. The proposed scheme was applied to a trunk bus line in Stockholm, Sweden. In addition, the currently deployed scheme was replicated in order to compare the performance of both schemes. The results suggest that the proposed scheme reduces the overall mean absolute error by almost 20%. Moreover, the proposed scheme provides better predictions except for very long-term predictions, where both schemes yield the same performance.","prediction; bus departure time; optimization; travel time and genetic algorithm","en","conference paper","National Technical University of Athens (NTUA)","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","","","" "uuid:650ec0d0-4613-4dae-96b1-1f685dff0e60","http://resolver.tudelft.nl/uuid:650ec0d0-4613-4dae-96b1-1f685dff0e60","Automatic Hardware Generation for Reconfigurable Architectures","Nane, R.","Bertels, K.L.M. (promotor)","2014","Reconfigurable Architectures (RA) have been gaining popularity rapidly in the last decade for two reasons. First, processor clock frequencies reached threshold values past which power dissipation becomes a very difficult problem to solve. As a consequence, alternatives were sought to keep improving the system performance. 
Second, because Field-Programmable Gate Array (FPGA) technology improved substantially (e.g., an increase in transistors per mm2), system designers were able to use FPGAs for an increasing number of (complex) applications. However, the adoption of reconfigurable devices brought with it a number of related problems, of which the complexity of programming can be considered an important one. One approach to programming an FPGA is to implement automatically generated Hardware Description Language (HDL) code from a High-Level Language (HLL) specification. This is called High-Level Synthesis (HLS). The availability of powerful HLS tools is critical to managing the ever-increasing complexity of emerging RA systems and to leveraging their tremendous performance potential. However, current hardware compilers are not able to generate designs that are comparable in terms of performance with manually written designs. Therefore, to reduce this performance gap, research on how to generate hardware modules efficiently is imperative. In this dissertation, we address the tool design, integration, and optimization of the DWARV 3.0 HLS compiler. Unlike previous HLS compilers, DWARV 3.0 is based on the CoSy compiler framework, which allowed us to build a highly modular and extensible compiler in which standard or custom optimizations can be easily integrated. The compiler is designed to accept a large subset of C code as input and to generate synthesizable VHDL code for unrestricted application domains. To enable third-party tool-chain integration of DWARV 3.0, we propose several IP-XACT (an XML-based standard used for tool interoperability) extensions such that hardware-dependent software can be generated and integrated automatically. 
Furthermore, we propose two new algorithms: one to optimize performance under different input area constraints, and one to leverage the benefits of both the jump and predication schemes of conventional processors, adapted for hardware execution. Finally, we performed an evaluation against state-of-the-art HLS tools. Results show that, in terms of application execution time, DWARV 3.0 performs, on average, the best among the academic compilers.","high-level synthesis; hardware; reconfigurable; architecture; compiler; survey; dwarv; HLS; optimization","en","doctoral thesis","CPI Koninklijke Wohrmann","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Computer Engineering","","","","" "uuid:d063dfb9-6ec6-4c43-b315-fb98a576498a","http://resolver.tudelft.nl/uuid:d063dfb9-6ec6-4c43-b315-fb98a576498a","Model-based Feedforward Control for Inkjet Printheads","Khalate, A.A.","Babuska, R. (promotor); Bombois, X. (promotor)","2013","In recent years, inkjet technology has emerged as a promising manufacturing tool. This technology has gained its popularity mainly due to the fact that it can handle diverse materials and that it is a non-contact, additive process. Moreover, inkjet technology offers low operational costs, easy scalability, digital control and low material waste. Thus, apart from conventional document printing, inkjet technology has been successfully applied as a micro-manufacturing tool in the areas of electronics, mechanical engineering, and life sciences. In this thesis, we investigate a piezo-based drop-on-demand (DoD) printhead, which is commonly used for industrial and commercial applications due to its ability to handle diverse materials. A typical DoD inkjet printhead consists of several ink channels in parallel. Each ink channel is provided with a piezo-actuator which, upon application of an actuation voltage pulse, generates pressure oscillations inside the ink channel. 
These pressure oscillations push the ink drop out of the nozzle. The print quality delivered by an inkjet printhead depends on the properties of the jetted drop, i.e., the drop velocity, the drop volume and the jetting direction. To meet the challenging performance requirements posed by new applications, these drop properties have to be tightly controlled. The performance of the inkjet printhead is limited by two factors. The first one is the residual pressure oscillations. The actuation pulses are designed to provide an ink drop of a specified volume and velocity under the assumption that the ink channel is in a steady state. Once the ink drop is jetted, the pressure oscillations inside the ink channel take several microseconds to decay. If the next ink drop is jetted before these residual pressure oscillations have decayed, the resulting drop properties will be different from those of the previous drop. The second limiting factor is cross-talk. The drop properties of an ink channel are affected when the neighboring channels are actuated simultaneously. Generally, drop consistency is improved by manual tuning of the piezo actuation pulse based on some physical insight or on exhaustive experimental studies on the printhead. However, these ad-hoc procedures have proved to be insufficient in dealing with the above limitations. In this thesis, a model-based control approach is proposed to improve the performance of a DoD inkjet printhead. It offers a systematic and efficient means to improve the attainable performance of a DoD inkjet printhead by reducing the effect of the residual oscillations and the cross-talk. Furthermore, the models that have been developed for this purpose can also give new insights into the operation of the printhead. In order to achieve this goal, it is required to have a fairly accurate and simple model of an inkjet printhead. 
It is not easy to obtain a good physical model of an inkjet printhead due to insufficient knowledge of the complex interactions in the printhead. Therefore, in this thesis, we have used system identification, i.e., we use experimental measurements in order to develop a model. For this purpose, it is required that the piezo-actuator is also used as a sensor. Note that the crucial aspect of the model development is to obtain a model of the inkjet system close to its operating conditions. Therefore, we have collected measurements of the piezo sensor signal during the jetting of a series of drops at a given DoD frequency. For the printhead under investigation, we found that the dynamics of the ink channel depend on the DoD frequency. This phenomenon is caused by non-linearities in the droplet formation. Consequently, we have modeled the ink channel dynamics for every DoD frequency. In this thesis, it is shown that the set of local inkjet models obtained at different DoD frequencies can be encompassed by a polytopic uncertainty on the parameters of a nominal model. Using the same identification procedure, the cross-talk can also be modeled. In order to improve the printhead performance, the actuation pulse was redesigned. The new drive pulse is designed to provide good performance for all models in the area of uncertainty by means of robust feedforward control. The pulse also respects the pulse shape constraints posed by the driving electronics (ASICs). Besides the robust actuation pulse, our approach also introduces an optimal delay between the actuation of neighboring channels to reduce the cross-talk. The current driving electronics limits the possibilities of reshaping the actuation pulse. Since it is expected that this limitation will be relaxed in the future, we have also developed a procedure to design a robust pulse without pulse shape constraints. The performance improvement achieved with this unconstrained pulse has proved to be quite limited. 
The proposed method is also useful for inkjet practitioners who do not have any insight into the inkjet dynamics. The efficacy of our approach is demonstrated by our experimental results. The proposed method was verified in practice by jetting a series of ink drops at various DoD frequencies and also by jetting a bitmap image. For the printhead under consideration, the drop consistency is improved almost fourfold with the proposed approach when compared to the conventional methods.","inkjet printhead; identification; feedforward control; robust control; optimization","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","","" "uuid:8d1abf33-74d0-4042-bae9-6e4468b7bb81","http://resolver.tudelft.nl/uuid:8d1abf33-74d0-4042-bae9-6e4468b7bb81","Averaging Level Control to Reduce Off-Spec Material in a Continuous Pharmaceutical Pilot Plant","Lakerveld, R.; Benyahia, B.; Heider, P.L.; Zhang, H.; Braatz, R.D.; Barton, P.I.","","2013","The judicious use of buffering capacity is important in the development of future continuous pharmaceutical manufacturing processes. The potential benefits of using optimal-averaging level control are investigated for tanks that provide buffering capacity in a section of a continuous pharmaceutical pilot plant involving two crystallizers, a combined filtration and washing stage and a buffer tank. A closed-loop dynamic model is utilized to represent the experimental operation, with the relevant model parameters and initial conditions estimated from experimental data that contained a significant disturbance and a change in setpoint of a concentration control loop. The performance of conventional proportional-integral (PI) level controllers is compared with that of optimal-averaging level controllers. The aim is to reduce the production of off-spec material in a tubular reactor by minimizing the variations in the outlet flow rate of its upstream buffer tank. 
The results show a distinct difference in behavior, with the optimal-averaging level controllers strongly outperforming the PI controllers. In general, the results stress the importance of dynamic process modeling for the design of future continuous pharmaceutical processes.","control; process modeling; process simulation; parameter estimation; dynamic modeling; optimization; crystallization; continuous pharmaceutical manufacturing","en","journal article","MDPI","","","","","","","","Mechanical, Maritime and Materials Engineering","Process and Energy","","","","" "uuid:f30bd41b-4b44-4459-ab68-d913fffdb8e9","http://resolver.tudelft.nl/uuid:f30bd41b-4b44-4459-ab68-d913fffdb8e9","Estimation of primaries by sparse inversion including the ghost","Verschuur, D.J.","","2013","Today, the problem of surface-related multiples, especially in shallow water, is not fully solved. Although the surface-related multiple elimination (SRME) method has proved to be successful on a large number of data cases, the adaptive subtraction involved acts as a weak link in this methodology, as primaries can be distorted due to their interference with multiples. Therefore, SRME has recently been redefined as a large-scale inversion process, called estimation of primaries by sparse inversion (EPSI). In this process the multi-dimensional primary impulse responses are considered the unknowns in a large-scale inversion process. By parameterizing these impulse responses as spikes in the space-time domain, and using a sparsity constraint in the update step, the algorithm looks for those primaries that, together with their associated multiples, explain the total input data. As the objective function in this minimization process truly goes to zero, the tendency to distort primaries is greatly reduced. An additional advantage is that imperfections in the data, such as the missing near offsets, can be included in the forward model and resolved simultaneously. 
In this paper it is demonstrated that the ghost effect can also be included in the EPSI formulation, after which a ghost-free primary estimate can be obtained, even when the ghost notch lies within the desired spectrum.","acquisition; inversion; multiples; optimization; wave equation","en","journal article","Society of Exploration Geophysicists","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","","" "uuid:5ede00e1-9101-49ea-9a2f-81b99291b110","http://resolver.tudelft.nl/uuid:5ede00e1-9101-49ea-9a2f-81b99291b110","Risk approach to land reclamation: Feasibility of a polder terminal","Lendering, K.T.; Jonkman, S.N.; Peters, D.J.","","2013","New ports are mostly constructed in low-lying coastal areas or shallow coastal waters. The quay wall and terminal yard are raised to a level well above mean sea level to assure flood safety. The resulting ‘conventional terminal’ requires large volumes of fill material, often dredged from the sea, which is costly. The terminal yard of a ‘polder terminal’ lies below the outside water level and is surrounded by a quay wall flood defense structure. This saves large amounts of reclamation cost but introduces a higher damage potential during flooding and thus an increased flood risk. A risk-based framework is developed to determine the optimal quay wall and polder level, which is an optimization (cost-benefit analysis) over two variables. Overtopping proves to be the dominant failure mechanism for flooding. 
The reclamation savings prove to be larger than the increased flood risk, demonstrating that the polder terminal could be an attractive alternative to the conventional terminal.","container terminals; flood risks; optimization; polder terminals; probabilistic design","en","conference paper","CRC Press/Balkema - Taylor & Francis Group","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","","" "uuid:6bf9ad22-c4a5-4f5f-8006-fce525935f04","http://resolver.tudelft.nl/uuid:6bf9ad22-c4a5-4f5f-8006-fce525935f04","Cloud-Based Design Analysis and Optimization Framework","Mueller, V.; Strobbe, T.","","2013","Integration of analysis into early design phases in support of improved building performance has become increasingly important. It is considered a required response to demands on contemporary building design to meet environmental concerns. The goal is to assist designers in their decision making throughout the design of a building, but with growing focus on the earlier phases in design, during which design changes consume less effort than similar changes would in later design phases or during construction and occupation. Multi-disciplinary optimization has the potential of providing design teams with information about the potential trade-offs between various goals, some of which may be in conflict with each other. A commonly used class of optimization algorithms is the class of genetic algorithms, which mimic the evolutionary process. 
For effective parallelization of the cascading processes occurring in the application of genetic algorithms in multi-disciplinary optimization, we propose a cloud implementation and describe its architecture, designed to handle the cascading tasks as efficiently as possible.","cloud computing; design analysis; optimization; generative design; building performance","en","conference paper","","","","","","","","","","","","","","" "uuid:7d81abad-fcbe-4094-871a-54755ee0f03e","http://resolver.tudelft.nl/uuid:7d81abad-fcbe-4094-871a-54755ee0f03e","Packing Optimization for Digital Fabrication","Dritsas, S.; Kalvo, R.; Sevtsuk, A.","","2013","We present a design-computation method of design-to-production automation and optimization in digital fabrication; an algorithmic process minimizing material use, reducing fabrication time and improving production costs of complex architectural form. Our system compacts structural elements of variable dimensions within fixed-size sheets of stock material, revisiting a classical challenge known as the two-dimensional bin-packing problem. We demonstrate improvements in performance using our heuristic metric, an approach with potential for a wider range of architectural and engineering design-build digital fabrication applications, and discuss the challenges of constructing free-form design efficiently using operational research methodologies.","design computation; digital fabrication; automation; optimization","en","conference paper","","","","","","","","","","","","","","" "uuid:76b9b6db-926c-479e-9031-ed4abf2324df","http://resolver.tudelft.nl/uuid:76b9b6db-926c-479e-9031-ed4abf2324df","A Computational Method for Integrating Parametric Origami Design and Acoustic Engineering","Takenaka, T.; Okabe, A.","","2013","This paper proposes a computational form-finding method for integrating parametric origami design and acoustic engineering to find the best geometric form of a concert hall. 
The paper describes an application of this method to a concert hall design project in Japan. The method consists of three interactive subprograms: a parametric origami program, an acoustic simulation program, and an optimization program. The advantages of the proposed method are as follows. First, it is easy to visualize engineering results obtained from the acoustic simulation program. Second, it can deal with acoustic parameters as one of the primary design materials as well as origami parameters and design intentions. Third, it provides a final optimized geometric form satisfying both architectural design and acoustic conditions. The method is valuable for generating new possibilities of architectural form by shifting from a traditional form-making process to a form-finding process.","interactive design method; parametric origami; acoustic simulation; optimization; quadrat count method","en","conference paper","","","","","","","","","","","","","","" "uuid:241873a0-ad14-43f8-a135-e2c133622c2f","http://resolver.tudelft.nl/uuid:241873a0-ad14-43f8-a135-e2c133622c2f","Biological Computation for Digital Design and Fabrication: A biologically-informed finite element approach to structural performance and material optimization of robotically deposited fibre structures","Oxman, N.; Laucks, J.; Kayser, M.; Uribe, C.D.G.; Duro-Royo, J.","","2013","The formation of non-woven fibre structures generated by the Bombyx mori silkworm is explored as a computational approach for shape and material optimization. Biological case studies are presented and a design approach for the use of silkworms as entities that can compute fibrous material organization is given in the context of an architectural design installation. We demonstrate that in the absence of vertical axes the silkworm can spin flat silk patches of variable shape and density. We present experiments suggesting sufficient correlation between topographical surface features, spinning geometry and fibre density. 
The research represents a scalable approach for optimization-driven fibre-based structural design and suggests a biology-driven strategy for material computation.","biologically computed digital fabrication; robotic fabrication; finite element analysis; optimization; CNC weaving","en","conference paper","","","","","","","","","","","","","","" "uuid:38379080-da96-4acd-a86d-f3b8f492dd1b","http://resolver.tudelft.nl/uuid:38379080-da96-4acd-a86d-f3b8f492dd1b","Algorithmic Engineering in Public Space","Hulin, J.; Pavlicek, J.","","2013","The paper reflects on a relationship between an algorithmic and a standard (intuitive) approach to design of public space. A realized project of a plaza renovation in the Czech town of Vsetin is described as a case study. The paper offers an overview of benefits and drawbacks of the algorithmic approach in the described case study and it outlines more general conclusions.","algorithm; public space; circle packing; optimization; pavement","en","conference paper","","","","","","","","","","","","","","" "uuid:25459ba0-fe3a-444c-847a-34ad5c41ab9f","http://resolver.tudelft.nl/uuid:25459ba0-fe3a-444c-847a-34ad5c41ab9f","Integrating Computational and Building Performance Simulation Techniques for Optimized Facade Designs","Gadelhak, M.","","2013","This paper investigates the integration of Building Performance Simulation (BPS) and optimization tools to provide high performance solutions. An office room in Cairo, Egypt was chosen as a base testing case, where a Genetic Algorithm (GA) was used for optimizing the annual daylighting performance of two parametrically modeled daylighting systems. In the first case, a combination of a redirecting system (light shelf) and a shading system (solar screen) was studied. In the second, a free-form gills surface was optimized to provide acceptable daylighting performance. 
Results highlight the promising future of using computational techniques along with simulation tools, and provide a methodology for integrating optimization and performance simulation techniques at early design stages.","High performance facade; daylighting simulation; optimization; form finding; genetic algorithm","en","conference paper","","","","","","","","","","","","","","" "uuid:3bfab3e0-d826-44c5-81da-f06c33ee0299","http://resolver.tudelft.nl/uuid:3bfab3e0-d826-44c5-81da-f06c33ee0299","A Case Study in Teaching Construction of Building Design Spaces","Nicknam, M.; Bernal, M.; Haymaker, J.","","2013","Until recently, design teams were constrained by tools and schedule to only be able to generate a few alternatives, and analyze these from just a few perspectives. The rapid emergence of performance-based design, analysis, and optimization tools gives design teams the ability to construct and analyze far larger design spaces more quickly. This creates new opportunities and challenges in the ways we teach and design. Students and professionals now need to learn to formulate and execute design spaces in efficient and effective ways. This paper describes the curriculum of course 8803, Multidisciplinary Analysis and Optimization, taught by the authors at the Schools of Architecture and Building Construction at Georgia Tech in spring 2013. We approach design as a multidisciplinary design space formulation and search process that seeks maximum value. To explore design spaces, student designers need to execute several iterative processes: formulating the problem, generating alternatives, analyzing them, visualizing the trade space, and making decisions. 
The paper first describes students' design-space exploration experiences, and concludes with our observations of the current challenges and opportunities.","design space exploration; teaching; multidisciplinary; optimization; analysis","en","conference paper","","","","","","","","","","","","","","" "uuid:1d9c4022-dbd6-4452-9842-4649c1fdd432","http://resolver.tudelft.nl/uuid:1d9c4022-dbd6-4452-9842-4649c1fdd432","A Freight Transport Model for Integrated Network, Service, and Policy Design","Zhang, M.","Tavasszy, L.A. (promotor)","2013","“The goal of the European Transport Policy is to establish a sustainable transport system that meets society’s economic, social and environmental needs” (CEC, 2009). This statement indicates the challenges that European transport policy makers are faced with when facilitating an increasing freight transport demand with limited transport infrastructures. The development of an interconnected intermodal transport system has been recognized by the European Commission as an important, strategic task that will contribute to solving the dilemma between the accommodation of an increased freight flow and the need for a sustainable living environment. This thesis focuses on model-based, quantitative analysis for infrastructure network design decisions for large-scale intermodal transport systems. The involvement of public concerns, as represented by the governmental objectives on sustainability, brings additional complexity into infrastructure network design. Governments are often concerned with network design on a regional scale or a national scale. The enlargement of the network scale to an international level further increases the level of heterogeneity of the network, among other factors in terms of the number of actors involved, the diversity of transport demand and the variety of transport service supply. These new objectives and dimensions pose new challenges to freight transport infrastructure network design. 
This thesis proposes a new model to support policy making for an intermodal freight transport network. The model is able to simultaneously incorporate large-scale, multimodal, multi-commodity and multi-actor perspectives. It can be used for integrated policy, infrastructure and service design. Results can be visualized per transport mode and per commodity value group on a geographic information system at segment level, terminal level, corridor level, regional level, national level, and network level. Implementation of the model for a realistic-scale network design is another contribution of this thesis. To this end, we calibrated the model by using two approaches: a Genetic Algorithm based method and a feedback-based method. The model was validated by comparing the modelled link flows with observations, testing the cross elasticities of the costs to demand and comparing the catchment area of the terminals with areas observed in practice. The calibration results indicate that the model adequately captures the network usage decisions on an aggregated level. The model was applied to Dutch container transport network design problems. Databases of Dutch container transport demand, features of the European multimodal freight transport infrastructure network, information about selected inland waterway transport services, and information about transport and transhipment costs, emissions and external costs were embedded in the model. After completing the theoretical and empirical specification, the model was applied to policy decisions on Dutch container transport. The thesis extensively discusses the integrated infrastructure, service, and policy design that may contribute to managing the costs of the freight flows while ensuring a sustainable living environment. The main findings from the application are as follows. - A higher CO2 price can result in lower total transport costs, despite extra handling costs in intermodal transhipments. 
The costs saved by bundling freight and using intermodal transport can compensate for the additional handling costs. As these savings cannot compensate for the internalized CO2 emission costs, the total operational costs borne by transport operators will increase. - Network efficiency can be increased by closing terminals that are not able to attract sufficient volumes of demand. However, this is unlikely to happen in practice, because the private terminal operators and the local governments have local interests to protect on those small terminals, which may conflict with the objective of minimizing total network costs. - The hub-network services assumed and tested in this study cannot compete with road transport or shuttle barge transport services in the base scenario, due to the extra transhipment costs, low load factor, and low demand for IWW container transport. In a future scenario, these services are only feasible under very high traffic growth. - There is not one single optimal future infrastructure network. Instead, a good infrastructure network design mainly depends on the future demand, transport price, and development of new transport technology. Based on the conclusions drawn in this thesis, implementing the combination of CO2 pricing and terminal network configuration is more effective than solely implementing CO2 pricing with regard to total network CO2 emissions. A range of efficient networks, forming a frontier of minimal total network costs and total network CO2 emissions, is presented in the thesis, instead of one single optimal solution. The frontier provides more options in terminal network optimization in terms of the target network performance. Which network is optimal will depend on the relative value placed on CO2 emissions. The thesis ends with a vision on future freight transport network design models. A potential research direction is to incorporate the dimension of time into the model. 
This extension will enable the model to capture dynamic demand; to be applicable for scheduling synchronized intermodal transport services; to provide more realistic estimations of transport emissions; and to analyse network reliability, including network robustness and service robustness. Reference: CEC (2009) 'Communication from the Commission: A sustainable future for transport: Towards an integrated, technology-led and user-friendly system', Commission of the European Communities, Brussels.","freight; transport; network design; optimization; GIS; service network; transport policy","en","doctoral thesis","TRAIL Research School","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","","","" "uuid:0feb1f50-32ae-4e54-87ea-3b551497389e","http://resolver.tudelft.nl/uuid:0feb1f50-32ae-4e54-87ea-3b551497389e","Risk based design of land reclamation and the feasibility of the polder terminal","Lendering, K.; Jonkman, S.N.; Peters, D.J.","","2013","New ports are mostly constructed on low-lying coastal areas or in shallow coastal waters. The quay wall and terminal yard are raised to a level well above mean sea level to assure flood safety. The resulting ‘conventional terminal’ requires large volumes of good quality fill material, often dredged from the sea, which is costly. The alternative concept of a ‘polder terminal’ has a terminal yard which lies below the outside water level and is surrounded by a quay wall flood defence structure. This saves large amounts of reclamation investment but introduces a higher damage potential in case of flooding and a corresponding flood risk. Important conditions for the feasibility of a polder terminal are low-pervious subsoil and high reclamation cost. Further, a polder terminal requires a water storage and drainage system, at additional cost. A risk-based analysis of the optimal quay wall height and polder level is performed, which is an optimization (cost benefit analysis) under two variables. 
Overtopping proves to be the dominant failure mechanism for flooding. During overtopping the water depth in the polder terminal is larger than on the conventional terminal, resulting in a higher damage potential and corresponding flood risk for the polder terminal. However, the reclamation savings prove to be larger than the increased flood risk: the ‘polder terminal’ could save 10 to 30% of the total cost (investment and risk), demonstrating it to be an economically attractive alternative to a conventional terminal.","container terminals; flood risks; optimization; polder terminals; probabilistic design","en","conference paper","Institute for Research and Community Service","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","","" "uuid:56a64800-0dde-42fd-a2f1-05ed7c357b0b","http://resolver.tudelft.nl/uuid:56a64800-0dde-42fd-a2f1-05ed7c357b0b","An Optimization Model for Simultaneous Periodic Timetable Generation and Stability Analysis","Sparing, D.; Goverde, R.M.P.; Hansen, I.A.","","2013","We present an optimization model which is able to generate feasible periodic timetables for networks given the line structure and the requested line frequencies, taking into account infrastructure constraints and train overtake locations. As the model uses the minimum cycle time as the objective function, the stability of the timetable is also simultaneously expressed. Dimension reduction techniques are presented taking advantage of the symmetries of periodic timetables. 
The model is applied to a case study of a dense corridor with heterogeneous traffic.","timetable design; timetable stability; optimization","en","conference paper","International Association of Railway Operations Research (IAROR)","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","","","" "uuid:3e2cb6d7-3ba2-4b45-af71-2fa106b5d189","http://resolver.tudelft.nl/uuid:3e2cb6d7-3ba2-4b45-af71-2fa106b5d189","Optimal Usage of Multiple Energy Carriers in Residential Systems: Unit Scheduling and Power Control","Ramirez-Elizondo, L.M.","Van der Sluis, L. (promotor)","2013","The world’s increasing energy demand and growing environmental concerns have motivated scientists to develop new technologies and methods to make better use of the remaining resources of our planet. The main objective of this dissertation is to develop a scheduling and control tool at the district level for small-scale systems with multiple energy carriers and to apply exergy-related concepts for the optimization of these systems. The tool is based on the energy hub approach and provides insights and techniques that can be used to evaluate new district energy scenarios. 
The topics that are presented include the multicarrier unit commitment framework, the multi-carrier exergy hub approach, a hierarchical multi-carrier control architecture, a comparison of multi-carrier power applications and the implementation of a multi-carrier energy management system in a real infrastructure.","optimization; multiple energy-carriers; renewables; sustainable energy","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Electrical Sustainable Energy","","","","" "uuid:fcc290f8-cf60-44a4-be68-189f29a2fb82","http://resolver.tudelft.nl/uuid:fcc290f8-cf60-44a4-be68-189f29a2fb82","Estimates of extremes in the best of all possible worlds","Van Nooyen, R.R.P.; Kolechkina, A.G.","","2012","In applied hydrology the question of the probability of exceeding a certain value occurs regularly. Often it is in a context where extrapolation from a relatively short time series is needed. It is well known that in its simplest form extreme value theory applies to independent identically distributed random variables. It is also well known that more advanced theory allows for some degrees of correlation and that techniques for coping with trends are available. However, the problem of extrapolation remains. 
To isolate the effect of extrapolation, we generate synthetic time series of length 20, 50 and 100 from known distributions to derive empirical distributions for the 1:100 and 1:1000 exceedance.","extremes; estimators; optimization; statistical distributions","en","conference paper","STAHY","","","","","","","","Civil Engineering and Geosciences","Water Management","","","","" "uuid:93af1749-0b97-416a-ba27-907ae4921a7f","http://resolver.tudelft.nl/uuid:93af1749-0b97-416a-ba27-907ae4921a7f","Using particle packing technology for sustainable concrete mixture design","Fennis, S.A.A.M.; Walraven, J.C.","","2012","The annual production of Portland cement, estimated at 3.4 billion tons in 2011, is responsible for about 7% of the total worldwide CO2-emission. To reduce this environmental impact it is important to use innovative technologies for the design of concrete structures and mixtures. In this paper, it is shown how particle packing technology can be used to reduce the amount of cement in concrete by concrete mixture optimization, resulting in more sustainable concrete. First, three different methods to determine the particle distribution of a mixture are presented: optimization curves, particle packing models and discrete element modelling. The advantage of using analytical particle packing models is presented based on relations between packing density, water demand and strength. Experiments on ecological concrete demonstrate how effectively particle packing technology can be used to reduce the cement content in concrete. Three concrete mixtures with low cement content were developed and the compressive strength, tensile strength, modulus of elasticity, shrinkage, creep and electrical resistance were determined. 
By using particle packing technology in concrete mixture optimization, it is possible to design concrete in which the cement content is reduced by more than 50% and the CO2-emission of concrete is reduced by 25%.","aggregate; cement spacing; concrete; flowability; particle packing; optimization","en","journal article","Heron","","","","","","","","Civil Engineering and Geosciences","Structural Engineering","","","","" "uuid:3dacc24d-cf41-4c13-8e1e-10f11a1b6f23","http://resolver.tudelft.nl/uuid:3dacc24d-cf41-4c13-8e1e-10f11a1b6f23","Sequential robust optimization of a V-bending process using numerical simulations","Wiebenga, J.H.; Van den Boorgaard, A.H.; Klaseboer, G.","","2012","The coupling of finite element simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a generally applicable strategy for modeling and efficiently solving robust optimization problems based on time consuming simulations. Noise variables and their effect on the responses are taken into account explicitly. The robust optimization strategy consists of four main stages: modeling, sensitivity analysis, robust optimization and sequential robust optimization. Use is made of a metamodel-based optimization approach to couple the computationally expensive finite element simulations with the robust optimization procedure. The initial metamodel approximation will only serve to find a first estimate of the robust optimum. Sequential optimization steps are subsequently applied to efficiently increase the accuracy of the response prediction at regions of interest containing the optimal robust design. 
The applicability of the proposed robust optimization strategy is demonstrated by the sequential robust optimization of an analytical test function and an industrial V-bending process. For the industrial application, several production trial runs have been performed to investigate and validate the robustness of the production process. For both applications, it is shown that the robust optimization strategy accounts for the effect of different sources of uncertainty on the process responses in a very efficient manner. Moreover, application of the methodology to the industrial V-bending process results in valuable process insights and an improved robust process design.","metal forming processes; finite element method; optimization; uncertainty; robustness; sequential optimization","en","journal article","Springer-Verlag","","","","","","","","Mechanical, Maritime and Materials Engineering","Materials Innovation Institute","","","","" "uuid:aa419ba5-3d31-4d73-adf3-c79870deccc7","http://resolver.tudelft.nl/uuid:aa419ba5-3d31-4d73-adf3-c79870deccc7","Optimal Adaptive Policymaking under Deep Uncertainty? Yes we can!","Hamarat, C.; Kwakkel, J.H.; Pruyt, E.","","2012","Uncertainty manifests itself in almost every aspect of decision making. Adaptive and flexible policy design becomes crucial under uncertainty. An adaptive policy is designed to be flexible and can be adapted over time to changing circumstances and unforeseeable surprises. A crucial part of an adaptive policy is the monitoring system and associated pre-specified actions to be taken in response to how the future unfolds. However, the adaptive policymaking literature remains silent on how to design this monitoring system and how to specify appropriate values that will trigger the pre-specified responses. These trigger values have to be chosen such that the resulting adaptive plan is robust and flexible to surprises in the future. Actions should be triggered neither too early nor too late. 
One possible family of techniques for specifying triggers is optimization. Trigger values would then be the values that maximize the extent of goal achievement across a large ensemble of scenarios. This ensemble of scenarios is generated using Exploratory Modeling and Analysis. In this paper, we show how optimization can be useful for the specification of trigger values. A Genetic Algorithm is used because of its flexibility and efficiency in complex and irregular solution spaces. The proposed approach is illustrated for the transition of the energy system towards more sustainable functioning, which requires effective dynamic adaptive policy design. The main aim of this paper is to show the contribution of optimization to adaptive policy design.","adaptive policymaking; exploratory modeling and analysis; optimization","en","conference paper","","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","","" "uuid:a53f5bbd-2640-41cb-982d-b05a6fff9166","http://resolver.tudelft.nl/uuid:a53f5bbd-2640-41cb-982d-b05a6fff9166","Manifold mapping optimization with or without true gradients","Delinchant, B.; Lahaye, D.; Wurtz, F.; Coulomb, J.L.","","2012","This paper deals with Space Mapping optimization algorithms in general and with the Manifold Mapping technique in particular. The idea of such algorithms is to optimize a model with a minimum number of objective function evaluations by using a less accurate but faster model. In this optimization procedure, fine and coarse models interact at each iteration, adjusting themselves in order to converge to the real optimum. The Manifold Mapping technique mathematically guarantees this convergence but requires gradients of both the fine and the coarse model. Approximated gradients can be used in some cases but are subject to divergence. True gradients can be obtained for many numerical models using adjoint techniques, symbolic or automatic differentiation. 
In this context, we have tested several Manifold Mapping variants and compared their convergence in the case of a real magnetic device optimization.","space mapping; manifold mapping; optimization; surrogate model; gradients; symbolic derivation; automatic differentiation","en","report","Delft University of Technology, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft Institute of Applied Mathematics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","","" "uuid:9a018e13-f29e-4597-8870-6f8ab2fa9787","http://resolver.tudelft.nl/uuid:9a018e13-f29e-4597-8870-6f8ab2fa9787","Multi-Objective Optimization for Urban Drainage Rehabilitation","Barreto Cordero, W.J.","Price, R.K. (promotor); Solomatine, D.P. (promotor)","2012","Flooding in urbanized areas has become a very important issue around the world. The level of service (or performance) of urban drainage systems (UDS) degrades in time for a number of reasons. In order to maintain an acceptable performance of UDS, early rehabilitation plans must be developed and implemented. In developing countries the situation is serious: little investment is made and there are smaller funds each year for rehabilitation. The allocation of such funds must be “optimal” in providing value for money. However, this task is not easy to achieve due to the multicriteria nature of the rehabilitation process, which must take into account technical, environmental and social interests. Most of the time these are conflicting, which makes it a highly demanding task. The present book introduces a framework to deal with multicriteria decision making for the rehabilitation of urban drainage systems, and focuses on several aspects such as the improvement of the performance of the multicriteria optimization through the inclusion of new features in the algorithms and the proper selection of performance criteria. 
The use of Genetic Algorithms, parallelization and application in countries like Brazil, Colombia and Venezuela are treated in this book.","multi-objective; urban drainage; optimization; parallel computing; genetic algorithms","en","doctoral thesis","CRC Press/Balkema","","","","","","","","Civil Engineering and Geosciences","Water Management","","","","" "uuid:b4aee571-0489-42ff-ab55-d74e980f724a","http://resolver.tudelft.nl/uuid:b4aee571-0489-42ff-ab55-d74e980f724a","Shape Parameterization in Aircraft Design: A Novel Method, Based on B-Splines","Straathof, M.H.","Van Tooren, M.J.L. (promotor)","2012","This thesis introduces a new parameterization technique based on the Class-Shape-Transformation (CST) method. The new technique consists of an extension to the CST method in the form of a refinement function based on B-splines. This Class-Shape-Refinement-Transformation (CSRT) method has the same advantages as the original CST method, while also allowing for local deformations in a shape. A number of test cases were performed using two different design frameworks with low and high fidelity. The low fidelity framework was based on a commercial panel method code and coupled to various optimization algorithms. The high fidelity framework used an in-house Euler code and employed adjoint optimization.","shape; parameterization; aircraft; design; B-splines; Class-Shape-Refinement-Transformation; adjoint; euler; optimization","en","doctoral thesis","","","","","","","","2012-02-03","Aerospace Engineering","FPP","","","","" "uuid:65db30d9-206c-4661-abd2-c645482a8e2d","http://resolver.tudelft.nl/uuid:65db30d9-206c-4661-abd2-c645482a8e2d","Binaural Model-Based Speech Intelligibility Enhancement and Assessment in Hearing Aids","Schlesinger, A.","Gisolf, D. (promotor); Boone, M.M. (promotor)","2012","The enhancement of speech intelligibility in noise is still the main subject in hearing aid research. 
Based on the advanced results obtained with the hearing glasses, in the present research the speech intelligibility is further improved by the application of binaural post-filters. The functionalities of these filters are related to the principles of auditory scene analysis. A statistical analysis of binaural cues in noise at the output of different hearing aids, the utilization of a Bayesian classifier in the source separation process and an evolutionary optimization against binaural models of speech intelligibility provide a comprehensive understanding of the utilization of binaural post-filters in adverse environments. As listening ease and a fair degree of speech quality are mandatory in speech enhancement, trade-offs between speech intelligibility and quality were studied in terms of the preservation of natural binaural cues and the suppression of musical noise.","CASA; STI; SII; binaural; genetic algorithm; optimization; Bayesian classification","en","doctoral thesis","TU Delft","","","","","","","2011-12-23","Applied Sciences","Imaging Science and Technology","","","","" "uuid:dfaae28f-c2dd-4bdc-82d6-a1c1aa98fa26","http://resolver.tudelft.nl/uuid:dfaae28f-c2dd-4bdc-82d6-a1c1aa98fa26","Predicting Storm Surges: Chaos, Computational Intelligence, Data Assimilation, Ensembles","Siek, M.B.L.A.","Solomatine, D.P. (promotor)","2011","Accurate predictions of storm surge are of importance in many coastal areas. This book focuses on data-driven modelling using methods of nonlinear dynamics and chaos theory for predicting storm surges. A number of new enhancements are presented: phase space dimensionality reduction, incomplete time series, phase error correction, finding true neighbours, optimization of the chaotic model, data assimilation and multi-model ensembles. These were tested on case studies in the North Sea and Caribbean Sea. 
Chaotic models appear to be accurate and reliable short- and mid-term predictors of storm surges, aimed at supporting decision-makers in flood prediction and ship navigation.","ocean wave prediction; nonlinear dynamics and chaos theory; neural networks; optimization; dimensionality reduction; phase error correction; incomplete time series; multi-model ensemble prediction; data-driven modelling; computational intelligence; hydroinformatics","en","doctoral thesis","CRC Press/Balkema","","","","","","","","Civil Engineering and Geosciences","Water Management","","","","" "uuid:e8f7fdb9-d209-45be-9e03-13da46e386bc","http://resolver.tudelft.nl/uuid:e8f7fdb9-d209-45be-9e03-13da46e386bc","Event-based progression detection strategies using scanning laser polarimetry images of the human retina","Vermeer, K.A.; Lo, B.; Zhou, Q.; Vos, F.M.; Vossepoel, A.M.; Lemij, H.G.","","2011","Monitoring glaucoma patients and ensuring optimal treatment requires accurate and precise detection of progression. Many glaucomatous progression detection strategies may be formulated for Scanning Laser Polarimetry (SLP) data of the local nerve fiber thickness. In this paper, several strategies, all based on repeated GDx VCC SLP measurements, are tested to identify the optimal one for clinical use. The parameters of the methods were adapted to yield a set specificity of 97.5% on real image series. For a fixed sensitivity of 90%, the minimally detectable loss was subsequently determined for both localized and diffuse loss. Due to the large size of the required data set, a previously described simulation method was used for assessing the minimally detectable loss. The optimal strategy was identified and was based on two baseline visits and two follow-up visits, requiring two-out-of-four positive tests. 
Its associated minimally detectable loss was 5–12 µm, depending on the reproducibility of the measurements.","progression detection; simulation; glaucoma; polarimetry; optimization; image processing","en","journal article","Elsevier","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","","" "uuid:be0f5746-ff05-42a3-805a-f4a72fef4cc6","http://resolver.tudelft.nl/uuid:be0f5746-ff05-42a3-805a-f4a72fef4cc6","Applying the shuffled frog-leaping algorithm to improve scheduling of construction projects with activity splitting allowed","Tavakolan, M.T.; Ashuri, B.; Chiara, N.","","2011","In a situation where contractors compete to finish a given project with the least duration and cost, the ability to improve project quality properties seems essential for project managers. Evolutionary Algorithms (EAs) have been applied as suitable algorithms for multi-objective Time-Cost trade-off Optimization (TCO) and Time-Cost-Resource Optimization (TCRO) over the past few decades; however, improving on EAs, the Shuffled Frog Leaping Algorithm (SFLA) has been introduced as an algorithm capable of achieving better solutions with faster convergence. Furthermore, allowing activities to be split during execution makes the models closer approximations of real projects. One example has been used to demonstrate the impact of SFLA and splitting on the results of the model and to compare with previous algorithms. The current research shows that SFLA improves the final results and that splitting allows the model to find suitable solutions.","optimization; multi-objective SFLA; splitting; leveling; construction management","en","conference paper","","","","","","","","","","","","","","" "uuid:8d7290d3-a903-4cfe-8c12-0387b94a192e","http://resolver.tudelft.nl/uuid:8d7290d3-a903-4cfe-8c12-0387b94a192e","Information Theory for Risk-based Water System Operation","Weijs, S.V.","Van de Giesen, N.C. 
(promotor)","2011","Operational management of water resources needs predictions of the future behavior of water systems, to anticipate shortage or excess of water in a timely manner. Because the natural systems that are part of the hydrological cycle are complex, the predictions are inevitably subject to considerable uncertainty. Still, definitive decisions about e.g. hydropower reservoir releases or polder pump flows have to be made looking ahead into the uncertain future. This demands a risk-based approach, in which, ideally, all possible future events should be considered, along with the probabilities that represent the information and uncertainty available at the time of decision. The thesis deals with water, but the flows studied are mostly those of information. Like flows of water, information flows obey certain fundamental laws. These are the laws of Information Theory, which also provide guidelines for developing models, handling data, and designing statistical procedures to make predictions and decisions. The information-theoretical perspective used in the thesis leads to the conclusion that predictions should necessarily be probabilistic and should be evaluated using a relative entropy measure, of which an intuitive decomposition into three components is presented. Other chapters in the thesis deal with the use of model predictive control and stochastic dynamic programming for operational water management, the time-dynamics of information, the generation of weighted ensemble forecasts that balance uncertainty and information, and a perspective on data compression as philosophy of science. 
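The relative entropy measure mentioned in this abstract can be illustrated with a minimal sketch. The function below is a generic textbook implementation of the Kullback–Leibler divergence, not code from the thesis; for a deterministic binary observation it reduces to the familiar ignorance (log) score of the issued forecast probability.

```python
import math

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) in bits.

    p: the reference (e.g. observed) distribution, q: the forecast.
    Terms with p_i = 0 contribute nothing (0 * log 0 := 0 by convention).
    """
    if any(qi <= 0.0 < pi for pi, qi in zip(p, q)):
        raise ValueError("forecast assigns zero probability to an observed outcome")
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A binary event that occurred (p = [1, 0]) scored against a forecast
# probability of 0.8: the divergence equals -log2(0.8), the ignorance score.
score = relative_entropy([1.0, 0.0], [0.8, 0.2])
```

A forecast identical to the reference distribution scores zero, and the score grows without bound as the forecast assigns vanishing probability to what actually happens, which is one way to see why the thesis argues that predictions should be probabilistic rather than deterministic.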
Recommendations for practice and further research indicate that entropy has a bright future, not only as an ever-increasing thermodynamic measure, but also as an information-theoretical measure of uncertainty that is useful in any field where predictions and decisions have to be made in a context of complex and largely unobservable systems.","information theory; operational water management; risk; probabilistic forecasts; optimization; entropy; control; water; hydrology; water resources management","en","doctoral thesis","VSSD","","","","","","","2011-03-29","Civil Engineering and Geosciences","Watermanagement","","","","" "uuid:58f4d3c3-0a38-4640-aded-51d7bca2396e","http://resolver.tudelft.nl/uuid:58f4d3c3-0a38-4640-aded-51d7bca2396e","Analysis of near-optimal evacuation instructions","Huibregtse, O.L.; Bliemer, M.C.J.; Hoogendoorn, S.P.","","2010","In this paper, approximations of optimal evacuation instructions are analyzed. The instructions, consisting of a departure time, a destination, and a route, are for the evacuation by car of the population of a region threatened by a hazard. An optimization method presented in earlier research is applied to three different hazard scenarios, resulting in an instruction set for each scenario. These instruction sets differ because of the network degeneration caused by the different hazard scenarios. Analysis of the network occupancy during the evacuations resulting from the instruction sets shows that at least 87%, 90%, and 87% of the capacity is used in the respective scenarios for the period in which the effect of the network degeneration is relatively small. Although the results are logical, no clear patterns are perceptible in the instructions leading to this network occupancy. 
This endorses the viewpoint from the earlier paper, namely, that it is useful to apply an optimization method to create evacuation instructions instead of applying instructions set up by straightforward rules (like evacuating to the nearest destination). Furthermore, it shows the efficiency of this specific optimization method.","evacuation; instructions; optimization","en","journal article","Elsevier","","","","","","","","Civil Engineering and Geosciences","Transport and Planning","","","","" "uuid:ccc6e7f3-3b21-4f05-a0ca-df8cad6d0ca0","http://resolver.tudelft.nl/uuid:ccc6e7f3-3b21-4f05-a0ca-df8cad6d0ca0","Optimization of sandwich composites fuselages under flight loads","Yan, C.; Bergsma, O.; Koussios, S.; Zu, L.; Beukers, A.","","2010","Sandwich composite fuselages appear to be a promising choice for future aircraft because of their structural efficiency and functional integration advantages. However, the design of sandwich composites is more complex than that of other structures because of the many variables involved. In this paper, the fuselage is designed as a sandwich composite cylinder, and its structural optimization using the finite element method (FEM) is outlined to obtain the minimum weight. The constraints include structural stability and composites failure criteria. In order to obtain a verification baseline for the FEM analysis, the stability of sandwich structures is studied and the optimal design is performed based on analytical formulae. Then, the predicted buckling loads and the optimization results obtained from a FEM model are compared with those from the analytical formulas, and good agreement is achieved. A detailed parametric optimal design for the sandwich composite cylinder is conducted. The optimization method used here includes two steps: the minimization of the layer thickness followed by tailoring of the fiber orientation. The factors comprise layer number, fiber orientation, core thickness, frame dimension and spacing. 
Results show that the two-step optimization is an effective method for sandwich composites, and that the foam sandwich cylinder with a core thickness of 5 mm and a frame pitch of 0.5 m exhibits the minimum weight.","sandwich; composites; stability; optimization; ANOVA","en","journal article","Springer","","","","","","","","Aerospace Engineering","Aerospace Materials and Manufacturing","","","","" "uuid:c2a93de0-21e4-490b-a18c-09f319c2da17","http://resolver.tudelft.nl/uuid:c2a93de0-21e4-490b-a18c-09f319c2da17","Rigorous simulations of emitting and non-emitting nano-optical structures","Janssen, O.T.A.","Urbach, H.P. (promotor)","2010","In the next decade, several applications of nanotechnology will change our lives. LED lighting is about to replace the common light bulb. Its main advantages are its energy efficiency and long lifetime. LEDs could be much more efficient if the part of the emitted light that is currently trapped in the device could be radiated out of it. Other devices, such as photovoltaic solar cells and biosensors, can also be made more efficient and cheaper. LEDs, solar cells and biosensors have in common that they consist of small structures of the order of the wavelength of light. With such small structures, light can be manipulated in special ways. In this thesis, we describe a method to calculate the interaction of light with these small structures. It is shown that an efficient LED, which radiates light, can be treated as a solar cell that absorbs as much of the incoming light as possible. On this so-called reciprocity principle, discovered by Hendrik Antoon Lorentz, a very efficient computational optimization method can be based. With this method, existing designs of, for example, LEDs can be iteratively made more efficient. 
This thesis shows optimized designs of LEDs, solar cells and biosensors.","FDTD; LED; plasmonics; optimization; reciprocity; biosensors","en","doctoral thesis","Optics Research Group","","","","","","","2010-11-09","Applied Sciences","Imaging Science & Technology","","","","" "uuid:f34c2606-dbae-4182-873b-8c1a99714297","http://resolver.tudelft.nl/uuid:f34c2606-dbae-4182-873b-8c1a99714297","Interval Analysis: Contributions to static and dynamic optimization","De Weerdt, E.","Mulder, J.A. (promotor)","2010","The field of global optimization has been an active one for many years. By far the most widely applied methods are gradient-based and evolutionary algorithms. The most apparent drawback of these methods is that one cannot guarantee that the global solution is found within finite time. Moreover, if the global solution is found (by chance), the methods cannot provide guaranteed feedback to the user stating that the provided solution is the global one. Therefore, no natural stopping conditions are available for most of the existing optimization algorithms. There are, however, other tools available which do provide the guarantee that the global solution is found and which have natural stopping conditions. Interval analysis in combination with interval arithmetic is such a tool. Interval arithmetic was initially developed to cope with rounding errors in digital computers. Using interval arithmetic, one can perform reliable computing such that catastrophic numeric errors can be prevented (the explosion of the Ariane 5 rocket on June 4, 1996 was caused by a simple numeric overflow). It was soon found that interval arithmetic could be used to form guaranteed bounds on any type of function or numeric algorithm for any domain. These bounds provide the crucial information needed to perform global optimization. Interval analysis is the collective name for all methods that use the information obtained from guaranteed bounds to solve global optimization problems. 
Developed in the 1960s, interval analysis gained popularity during the 1990s when digital computers became increasingly powerful. Nowadays, interval analysis is widely applied in the field of static optimization, i.e. optimization that does not involve differential algebraic equations, and in verified integration. However, interval analysis has not often been applied in the field of dynamic optimization. The goal of the research is to investigate whether interval analysis, in combination with interval arithmetic, can be used to solve non-linear, constrained, dynamic optimization problems. Moreover, the possibility of extending existing theory in the field of static optimization is investigated. The focus of the research lies on trajectory optimization (a specific case of dynamic optimization). The most important condition on the designed solvers is that the dynamic constraints, formed by the equations of motion, must be satisfied for all time instances. To reach the research objectives, the theory and application of both interval arithmetic and interval analysis have been thoroughly investigated. The work is divided into two parts. The first part is on static optimization, which includes the discussion on interval arithmetic and describes the basics of interval analysis. The existing theory of inclusion functions, formed via interval arithmetic, has been evaluated and extended. The development of the Polynomial Inclusion Function, a new type of inclusion function, shows that significant improvements are possible in this field. During the review of interval analysis, its main virtues and limitations were demonstrated. The most important advantages are the guarantee that all optimal solutions are found to any degree of accuracy and that the user knows when the solution set has been found. The main limitation is the curse of dimensionality: the computational load grows, for most problems, exponentially with a linear increase in problem dimension. 
The author believes that this curse is mainly caused by two aspects of the current implementation of interval analysis. The first aspect is the widening of the inclusion function due to dependency effects. The dependency effects can be partially prevented by efficient implementation of function evaluations and through application of advanced inclusion functions. However, a generic, efficient method for preventing dependency effects is still not available. The other aspect causing the curse of dimensionality is the current inefficient handling of available information. The optimization algorithms within interval analysis are commonly based on branch and bound algorithms. Through a process of elimination, one is left with a list of domains in which the optimal solution set must lie. Current methods for eliminating (part of) the domain, such as the Newton step, do not use the gathered/available information efficiently. This is mainly due to the definition of the domain and the storage of the information, i.e. keeping track of infeasible regions. It is the author’s opinion that this is the reason that the application of interval analysis is limited to solving lower dimensional problems. Despite the curse of dimensionality, interval analysis based solvers can solve complicated, non-linear, constrained problems. This has been shown in multiple chapters in the first part. Complicated problems, such as neural network output optimization and the problem of integer ambiguity resolution in the field of Global Navigation Satellite Systems, are solved rigorously by interval analysis based solvers. The applications show that equality and inequality constraints are efficiently handled using interval analysis. Moreover, they show that interval analysis can be used to solve real-life problems and demonstrate that interval analysis is a strong global optimization tool. The second part of the research is on dynamic optimization, focusing on trajectory optimization. 
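The branch-and-bound elimination process described here — bound the objective over a box with an inclusion function, discard the box if its lower bound exceeds the best function value found so far, otherwise bisect — can be sketched in a few lines. This is an illustrative sketch under the natural interval extension, not code from the thesis; the objective f(x) = x² − 2x and the domain are arbitrary choices, and the quadratic deliberately mentions x twice so the enclosure exhibits the dependency effect discussed above.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def _coerce(self, v):
        return v if isinstance(v, Interval) else Interval(v, v)

    def __add__(self, o):
        o = self._coerce(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        o = self._coerce(o)
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        o = self._coerce(o)
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

    __rmul__ = __mul__

def f(x):
    # Works on floats and Intervals alike; with Interval arguments the
    # result is a guaranteed (if pessimistic) enclosure of the range.
    return x * x - 2 * x

def minimize(domain, tol=1e-4):
    """Interval branch and bound: returns surviving boxes and a verified
    upper bound on the global minimum of f over the domain."""
    best_ub = float("inf")               # best function value found so far
    survivors, work = [], [domain]
    while work:
        box = work.pop()
        if f(box).lo > best_ub:          # lower bound exceeds best value:
            continue                     # box cannot contain the minimum
        mid = 0.5 * (box.lo + box.hi)
        best_ub = min(best_ub, f(mid))   # point sample tightens the bound
        if box.hi - box.lo < tol:
            survivors.append(box)        # small enough: keep as candidate
        else:
            work.append(Interval(box.lo, mid))
            work.append(Interval(mid, box.hi))
    # prune candidates using the final, tightest upper bound
    return [b for b in survivors if f(b).lo <= best_ub], best_ub

boxes, ub = minimize(Interval(-2.0, 3.0))
```

The global minimum of x² − 2x on [−2, 3] is −1 at x = 1; the surviving boxes cluster around x = 1. Unlike a gradient descent from a single starting point, no box that could contain the minimum is ever discarded, which is the sense in which such methods are guaranteed — at the cost of the exponential growth in boxes that the abstract calls the curse of dimensionality.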
The trajectory optimization problem is infinite-dimensional, with begin- and end-point constraints, dynamic constraints (the equations of motion), and possibly additional equality and inequality constraints. The problem is infinite-dimensional since the states and controls need to be specified for each time instance. In the field of trajectory optimization one can identify two classes of methods: indirect methods and direct methods. Disregarding the optimization problems for which an analytic solution exists, both classes require a transformation to make the problem solvable. Three transformation methods have been considered: control parameterization, state parameterization, and combined control and state parameterization. With control parameterization, the control is defined for each time step using a polynomial and the states are computed using explicit integration. For state parameterization, the states are defined and the controls are deduced via the equations of motion (implicit integration). The last method applies parameterization of both the states and controls with respect to time. Trajectories are sought that satisfy the dynamic constraints at given time instances. The nature of the transformation methods implies that the first two methods can be used to find trajectories that satisfy the dynamic constraints at all time instances, while the latter cannot be used for this purpose. Therefore, only the first two methods have been thoroughly investigated; the last method was only briefly reviewed. The main conclusion regarding the control parameterization approach is that it suffers greatly from the required explicit integration. Although verified integration is possible and sharp bounds on the trajectories can be provided, the problem is to prove the existence of a solution within a given domain of the search space. 
Without the ability to update the estimate of the minimal cost function value early in the optimization process, the computational load becomes very high. Despite this drawback of control parameterization, it has been demonstrated that the approach can be used to find the global solution, although, currently, only very low dimensional problems can be solved. Higher dimensional problems can be solved using the state parameterization approach. By using simplex splines, the begin- and end-point constraints can be implicitly satisfied, which significantly reduces the problem complexity. The limitation is that the approach is only suitable for fully controllable systems. For systems that are not fully controllable, one needs to apply explicit integration for all dependent states. This would increase the computational load significantly and eliminate most of the benefits of the state parameterization approach. An interval analysis based solver has been applied to solve the problem of satellite trajectory planning for formation flying. Although still suffering from the curse of dimensionality, the results demonstrate that interval analysis can be used to solve the problem rigorously. Moreover, it has been shown that the performance of the solver is superior to gradient-based solvers when constraints are imposed. The main conclusion of the research is that it is possible to apply interval analysis to dynamic optimization. The current status of the solvers (in this thesis and in the literature) allows one to solve only ‘lower’ dimensional problems. Radical changes in the approach to handling information and keeping track of infeasible regions must be made to make interval analysis applicable to higher dimensional problems. Despite the limitations of interval analysis, the presented results clearly demonstrate the virtues of interval analysis based solvers in the field of global optimization. 
Several exciting new research opportunities have been identified, such as nonlinear stability analysis using interval analysis, the combination of interval analysis and evolutionary algorithms, and a new way of forming inclusion functions to boost the efficiency of interval analysis based solvers. Overall, the potential of interval analysis is very large, and the author believes that interval analysis will become one of the most important tools in the field of global optimization in the near future.","interval analysis; optimization; dynamic","en","doctoral thesis","","","","","","","","2010-09-14","Aerospace Engineering","Control and Simulation Division","","","","" "uuid:fdc2dbda-b419-450f-a305-64825a43a0c8","http://resolver.tudelft.nl/uuid:fdc2dbda-b419-450f-a305-64825a43a0c8","Global Optimization using Interval Analysis: Interval Optimization for Aerospace Applications","Van Kampen, E.","Mulder, J.A. (promotor)","2010","Optimization is an important element in aerospace related research. It is encountered, for example, in trajectory optimization problems such as satellite formation flying, spacecraft re-entry optimization, and airport approach and departure optimization; in control optimization, for example in adaptive control algorithms; and in system identification problems, such as online aircraft model identification or human perception modeling. The main goal of this thesis is to investigate how Interval Analysis (IA) can be used as a tool for aerospace related optimization problems, to examine its theoretical and practical limitations, and to explore the ways in which optimization algorithms can benefit from interval analysis. A subset of these goals is to improve the solutions for a number of aerospace related optimization problems. The scientific contribution of this thesis consists of the design and implementation of interval optimization algorithms for four important aerospace problems. 
The first contribution concerns finding the trim points for a nonlinear aircraft model. Trim points, defined as the combinations of control settings for which all linear and rotational accelerations on the aircraft are zero, are important for flight control system design, since they provide information about the flight envelope and stability properties of the aircraft. Unlike other trim algorithms, the interval-based method can guarantee that all trim points are found. In the second application, an interval optimization algorithm is developed for fitting pilot input/output data from an experiment in the SIMONA Research Simulator to a multi-modal human perception model. Perception models improve the understanding of how humans perceive motion and are an essential tool in the design of flight simulators. Results show that the minimum of the cost function found by the interval method is lower than the one previously found, resulting in an improved human perception model. This second application particularly demonstrates the capabilities of IA optimization as a parameter identification tool. The third contribution is an interval-based algorithm for solving the integer ambiguity problem related to Global Navigation Satellite Systems (GNSS). Phase measurements of the carrier wave of a GNSS signal are used to estimate the length and orientation of baselines between two or more antennas. This estimation procedure contains an optimization problem in which the integer number of carrier wavelengths between antennas has to be determined. The new interval method provides guarantees that correct solutions are found when the measurement noise is encapsulated by an interval number. The final contribution is an interval optimization algorithm that minimizes fuel consumption during rendezvous and docking procedures of satellites in circular orbits. 
To avoid integration of interval functions, an analytical solution to the system of differential equations that describes the relative motion of the satellites is used to generate trajectories resulting from a set of thruster pulses of varying amplitudes. The introduction of obstacles, in the form of forbidden areas in the path between the two satellites, makes the problem nonlinear, such that gradient-based optimization algorithms can fail to obtain the globally optimal solution. The interval algorithm always converges to the trajectory that avoids all obstacles and results in minimum fuel consumption. It can be concluded that IA is an excellent tool for solving nonlinear optimization problems, providing guarantees on obtaining the global minimum of the cost function.","optimization; interval analysis","en","doctoral thesis","","","","","","","","2010-09-24","Aerospace Engineering","Control and Simulation","","","","" "uuid:f272117c-e1b5-4ae6-96cb-aa86fe62a015","http://resolver.tudelft.nl/uuid:f272117c-e1b5-4ae6-96cb-aa86fe62a015","Overview of Methods for Multi-Level and/or Multi-Disciplinary Optimization","De Wit, A.J.; Van Keulen, A.","","2010","Multi-level optimization and multi-disciplinary optimization are areas of research concerned with developing efficient analysis and optimization techniques for complex systems that are made up of coupled elements (components). Within the fields of multi-level optimization and multi-disciplinary optimization, a large number of techniques have been developed for the efficient analysis and optimization of complex systems. This paper presents a unified overview of mainstream approaches found in the literature. Four general steps are distinguished in both multi-level optimization and multi-disciplinary optimization: physical coupling, optimization problem coupling, coordination and solution sequence. 
Via these four steps, approaches are classified, and possibilities for combining aspects of different methods are given. Finally, advantages and disadvantages of the approaches as applied to engineering problems are discussed and directions for further research are given.","multi-level; multi-disciplinary; optimization; decomposition; coordination; overview","en","conference paper","American Institute of Aeronautics and Astronautics (AIAA)","","","","","","","","Mechanical, Maritime and Materials Engineering","Precision and Microsystems Engineering","","","","" "uuid:319dffb8-3bbc-49de-a6c5-68d8972f3888","http://resolver.tudelft.nl/uuid:319dffb8-3bbc-49de-a6c5-68d8972f3888","A generic method to optimize instructions for the control of evacuations","Huibregtse, O.L.; Hoogendoorn, S.P.; Pel, A.J.; Bliemer, M.C.J.","","2010","A method is described to develop a set of optimal instructions for evacuating, by car, the population of a region threatened by a hazard. By giving these instructions to the evacuees, traffic conditions, and therefore the evacuation efficiency, can be optimized. The instructions, containing a departure time, a destination, and a route, are created using an optimization method based on ant colony optimization. An iterative search is performed for an approximation of the optimal evacuation instructions. The advantage of this optimization method over other optimization methods is the simultaneous optimization of the departure time, destination, and route instructions, instead of only one or two of these variables, for a dynamic rather than a static evacuation problem. In a case study, the functioning of the method is illustrated. 
In the case study, the relatively high fitness of the set of instructions produced by the optimization method, compared with the fitness of a set of instructions set up by straightforward rules (like evacuating to the nearest destination), also shows the usefulness of applying an optimization method to create a set of evacuation instructions.","evacuation; instructions; control; optimization; ant colony optimization","en","conference paper","IFAC","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:1137ebe3-3dcb-43ca-84f7-89bbbbc2d635","http://resolver.tudelft.nl/uuid:1137ebe3-3dcb-43ca-84f7-89bbbbc2d635","Efficient particle-based estimation of marginal costs in a first-order macroscopic traffic flow model","Zuurbier, F.S.; Hegyi, A.; Hoogendoorn, S.P.","","2010","Marginal costs in traffic networks are the extra costs incurred by the system as the result of extra traffic. Marginal costs are frequently required, e.g. when considering system-optimal traffic assignment or tolling problems. When explicitly considering spillback in a traffic flow model, one can use a numerical derivative or resort to heuristics to calculate the marginal costs. Numerical derivatives are computationally demanding, restricting their use to simple networks. Heuristic approaches in most cases approximate the marginal costs by considering only the extra costs on the links traveled by the extra traffic, excluding the external costs possibly incurred on other links due to spillback. This paper proposes a novel way to estimate the true marginal costs of traffic in a dynamic discrete LWR model which correctly deals with congestion onset, spillback and dissolution. The proposed methodology tracks virtual changes in density through the network by means of particles which travel along with the characteristics of traffic. By using density-based cost functions, the virtual changes in density can be directly related to the marginal costs. 
The computational efficiency of the methodology stems from the fact that only local conditions are considered when propagating the virtual change in density. The paper discusses the methodology and the necessary model extensions, provides a numerical validation experiment illustrating the exact detail of the solution by comparison to a numerical derivative, and discusses some generalizations.","optimization; dynamic traffic assignment; system optimal; LWR; marginal costs; particle","en","conference paper","IFAC","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:d8f58668-ba49-441d-bbf0-aa8c7114da4a","http://resolver.tudelft.nl/uuid:d8f58668-ba49-441d-bbf0-aa8c7114da4a","A Unified Approach towards Decomposition and Coordination for Multi-level Optimization","De Wit, A.J.","Van Keulen, A. (promotor)","2009","Complex systems, such as those encountered in aerospace engineering, can typically be considered as a hierarchy of individual coupled elements. This hierarchy is reflected in the analysis techniques that are used to analyze the physical characteristics of the system. Consequently, a hierarchy of coupled models is to be used, accounting for different physical scales, components and/or disciplines. Numerical optimization of complex systems with embedded hierarchy is accomplished via multi-level optimization methods. Multi-level optimization methods utilize the hierarchical nature of complex systems to distribute the optimization process into smaller, less complex coupled optimization problems located at the individual elements of the hierarchy. The present thesis presents a generalized approach towards decomposition and coordination for the numerical optimization of complex systems with embedded hierarchy. 
The developed methods are applied to numerically maximizing the range of a supersonic business jet via multi-level optimization, considering coupling between multiple engineering disciplines.","multi-level; multi-disciplinary; optimization; decomposition; coordination","en","doctoral thesis","","","","","","","","2009-11-30","Mechanical, Maritime and Materials Engineering","Precision and Microsystems Engineering","","","","" "uuid:25c85feb-7ef1-4752-9810-e70f49e88802","http://resolver.tudelft.nl/uuid:25c85feb-7ef1-4752-9810-e70f49e88802","On maximum field components in the focal point of a lens","Urbach, H.P.; Pereira, S.F.; Broer, D.J.","","2009","We determine field distributions in the pupil of a high NA lens that give, for a given power incident on the lens, the maximum electric field amplitude in focus in a specific direction. We consider in particular the cases of maximum longitudinal and maximum transverse components. The distribution of the maximum longitudinal component in the focal plane is narrower than that of the focused Airy spot and hence can give higher resolution in imaging.","High NA; beam shaping; optimization; longitudinal polarization","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:dc5b1158-be54-42d6-a4d3-b0a19462f507","http://resolver.tudelft.nl/uuid:dc5b1158-be54-42d6-a4d3-b0a19462f507","Robustness of networks","Wang, H.","Van Mieghem, P. (promotor)","2009","Our society depends more strongly than ever on large networks such as transportation networks, the Internet and power grids. Engineers are confronted with fundamental questions such as “how to evaluate the robustness of networks for a given service?” and “how to design a robust network?”, because networks always affect the functioning of a service. Robustness is an important issue for many complex networks, on which various dynamic processes or services take place.
In this work, we define robustness as follows: a network is more robust if the service on the network performs better, where the performance of the service is assessed when the network is either (a) in a conventional state or (b) under perturbations, e.g. failures, virus spreading, etc. In this thesis, we survey a particular line of network robustness research within our general framework: robustness quantification, optimization and the interplay between service and network. Significant progress has been made in understanding the relationship between the structural properties of networks and the performance of the dynamics or services taking place on these networks. We assume that network robustness can be quantified by a topological measure of the network. A brief overview of the topological measures is presented. Each measure may represent the robustness of a network with respect to a certain performance aspect of a service. We focus on the measure known as algebraic connectivity. Evidence collected from the literature shows that the algebraic connectivity characterizes network robustness with respect to synchronization of dynamic processes at nodes, random walks on graphs and the connectivity of a network. Moreover, we illustrate that, for a given diameter, graphs with large algebraic connectivity tend to be dense in the core and sparse at the border. Such structures distribute traffic homogeneously and are thus robust in terms of traffic engineering. How do we design a robust network with respect to the metric algebraic connectivity? First, the complete graph has the maximal algebraic connectivity, but its high link density makes it impractical to use due to the cost of constructing links. Constraints on other network features are usually set up to incorporate realistic requirements. For example, a constraint on the diameter may guarantee certain end-to-end quality of service levels such as the delay.
We propose a class of clique chain structures which optimize the algebraic connectivity and many other robust features among all graphs with diameter D and size N. The optimal graph within the class can be determined either analytically or numerically. Second, complete replacement of an existing infrastructure is expensive. Thus, we design strategies for robustness optimization using minor topological modifications. These strategies are evaluated in various classes of graphs. The robustness quantification, or equivalently, the association of the performance of a service with a topological measure, may be implicit. In this case, we explore the interplay between topology and service in determining the overall performance. Many services on communications and transportation networks are based on shortest path routing. The weight of a link, such as delay or bandwidth, is generally a metric optimized via shortest path routing. Thus, link weight tuning, a mechanism to control traffic, is also considered as part of the service. The interplay between service (shortest path routing and link weight tuning) and topology is investigated for the following performance aspects: (a) the structure of the transport overlay network, which is the union of shortest paths between all node pairs and (b) the traffic distribution in the overlay network. Important new findings are (i) the universal phase transition in overlay structures as we tune the link weight structure over different classes of networks and (ii) the power law traffic distribution in the overlay networks when link weights vary strongly in various classes of networks. Furthermore, we consider the service that measures a network topology as the union of shortest paths among a set of testboxes (nodes). The measured topology is a subgraph of the overlay network, which is again a subgraph of the actual network. The performance in terms of the sampling bias of measuring a network topology is investigated. 
Our work contributes substantially to a better understanding of the effect of the service (testbox selection) and the actual network structure on the performance with respect to sampling bias. Our investigations on the interplay between service and network again reveal the association between the performance of a service and a certain topological feature, and thus contribute to the quantification of network robustness. The multidisciplinary nature of this research lies not only in the presence of robustness issues in many complex networks, but also in the fact that advances in other disciplines such as graph theory, combinatorics, linear algebra and statistical physics are widely applied throughout the thesis to study optimization problems and the performance of large networks.","robustness; network topology; service; optimization","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Telecommunications","","","","" "uuid:c58b5999-da12-4a62-876f-95d7784edf91","http://resolver.tudelft.nl/uuid:c58b5999-da12-4a62-876f-95d7784edf91","Model-Based Control and Optimization of Large Scale Physical Systems - Challenges in Reservoir Engineering","Van den Hof, P.M.J.; Jansen, J.D.; Van Essen, G.M.; Bosgra, O.H.","","2009","Due to the urgent need to increase the efficiency of oil recovery from subsurface reservoirs, new technology has been developed that allows more detailed sensing and actuation of multiphase flow properties in oil reservoirs. One example is the controlled injection of water through injection wells with the purpose of displacing the oil in an appropriate direction. This technology enables the application of model-based optimization and control techniques to optimize production over the entire production period of a reservoir, which can be around 25 years. Large scale reservoir flow models are used for optimizing production settings, but suffer from high levels of uncertainty and limited validation options.
One of the challenges is the development of reduced complexity models that deliver accurate long-term predictions, and at the same time are not more complex than can be warranted by the amount of data that is available. In this paper an overview will be given of the problems and opportunities for model-based control and optimization in this field aiming at the development of a closed-loop reservoir management system.","petroleum; reservoir; optimization","en","conference paper","IEEE","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","","" "uuid:cb3de0cf-a506-4490-b988-f4d1bf00ae55","http://resolver.tudelft.nl/uuid:cb3de0cf-a506-4490-b988-f4d1bf00ae55","Model-based predictive control applied to multi-carrier energy systems","Arnold, M.; Negenborn, R.R.; Andersson, G.; De Schutter, B.","","2009","The optimal operation of an integrated electricity and natural gas infrastructure is investigated. The couplings between the electricity system and the gas system are modeled by so-called energy hubs, which represent the interface between the loads on the one hand and the transmission infrastructures on the other. To increase reliability and efficiency, storage devices are present in the multi-carrier energy system. In order to optimally incorporate these storage devices in the operation of the infrastructure, the capacity constraints and dynamics of these have to be taken into account explicitly. Therefore, we propose a model predictive control approach for controlling the system. This controller takes into account the present constraints and dynamics, and in addition adapts to expected changes of loads and/or energy prices. 
Simulations in which the proposed scheme is applied to a three-hub benchmark system are presented.","optimal power flow; electric power systems; model predictive control; natural gas systems; optimization","en","conference paper","IEEE","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","","" "uuid:ff8e44db-72e2-49fa-bd7f-bde923758e68","http://resolver.tudelft.nl/uuid:ff8e44db-72e2-49fa-bd7f-bde923758e68","An efficient method for reducing the sound speed induced errors in multibeam echosounder bathymetric measurements","Snellen, M.; Siemes, K.; Simons, D.G.","","2009","Nowadays extensive use is made of multibeam echosounders (MBES) for mapping the bathymetry of sea- and river-floors. The MBES is capable of covering large areas in limited time by emitting an acoustic pulse along a wide swathe perpendicular to the sailing direction. The angle and the corresponding two-way travel-time of the received signals are determined through beamsteering at reception. Water depths along the swathe can be derived from this angle and travel-time combination. In general, two sets of sound speed measurements are taken when conducting MBES measurements. The first set is used for the beamsteering and consists of the sound speeds at the MBES transducer. The second set is used for determining the propagation of the sound through the water column, needed for correctly converting the measured travel times to a depth. In general, this set of sound speed measurements consists of the complete sound speed profiles (SSPs). The quality of the sound speed measurements at the transducer position sometimes degrades, resulting in beam steering angles that differ from those aimed for. Also, the SSPs used for converting the beam travel times to depths sometimes deviate from the true prevailing SSPs due to the, in general, limited number of SSP measurements taken during a survey.
Both of the above-mentioned effects result in an erroneous bathymetry. Here, we present a method for eliminating these errors without the need for additional sound speed information.","multibeam echosounder; sound speed profile; optimization","en","conference paper","","","","","","","","","Aerospace Engineering","Remote Sensing","","","","" "uuid:fbc64a39-931e-4b40-8803-486466f20703","http://resolver.tudelft.nl/uuid:fbc64a39-931e-4b40-8803-486466f20703","The potential of inverting geo-technical and geo-acoustic sediment parameters from single-beam echo sounder returns","Simons, D.G.; Snellen, M.; Siemes, K.","","2009","Seafloor characterization is important in many fields including hydrography, marine geology, coastal engineering and habitat mapping. The advantage of non-invasive acoustic methods for sediment characterization over conventional bottom grabbing is the nearly continuous versus sparse sensing and the enormous reduction in survey time and costs. Among the various acoustic systems for seafloor characterization, the single-beam echo sounder is of particular interest due to its simplicity and versatility. Seafloor characterization algorithms can be roughly divided into two categories: model-based and empirical, where the latter simply relies on the observation that certain echo features, such as amplitude, duration and skewness of the echo, are correlated with sediment type. Here we apply the model-based approach, in which we compare the measured echo signal with theoretically modeled echo envelopes in the time domain. For modeling the received echo sounder signals, use is made of a physical backscatter model that fully accounts for water-sediment interface roughness and sediment volume scattering. We use differential evolution, a fast variant of a genetic algorithm, as the global optimization method to invert the model input parameters mean grain size, spectral strength of the interface roughness and volume scattering cross section.
In the model, grain size determines geo-acoustic parameters like sediment sound speed, density and attenuation. The analysis is applied to simulated data.","single-beam echosounder; seafloor classification; optimization","en","conference paper","","","","","","","","","Aerospace Engineering","Remote Sensing","","","","" "uuid:6c6197bd-5757-428a-9d3d-e94af148ce90","http://resolver.tudelft.nl/uuid:6c6197bd-5757-428a-9d3d-e94af148ce90","A systematic analysis of the optical merit function landscape: Towards improved optimization methods in optical design","Van Turnhout, M.","Urbach, H.P. (promotor); Bociort, F. (promotor)","2009","A major problem in optical system design is that the optical merit function landscape is usually very complicated, especially for complex design problems where many minima are present. Finding good new local minima is then a difficult task. We show however that a certain degree of order is present in the optical design space, which is best observed when we consider not only local minima, but saddle points as well. With a special method, which we call Saddle-Point Construction (SPC), saddle points can be constructed in a simple way. Via saddle points, new local minima can be obtained very rapidly. When using a local optimization method, the final design after optimization depends strongly on the starting configuration. We can group the initial configurations that lead to a given local minimum after local optimization into a graphical region, whose shape depends on the optimization method used. However, saddle points are critical points in the merit function landscape that always remain on the boundaries, independent of the optimization method used. When the local optimization process is not chaotic, the geometric decomposition of the space of initial configurations into discrete regions has boundaries given by simple curves.
But when the optimization is chaotic, the curves separating the different regions are very complicated objects termed fractals. In such cases, starting configurations that are very close to each other lead to different local minima after optimization. A better understanding of these instabilities can be obtained by using low damping values in a damped least-squares method.","optical system design; saddle point; optimization; fractal; chaos","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","","" "uuid:4f491cc5-cdc7-49b4-8b80-700dae2cf57c","http://resolver.tudelft.nl/uuid:4f491cc5-cdc7-49b4-8b80-700dae2cf57c","Validity improvement of evolutionary topology optimization: Procedure with element replaceable method","Zhu, J.; Zhang, W.; Bassir, D.H.","","2009","The aim of this paper is to enhance the validity of existing evolutionary topology optimization procedures. Because the hard-killing scheme based on element sensitivity values may lead to incorrect predictions of the inefficient elements to be removed, and the value of the objective function may deteriorate sharply during the iterations, a check position (CP) control is proposed to prevent erroneous topology designs generated by the rejection criteria of evolutionary methods. For this purpose, we introduce a sort of orthotropic cellular microstructure (OCM) element with moderate pseudodensity that acts as a compromise between a solid element and a void OCM element. In this way, all inefficient elements removed previously are automatically replaced with the moderate OCM elements depending upon the deterioration of the objective function. Erroneously removed elements are then identified in the updated finite element model through a direct sensitivity computation of the moderate OCM elements and will finally be recovered by the bi-directional element replacement.
In addition, detailed structures with checkerboard patterns are eliminated by controlling the local structural bandwidth with the so-called threshold method. Typical optimization examples of structural compliance and natural frequency that were difficult to tackle are solved by the proposed design procedure. Satisfactory numerical results are obtained.","optimization; evolutionary method; erroneous design; check position control; moderate microstructure","en","journal article","EDP sciences","","","","","","","","Aerospace Engineering","Aerospace Structures","","","","" "uuid:ff66e490-db59-4e3c-b6e2-926da4f074df","http://resolver.tudelft.nl/uuid:ff66e490-db59-4e3c-b6e2-926da4f074df","Algebraic Connectivity Optimization via Link Addition","Wang, H.; Van Mieghem, P.","","2008","","algebraic connectivity; synchronization; optimization; link addition","en","conference paper","ICST","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","","" "uuid:a8ec762b-8e2a-422f-9978-a6e85673df40","http://resolver.tudelft.nl/uuid:a8ec762b-8e2a-422f-9978-a6e85673df40","Understanding catchment behaviour through model concept improvement","Fenicia, F.","Savenije, H.H.G. (promotor)","2008","This thesis describes an approach to model development based on the concept of iterative model improvement, a process in which different hypotheses of catchment behaviour are progressively tested by trial and error, and the understanding of the system proceeds through a combined process of modelling and experimenting. We present a number of case studies in which we demonstrate the need to combine the power of physical laws and established scientific theories with a qualitative understanding of natural phenomena, which requires creativity and intuition. We emphasize the importance of the 'Art' of modelling, which is often a neglected aspect of scientific research.
We address topical research issues such as reducing model structural uncertainty through progressive understanding of catchment behaviour, incorporating process knowledge in the different stages of model development, linking modelling and experimentation, and understanding the contribution of data to process understanding.","hydrological modelling; calibration; optimization; uncertainty; model structure","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:7cd0b27c-f95b-47c3-969b-36c4b7affa0d","http://resolver.tudelft.nl/uuid:7cd0b27c-f95b-47c3-969b-36c4b7affa0d","Saddle-point construction in the design of lithographic objectives, part 2: Application","Marinescu, O.; Bociort, F.","","2008","","saddle point; lithography; optimization; optical system design; EUV; DUV","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:f16b0c66-bef3-46f9-a84c-174c0e0bc449","http://resolver.tudelft.nl/uuid:f16b0c66-bef3-46f9-a84c-174c0e0bc449","Saddle-point construction in the design of lithographic objectives, part 1: Method","Marinescu, O.; Bociort, F.","","2008","","saddle point; lithography; optimization; optical system design; EUV; DUV","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:324e0e8a-527e-43bb-87c0-8e131654acc9","http://resolver.tudelft.nl/uuid:324e0e8a-527e-43bb-87c0-8e131654acc9","Performance Enhancement of Abrasive Waterjet Cutting","","Karpuschewski, B. (promotor)","2008","Abrasive Waterjet (AWJ) Machining is a recent non-traditional machining process. This technology is widely used in industry for cutting difficult-to-machine-materials, milling slots, polishing hard materials etc. AWJ machining has many advantages, e.g. it can cut net-shape parts, no heat is generated during the cutting process, it is particularly environmentally friendly as it is clean and it does not create dust. 
Although AWJ machining has many advantages, a big disadvantage of this technology is its relatively high cutting cost. Consequently, reducing the machining cost and increasing the profit rate are big challenges in AWJ technology. To reduce the total cutting cost as well as to increase the profit rate, this research focuses on performance enhancement of AWJ cutting with two possible solutions: optimization of the cutting process and abrasive recycling. The first solution to enhance the AWJ cutting performance is the optimization of the AWJ cutting process. As a precondition, it is necessary to have a cutting process model for optimization. In order to use that model for this purpose, several important requirements are given. The most important requirement for such a model is that it can describe the ""optimum relation"" between the optimum abrasive mass flow rate and the maximum depth of cut. To develop a cutting process model which can be used for AWJ optimization, many available models have been analyzed. Since the most important requirement for a process model (see above) can be obtained from Hoogstrate's model, an extension of this model is carried out. The extended model consists of three sub-models: a pure waterjet model, an abrasive waterjet model and an abrasive-work material interaction model. The extended cutting process model is more accurate than the original one and is capable of optimizing AWJ systems. The influence of many process parameters, the work materials, and the abrasive type and size has been taken into account. Up to now, there has been no model for the prediction of AWJ nozzle wear. Therefore, modeling of the nozzle wear rate has been carried out and a model for the wear rate of nozzles made from composite carbide has been proposed. Based on the extended cutting process model, two types of optimization applications have been carried out, related to technical and economic problems.
From the results of these problems, regression models for determining the optimum nozzle exchange diameter and the optimum abrasive mass flow rate for various objectives have been proposed. The other solution to enhance the cutting performance is abrasive recycling. In this study, GMA garnet, the most popular abrasive for blast cleaning and waterjet cutting, has been chosen for the investigation. The recycling of GMA abrasives has been investigated from both a technical and an economic perspective. On the technical side, the reusability and the cutting performance of the recycled and recharged abrasives have been analysed, and the influence of the recycled and recharged abrasives on the cutting quality was studied. On the economic side, first, the cost of recycled and recharged abrasives was predicted. Then, economic comparisons for selecting abrasives were carried out. In addition, the economics of cutting with recycled and recharged abrasives have been studied. Several suggestions for an abrasive recycling process which promises a more effective use of the grains have been proposed. By optimization of the cutting process and by abrasive recycling, the cutting performance can be increased, the total cutting cost can be reduced, and the profit rate can be enlarged considerably. Consequently, the performance of AWJ cutting can be enhanced significantly.","abrasive waterjet; waterjet; optimization; abrasive recycling; modeling","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:20b5a4b5-6419-4593-a668-48074982bcb3","http://resolver.tudelft.nl/uuid:20b5a4b5-6419-4593-a668-48074982bcb3","Model-based lifecycle optimization of well locations and production settings in petroleum reservoirs","Zandvliet, M.J.","Bosgra, O.H. (promotor); Jansen, J.D.
(promotor)","2008","In the coming years there is a need to increase production from petroleum reservoirs, and there is an enormous potential to do so by increasing the recovery factor. This is possible by making better use of recent technological developments, such as horizontal wells, downhole valves and sensors. However, actually making better use of these improved capabilities is difficult because of many open problems in reservoir management and production operations processes. Consequently, there is significant scope to increase the recovery factor of oil and gas fields by tailoring tools from the systems and control community to efficiently perform dynamic optimization of wells (e.g. number, locations) and their production settings (e.g. bottom-hole pressures, flow rates, valve settings) based on uncertain reservoir models, in the sense that they lead to good decisions while requiring limited time from the user. This thesis aims at developing these tools, and the main contributions are as follows. Many production setting optimization problems can be written as optimal control problems that are linear in the control. If the only constraints are upper and lower bounds on the control, these problems can be expected to have pure bang-bang optimal solutions. The adjoint method to derive gradients of a cost function with respect to production settings can be combined with robust optimization to efficiently compute settings that are robust against uncertainty in reservoir models. The gradients used in production setting optimization can be used to efficiently compute directions in which to iteratively improve upon an initial well configuration by surrounding the to-be-placed wells with pseudo wells (i.e. wells that operate at a negligible rate). The controllability and observability properties of a single-phase flow reservoir model are analyzed.
It is shown that pressures near wells in which we can control the flow rate or bottom-hole pressure are controllable, whereas pressures near wells in which we can measure the flow rate or bottom-hole pressure are observable. Finally, a new method of regularization in history matching is presented, based on this controllability and observability analysis.","petroleum; reservoir engineering; systems and control; optimization","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","","" "uuid:4f4b7fb1-4a77-46bb-9c14-ff5e4bb6477c","http://resolver.tudelft.nl/uuid:4f4b7fb1-4a77-46bb-9c14-ff5e4bb6477c","Optimization of extreme ultraviolet mirror systems comprising high-order aspheric surfaces","Marinescu, O.; Bociort, F.","","2008","","mirror systems; aspheres; extreme ultraviolet lithography; optimization; relaxation","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:5feb9aa6-d1bc-482b-8570-7e892bdf3bc5","http://resolver.tudelft.nl/uuid:5feb9aa6-d1bc-482b-8570-7e892bdf3bc5","Optimization based image registration in the presence of moving objects","Karimi Nejadasl, F.; Gorte, B.G.H.; Hoogendoorn, S.P.; Snellen, M.","","2008","","registration; optimization; Differential Evolution; Nelder-Mead; 3D Euclidean","en","conference paper","","","","","","","","","Aerospace Engineering","Remote Sensing","","","","" "uuid:d50848b4-cd08-4482-a824-7d51700be44e","http://resolver.tudelft.nl/uuid:d50848b4-cd08-4482-a824-7d51700be44e","Integrated modeling of ozonation for optimization of drinking water treatment","van der Helm, A.W.C.","van Dijk, J.C. (promotor)","2007","Automation of drinking water treatment plants is becoming more sophisticated, more on-line monitoring systems are becoming available, and the integration of modeling environments with control systems is becoming easier. This gives possibilities for model-based optimization.
In the operation of drinking water treatment plants, the processes are usually optimized individually on the basis of ""rules of thumb"" and operator knowledge and experience. However, changes in the operational conditions of individual processes can affect subsequent processes, and an optimal operation, which can include a number of water quality parameters, costs and environmental impact, is different for every operator. Improvement of the operation of a drinking water treatment plant is possible by using an integrated model of the entire water treatment plant as an instrument for operational support and for process control. For this purpose, it is important that explicit objectives are defined for the operation. From the research it is concluded that the objective for integrated optimization of the operation of drinking water treatment should be the improvement of water quality and not an a priori reduction of environmental impact or costs. In the research, an integrated model for ozonation is developed, including ozone decay, bromate formation, assimilable organic carbon (AOC) formation, E. coli disinfection, CT and decrease in UV absorbance at 254 nm (UVA254). With the model, different control strategies for ozonation are assessed. The research also describes a newly developed design for ozone installations, the dissolved ozone plug flow reactor (DOPFR), and the effect of the character and removal of natural organic matter (NOM) prior to ozonation. The research was carried out as part of the project Promicit, a cooperation of Waternet, Delft University of Technology, DHV B.V. and ABB B.V., and was subsidized by SenterNovem, agency of the Dutch Ministry of Economic Affairs.
Part of the experiments was performed in cooperation with Kiwa Water Research.","modeling; modelling; integrated; ozonation; optimization; drinking water; drinking water treatment; bromate; natural organic matter; nom; disinfection; assimilable organic carbon; aoc; life cycle assessment; lca; bottled water","en","doctoral thesis","Water Management Academic Press","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:28b2169c-2dc0-4258-b572-8c2320cf81d1","http://resolver.tudelft.nl/uuid:28b2169c-2dc0-4258-b572-8c2320cf81d1","Practical guide to saddle-point construction in lens design","Bociort, F.; Van Turnhout, M.; Marinescu, O.","","2007","Saddle-point construction (SPC) is a new method to insert lenses into an existing design. With SPC, by inserting and extracting lenses new system shapes can be obtained very rapidly, and we believe that, if added to the optical designer’s arsenal, this new tool can significantly increase design productivity in certain situations. Despite the fact that the theory behind SPC contains mathematical concepts that are still unfamiliar to many optical designers, the practical implementation of the method is actually very easy and the method can be fully integrated with all other traditional design tools. In this work we will illustrate the use of SPC with examples that are very simple and illustrate the essence of the method. 
The method can be used essentially in the same way even for very complex systems with a large number of variables, in situations where other methods for obtaining new system shapes do not work so well.","optical system design; optimization; saddle points","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:c05ad7d6-5504-4fa4-a14f-496e9bb20928","http://resolver.tudelft.nl/uuid:c05ad7d6-5504-4fa4-a14f-496e9bb20928","Predictability and unpredictability in optical system optimization","Van Turnhout, M.; Bociort, F.","","2007","Local optimization algorithms, when they are optimized only for speed, have in certain situations an unpredictable behavior: starting points very close to each other lead after optimization to different minima. In these cases, the sets of points, which, when chosen as starting points for local optimization, lead to the same minimum (the so-called basins of attraction), have a fractal-like shape. Before it finally converges to a local minimum, optimization started in a fractal region first displays chaotic transients. The sensitivity to changes in the initial conditions that leads to fractal basin borders is caused by the discontinuous evolution path (i.e. the jumps) of local optimization algorithms such as the damped-least-squares method with insufficient damping. At the cost of some speed, the fractal character of the regions can be made to vanish, and the downward paths become more predictable. 
The borders of the basins depend on the implementation details of the local optimization algorithm, but the saddle points in the merit function landscape always remain on these borders.","optimization; optical system design; saddle points; fractals; basins of attraction","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:703cd3c2-8cf4-48f7-babc-8b33cdd38949","http://resolver.tudelft.nl/uuid:703cd3c2-8cf4-48f7-babc-8b33cdd38949","Optimization technique for ED&PE","Kumar, P.; Bauer, P.","","2007","","optimization; BLDC drive","en","conference paper","Tulip","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","","" "uuid:8eff9ef1-b509-4f3d-b1f7-7d1357c53ff8","http://resolver.tudelft.nl/uuid:8eff9ef1-b509-4f3d-b1f7-7d1357c53ff8","Structured controller synthesis for mechanical servo-systems: Algorithms, relaxations and optimality certificates","Hol, C.W.J.","Scherer, C.W. (promotor); Bosgra, O.H. (promotor)","2006","In many application areas of mechanical servo-systems, the high performance demands often imply a tightly tuned feedback controller that takes dynamical interaction into account. Model-based H-optimal controller synthesis is a well-suited technique for this purpose. However, the state-of-the-art synthesis approach yields controllers with a high McMillan degree that cannot be implemented in real-time at high sampling rates because of the limited computational capacity. This motivates constraining the McMillan degree of the controller. The aim of this thesis is to provide numerical tools for H-optimal degree-constrained (or otherwise structured) controller synthesis. For this problem we have developed relaxations that are based on Sum-Of-Squares polynomials. Their optimal values are lower bounds on the globally optimal structured controller synthesis problem and can be computed by solving LMI problems.
It is guaranteed that the bounds converge to the best achievable performance as the relaxations are improved. To make this technique feasible for plants with a high McMillan degree, we proposed a computationally less demanding scheme based on partial dualization. The Sum-Of-Squares relaxations have also been applied to robust polynomial Semi-Definite Programs (SDPs). For this case, too, a sequence of relaxations has been developed whose optimal values converge from below to the optimal value of the robust SDP. Furthermore, an Interior Point algorithm has been developed for the structured controller synthesis problem. It is shown how this algorithm can be made more efficient by exploiting the control-theoretic characteristics of the problem. Conditions have been derived to verify local optimality of the optimized controller. Finally, it has been illustrated by real-time experiments that the algorithms described in this thesis can be used to synthesize high-performing fixed-order controllers for a new prototype of a wafer stage.","controller synthesis; static output feedback; optimization; sumof-squares; matrix inequalities; bmi; lmi; interior point","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","","" "uuid:11464f49-b10b-48ed-9075-9e281514618a","http://resolver.tudelft.nl/uuid:11464f49-b10b-48ed-9075-9e281514618a","Analytical and Numerical Developments in Optimal Shape Design for Aerospace: An overview","Pironneau, O.","","2006","","optimization; optimal shape design; gradient methods; finite element methods","en","conference paper","","","","","","","","","","","","","","" "uuid:63a75aa9-c71e-4439-9d0b-864fe8c2915d","http://resolver.tudelft.nl/uuid:63a75aa9-c71e-4439-9d0b-864fe8c2915d","A continuous adjoint formulation with emphasis to aerodynamic-turbomachinery optimization","Papadimitriou, D.I.; Giannakoglou, K.C.","","2006","This paper summarizes progress recently made in the Lab.
of Thermal Turbomachines of NTUA, on the formulation and use of continuous adjoint methods in aerodynamic shape optimization problems. The basic features of state-of-the-art adjoint methods and tools, which are capable of handling arbitrary objective functions cast in the form of either boundary or field integrals, are presented. The starting point of the presentation is the formulation of the continuous adjoint method for arbitrary integral objective functionals in problems governed by arbitrary, linear or nonlinear, first- or second-order state PDEs; the scope of this section is to demonstrate that the proposed formulation is general without being restricted to aerodynamics. It is noticeable that, regardless of the type of functional (field or boundary integral), the expressions of its gradient with respect to the design variables include boundary integrals only. Thus, the derived adjoints can be used with either structured or unstructured grids, and there is no need for repetitive remeshing or computation of field integrals, which would increase the CPU cost and degrade the computational accuracy. The presentation then focuses on aerodynamic shape optimization problems governed by the compressible fluid flow equations, numerically solved through a time-marching formulation and an upwind discretization scheme for the convection terms. Two design problems, namely the inverse design of a 2D cascade at inviscid flow conditions (used as a test bed for the assessment of three descent algorithms based on the same gradient information) and the design optimization of a 3D peripheral compressor cascade for minimum viscous losses, are presented.
For the latter, the flow is turbulent and the field integral of entropy generation, recently proposed by the same authors, is used as objective function.","continuous adjoint; inverse design; optimization; losses minimization; turbomachines","en","conference paper","","","","","","","","","","","","","","" "uuid:cdc345d1-a0b5-4b70-98fb-bc2235c818a6","http://resolver.tudelft.nl/uuid:cdc345d1-a0b5-4b70-98fb-bc2235c818a6","Application of sonic boom optimization to supersonic aircraft design","Daumas, L.; Dinh, Q.V.; Kleinveld, S.; Rogé, G.","","2006","Preliminary results on shape optimization of a wing-body configuration aiming at reducing sonic boom overpressure will be discussed. The optimization process uses a CAD modeler and an Euler CFD code with adjoint. Thickness, scale, twist and camber at section level were used to obtain gains in ground pressure signature.","adjoint; CAD modeller; optimization; sonic boom; supersonic aircraft design","en","conference paper","","","","","","","","","","","","","","" "uuid:8b3c60a5-4e17-4680-b7c6-252fb4ae87ca","http://resolver.tudelft.nl/uuid:8b3c60a5-4e17-4680-b7c6-252fb4ae87ca","VIVACE: Multidisciplinary Decision Support","Homsi, P.","","2006","","collaboration; multidisciplinary; optimization; decision; knowledge; data management; virtual enterprise; aeronautic; aircraft; engine","en","conference paper","","","","","","","","","","","","","","" "uuid:197e6db7-921d-4786-958d-b0c06079f1fc","http://resolver.tudelft.nl/uuid:197e6db7-921d-4786-958d-b0c06079f1fc","Realistic high-lift design of transport aircraft by applying numerical optimization","Wild, J.; Brezillon, J.; Mertins, R.; Quagliarella, D.; Germain, E.; Amoignon, O.; Moens, F.","","2006","The design activity within the EUROLIFT II project is targeted towards an improvement of the take-off performance of a generic transport aircraft configuration by a re-design of the trailing edge flap. 
The involved partners applied different optimization strategies as well as different types of flow solvers in order to cover a wide range of possible approaches for aerodynamic design optimization. The optimization results obtained by the different partners have been cross-checked in order to eliminate solver dependencies and to identify the best obtained design. The final selected design has been applied to the wind tunnel model and the test in the European Transonic Wind Tunnel (ETW) at high Reynolds number confirms the predicted improvements.","optimization; high-lift; application; CFD; wind tunnel testing","en","conference paper","","","","","","","","","","","","","","" "uuid:8abc533d-b860-46c1-8868-5eabdb33e415","http://resolver.tudelft.nl/uuid:8abc533d-b860-46c1-8868-5eabdb33e415","Partitioned strategies for optimization in FSI","Bletzinger, K.U.; Gallinger, T.; Kupzok, A.; Wüchner, R.","","2006","In this paper the possibility of the optimization of coupled problems in partitioned approaches is discussed. As a special focus, surface coupled problems of fluid-structure interaction are considered. Well established methods of optimization are analyzed for usage in the context of coupled problems and in particular for a solution through partitioned approaches. 
The main benefits expected from choosing a partitioned solution strategy as a basis for the optimization are a high flexibility in the usage of different solvers, and therefore different approaches for the single-field problems, as well as the possibility of applying well-tested and sophisticated methods for the modeling of complex problems.","optimization; coupled problems; fluid-structure interaction; partitioned approach","en","conference paper","","","","","","","","","","","","","","" "uuid:fc982426-38af-4ba7-bc57-c3e44f14c4c6","http://resolver.tudelft.nl/uuid:fc982426-38af-4ba7-bc57-c3e44f14c4c6","Aerodynamic optimization of an airfoil using gradient based method","Mirzaei, M.; Roshanian, J.; Nasrin Hosseini, S.","","2006","A gradient based method is presented for optimization of an airfoil configuration. The flow is governed by two-dimensional, compressible Euler equations. A finite volume code based on an unstructured grid is developed to solve the equations. The procedure is carried out for optimizing an airfoil with the initial configuration of NACA 0012. The advantage of this technique over other gradient based methods is its speed of convergence.","CFD; optimization; gradient; objective function; design variables","en","conference paper","","","","","","","","","","","","","","" "uuid:ea7af067-bd46-48c8-a147-fe4cddc936ec","http://resolver.tudelft.nl/uuid:ea7af067-bd46-48c8-a147-fe4cddc936ec","Looking for order in the optical design landscape","Bociort, F.; Van Turnhout, M.","","2006","In present-day optical system design, it is tacitly assumed that local minima are points in the merit function landscape without relationships between them. We will show, however, that there is a certain degree of order in the design landscape and that this order is best observed when we change the dimensionality of the optimization problem and when we consider not only local minima, but saddle points as well.
We have earlier developed a computational method for detecting saddle points numerically, and a method, then applicable only in a special case, for constructing saddle points by adding lenses to systems that are local minima. The saddle point construction method will be generalized here, and we will show how, by performing a succession of one-dimensional calculations, many local minima of a given global search can be systematically obtained from the set of local minima corresponding to systems with fewer lenses. As a simple example, the results of the Cooke triplet global search will be analyzed. In this case, the vast majority of the saddle points found by our saddle point detection software can in fact be obtained in a much simpler way by saddle point construction, starting from doublet local minima.","saddle point; optimization; optical system design; lithography","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","","" "uuid:cdd281b2-0bc7-4f57-a9fb-3ddbe49c1082","http://resolver.tudelft.nl/uuid:cdd281b2-0bc7-4f57-a9fb-3ddbe49c1082","Designing lithographic objectives by constructing saddle points","Marinescu, O.; Bociort, F.","","2006","Optical designers often insert or split lenses in existing designs. Here, we present, with examples from Deep and Extreme UV lithography, an alternative method that consists of constructing saddle points and obtaining new local minima from them. The method is remarkably simple and can therefore be easily integrated with the traditional design techniques.
It has significantly improved the productivity of the design process in all cases in which it has been applied so far.","saddle point; lithography; optical system design; optimization; DUV; EUV","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:b842a4d0-0708-4c37-b3e7-e86f91c72dd4","http://resolver.tudelft.nl/uuid:b842a4d0-0708-4c37-b3e7-e86f91c72dd4","Challenges for process system engineering in infrastructure operation and control","Lukszo, Z.; Weijnen, M.P.C.; Negenborn, R.R.; De Schutter, B.; Ilic, M.","","2006","The need for improving the operation and control of infrastructure systems has created a demand on optimization methods applicable in the area of complex sociotechnical systems operated by a multitude of actors in a setting of decentralized decision making. This paper briefly presents main classes of optimization models applied in PSE system operation, explores their applicability in infrastructure system operation and stresses the importance of multi-level optimization and multi-agent model predictive control. If you want to cite this report, please use the following reference instead: Z. Lukszo, M.P.C. Weijnen, R.R. Negenborn, B. De Schutter, and M. Ilic, “Challenges for process system engineering in infrastructure operation and control,” in 16th European Symposium on Computer Aided Process Engineering and 9th International Symposium on Process Systems Engineering (Garmisch-Partenkirchen, Germany, July 2006) (W. Marquardt and C. Pantelides, eds.), vol. 21 of Computer-Aided Chemical Engineering, Amsterdam, The Netherlands: Elsevier, ISBN 978-0-444-52969-5, pp. 
95–100, 2006.","infrastructures; optimization; multi-agent systems; model predictive control","en","report","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","","" "uuid:37f7ee07-9bb8-4b13-be8f-dc4d27417b0f","http://resolver.tudelft.nl/uuid:37f7ee07-9bb8-4b13-be8f-dc4d27417b0f","Model reduction for dynamic real-time optimization of chemical processes","Van den Berg, J.","Bosgra, O.H. (promotor)","2005","The value of models in process industries becomes apparent in practice and literature where numerous successful applications are reported. Process models are being used for optimal plant design, simulation studies, for off-line and online process optimization. For online optimization applications the computational load is a limiting factor. The focus of this thesis is on nonlinear model approximation techniques aiming at reduction of computational load of a dynamic real-time optimization problem. Two types of model approximation methods were selected from literature and assessed within a dynamic optimization case study: model reduction by projection and physics-based model reduction. Model order reduction by projection is partially successful. Even with a strongly reduced number of transformed differential equations it is possible to compute acceptable approximate solutions. Projection does not provide predictable results in terms of simulation error and stability and does not reduce the computational load of simulation. 
On the other hand, physics-based model reduction appeared to be very successful in reducing the computational load of the sequential dynamic optimization problem.","chemical processes; model reduction; optimization","en","doctoral thesis","","","","","","","","","Design, Engineering and Production","","","","","" "uuid:a29ca0b4-c17d-4a14-99c0-9672b805021e","http://resolver.tudelft.nl/uuid:a29ca0b4-c17d-4a14-99c0-9672b805021e","Uncertainty-based Design Optimization of Structures with Bounded-But-Unknown Uncertainties","Gurav, S.P.","van Keulen, A. (promotor)","2005","","uncertainty; optimization; response surface; parallel computing; MEMS","en","doctoral thesis","Delft University Press","","","","","","","","Mechanical Maritime and Materials Engineering","","","","","" "uuid:7bf2a037-c8eb-44be-96ef-411529c4be0b","http://resolver.tudelft.nl/uuid:7bf2a037-c8eb-44be-96ef-411529c4be0b","Topology Optimization using a Topology Description Function Approach","de Ruiter, M.J.","van Keulen, F. (promotor)","2005","During the last two decades, computational structural optimization methods have emerged, as computational power increased tremendously. Designers now have topological optimization routines at their disposal. These routines are able to generate the entire geometry of structures, provided only with information on loads, supports, and space to work in. The most common way to do this is to partition the available space in elements, and to determine the material content of each of the elements separately. This thesis presents a different approach, namely the \emph{Topological Description Function} (TDF) approach. The TDF is a function parametrized by design variables. The function determines a geometry using a level-set approach. A finite element representation of the geometry then is used to determine how well the geometry performs with respect to objective and constraints. 
This information is given to an optimization program, which has the purpose of finding an optimal combination of values for the design variables. This approach decouples the geometry description of the design from the evaluation, allowing the designer to tune the level of detail of the geometry and the computational grid separately as desired. In this thesis, the concept of a TDF is explained in detail. Using a genetic algorithm for the optimization turns out to be too computationally expensive; however, it shows the validity of the TDF as a geometry description method. A method based on an intuitive updating scheme shows that the TDF approach can be used to do topology optimization.","level set method; topology; optimization; tdf; topology description function; genetic algorithm; optimality criteria method; structural optimization","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","","" "uuid:33282f5f-e093-4a9a-88e8-819ccfb40114","http://resolver.tudelft.nl/uuid:33282f5f-e093-4a9a-88e8-819ccfb40114","Model-based optimization of the operation procedure of emulsification","Stork, M.","Bosgra, O.H. (promotor)","2005","Emulsions are widely encountered in the food and cosmetic industry. The first food we consume is an emulsion, namely breast milk. Other common emulsions are mayonnaise, dressings, skin creams and lotions. Equipment often used for the production of oil-in-water emulsions in the food industry consists of a stirred vessel in combination with a colloid mill and a circulation pipe. Within this set-up there are two main variations: i) Configuration I, where the colloid mill acts as a shearing device and at the same time as a pump. This configuration is used in the majority of the production facilities; and ii) Configuration II, where the shearing and pumping action are not coupled.
The operation procedure for obtaining a certain predefined emulsion quality is often established based on experience (best practice). This is most probably time-consuming (e.g. large experimental efforts for newly developed products) and it is also unclear whether the process is operated at its optimum (e.g. in minimum time). Another drawback is that there is no feedback during the production process. Hence, it is not possible to deal with disturbances acting on the process. A possible consequence is that, at the end of the production process, the product quality specifications are not met and the product has to be classified as off-spec. In order to be able to increase the efficiency of the production processes and to shorten the time to market of new products - and therewith create an advantage over the competition - it is necessary to overcome these limitations of the current operation procedure. In the work reported here, a first step is taken in this direction. A model describing the droplet size distribution (DSD) and the emulsion viscosity as a function of time was developed and several off-line optimization studies were performed. The model comprises several fit parameters and experiments were performed in order to estimate the values of these parameters. A number of additional experiments were performed to compare the simulated results with the measurements (model validation). The results of the parameter estimation and the model validation show that the simulated results are qualitatively in good agreement with the measurement data. Given the overall performance of the model it is expected that the model quality is sufficient to render practically relevant optimization results.
Although the optimization studies have been performed for a model emulsion and small-scale equipment, and are not yet experimentally validated, the results of this work strongly suggest that it is indeed possible to minimize the production times and to shorten the product development times for new products. This overall conclusion is based on the following observations: 1) The optimization results show that it is beneficial to produce emulsions with Configuration II: - Configuration II allows the production of emulsions with a bi-modal DSD. No operation procedure was found for the production of such an emulsion in Configuration I. - The production of emulsions in Configuration II is always at least as fast as in Configuration I. 2) The approach followed makes it possible to calculate: * Whether an emulsion with a certain, predefined, DSD and emulsion viscosity can be produced. * How the process should be controlled in order to produce such an emulsion. * How the process should be controlled to produce this emulsion in minimal time. 3) The optimization results show that it is possible to produce emulsions with: * A bi-modal DSD. * Less oil, while maintaining a similar DSD and value of the emulsion viscosity evaluated at a shear rate of 10 1/s, by adapting only the operation procedure. Hence, the addition of extra stabilizers is not considered. This offers possibilities for the production of a broader range of emulsion products and could direct product development in a new direction.
Based on this, it is worthwhile and therefore recommended to expand this research work in the direction of industrial emulsions.","modeling; emulsions; emulsification; optimization; milp; parameter estimation; fryma-delmix; colloid mill; population balance equations; droplet size distribution; mayonnaise","en","doctoral thesis","","","","","","","","","Design, Engineering and Production","","","","","" "uuid:e15f936a-9439-4247-b0f9-051619b34cd4","http://resolver.tudelft.nl/uuid:e15f936a-9439-4247-b0f9-051619b34cd4","Finding new local minima by switching merit functions in optical system optimization","Serebriakov, A.; Bocoirt, F.; Braat, J.","","2005","","optical design; geometrical optics; optimization; merit function; aberrations","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:43fb3a2f-0c02-406a-ad7d-374ec5f71d63","http://resolver.tudelft.nl/uuid:43fb3a2f-0c02-406a-ad7d-374ec5f71d63","Optimization and analysis of deep-UV imaging systems","Serebriakov, A.G.","Braat, J.J.M. (promotor)","2005","This thesis has been devoted to two main subjects: the compensation of birefringence induced by spatial dispersion (BISD) in Deep-UV lithographic objectives and the optimization of optical systems in general.","optimization; lithography; optics","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","","" "uuid:05dfafdc-cd7c-4b17-a92f-8420e5bb78a0","http://resolver.tudelft.nl/uuid:05dfafdc-cd7c-4b17-a92f-8420e5bb78a0","Generating saddle points in the merit function landscape of optical systems","Bociort, F.; Van Turnhout, M.","","2005","Finding multiple local minima in the merit function landscape of optical system optimization is a difficult task, especially for complex designs that have a large number of variables. We discuss here a method that enables a rapid generation of new local minima for optical systems of arbitrary complexity. 
We have recently shown that saddle points known in mathematics as Morse index 1 saddle points can be useful for global optical system optimization. In this work we show that by inserting a thin meniscus lens (or two mirror surfaces) into an optical design with N surfaces that is a local minimum, we obtain a system with N+2 surfaces that is a Morse index 1 saddle point. A simple method to compute the required meniscus curvatures will be discussed. Then, letting the optimization roll down on both sides of the saddle leads to two different local minima. Often, one of them has interesting special properties.","saddle point; optimization; optical system design; lithography","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","","" "uuid:ab738b03-b906-4dc7-9e9c-6ac16446af10","http://resolver.tudelft.nl/uuid:ab738b03-b906-4dc7-9e9c-6ac16446af10","Saddle points in the merit function landscape of lithographic objectives","Marinescu, O.; Bociort, F.","","2005","The multidimensional merit function space of complex optical systems contains a large number of local minima that are connected via links that contain saddle points. In this work, we illustrate a method to construct such saddle points with examples of deep UV objectives and extreme UV mirror systems for lithography. The central idea of our method is that, at certain positions in a system with N surfaces that is a local minimum, a thin meniscus lens or two mirror surfaces can be introduced to construct a system with N+2 surfaces that is a saddle point. When the optimization goes down on the two sides of the saddle point, two minima are obtained. We show that often one of these two minima can be reached from several other saddle points constructed in the same way. 
The practical advantage of saddle-point construction is that we can produce new designs from the existing ones in a simple, efficient and systematic manner.","saddle point; lithography; optimization; optical system design; EUV","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","","" "uuid:1e3ce36d-f1f6-4fbd-9349-42ba2352d668","http://resolver.tudelft.nl/uuid:1e3ce36d-f1f6-4fbd-9349-42ba2352d668","The network structure of the merit function space of EUV mirror systems","Marinescu, O.; Bociort, F.","","2005","The merit function space of mirror systems for EUV lithography is studied. Local minima situated in a multidimensional merit function space are connected via links that contain saddle points and form a network. In this work we present the first networks for EUV lithographic objectives and discuss how these networks change when control parameters, such as aperture and field are varied and constraints are used to limit the variation domain of the variables. 
A good solution in a network obtained with a limited number of variables has been locally optimized with all variables to meet practical requirements.","network; saddle point; optical system design; EUV lithography; optimization","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","","" "uuid:a4d313dc-81f6-4f5f-a83a-404f539aa838","http://resolver.tudelft.nl/uuid:a4d313dc-81f6-4f5f-a83a-404f539aa838","Optimization of multilayer reflectors for extreme ultraviolet lithography","Bal, M.F.; Singh, M.; Braat, J.J.M.","","2004","","multilayer; optimization; extreme ultraviolet lithography; graded multilayers; imaging","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:c253f0fa-a879-422b-8027-b3de1f91775a","http://resolver.tudelft.nl/uuid:c253f0fa-a879-422b-8027-b3de1f91775a","Avoiding unstable regions in the design space of EUV mirror systems comprising high-order aspheric surfaces","Marinescu, O.; Bociort, F.; Braat, J.","","2004","When Extreme Ultraviolet mirror systems having several high-order aspheric surfaces are optimized, the configurations often enter highly unstable regions of the parameter space. Small changes of system parameters then lead to large changes in ray paths, and therefore optimization algorithms crash because certain assumptions upon which they are based become invalid. We describe a technique that keeps the configuration away from the unstable regions. The central component of our technique is a finite-aberration quantity, the so-called quasi-invariant, which was originally introduced by H. A. Buchdahl. The quasi-invariant is computed for several rays in the system, and its average change per surface is determined for all surfaces. Small values of these average changes indicate stability.
The stabilization technique consists of two steps: first, we obtain a stable initial configuration for subsequent optimization by choosing the system parameters such that the quasi-invariant change per surface is minimal. Then, if the average changes per surface of the quasi-invariant remain small during optimization, the configuration is kept in the safe region of the parameter space. This technique is applicable to arbitrary rotationally symmetric optical systems. Examples from the design of aspheric mirror systems for EUV lithography will be given.","mirror systems; aspheres; EUV lithography; optimization; relaxation","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","","" "uuid:b73b1b5b-e1d8-4151-a920-6cd5d44af136","http://resolver.tudelft.nl/uuid:b73b1b5b-e1d8-4151-a920-6cd5d44af136","Dynamic Optimization in Business-wide Process Control","Tousain, R.L.","Bosgra, O.H. (promotor); Backx, A.C.P.M. (promotor)","2002","The chemical marketplace is a global one with strong competition between manufacturers. To continuously meet the customer demands regarding product quality and delivery conditions without the need to maintain very large storage levels, chemical manufacturers need to strive for production on demand. In this thesis we research how market-oriented production can be realized for the particular class of multi-grade continuous processes. For this class of processes, production on demand is particularly challenging due to the complex trade-off between performing costly and time-consuming changeovers and maintaining high storage levels. The first requirement for market-oriented production is that production management cooperates with purchasing and sales management. We propose the use of a scheduler as a decision support system in a cooperative organization constituted by these players.
In such a scheduler, decision making is represented using decision variables, and their effect on the company-wide objective, which is chosen to be the added value of the company, is modeled. The scheduler then selects a decision strategy that is optimal with respect to the objective and presents this strategy to the decision makers, who use it as a basis for their actual decisions. The company-market interaction is modeled using a transaction-based modeling framework, in which not the actual market behavior but the expected effect of the company's interaction with the market is modeled. Two types of transactions can be modeled in this framework: orders, which result from contracts with suppliers and customers, and opportunities, which express the expected sales and purchases. Two different approaches to the modeling of production decisions are taken, the choice of which depends largely on the implementation of the process control hierarchy that is assumed. In the first approach, production management and control is performed by a single-level controller and the control decisions are the minute-to-minute manipulation of the valves. This approach is academically interesting, though practically intractable due to the combination of long horizons and fast sampling times. In the second approach, the process control hierarchy consists of a scheduling layer, at which it is determined what products will be produced when, and a process control layer, which determines how this production is realized. This approach is taken in the rest of the thesis.","chemical processes; optimization; supply chain","en","doctoral thesis","Delft University Press","","","","","","","","Design, Engineering and Production","","","","","" "uuid:e7367a12-2b86-4e56-931c-0e3bbcb93211","http://resolver.tudelft.nl/uuid:e7367a12-2b86-4e56-931c-0e3bbcb93211","Water Demand Management. Approaches, Experiences and Application to Egypt","Mohamed, A.S.","Van Beek, E. (promotor); Savenije, H.G.
(promotor)","2001","","Egypt; demand management; conservation; reuse; new lands; framework for analysis; strategies; criteria; optimization; financial incentives; water resources management","en","doctoral thesis","Delft University Press","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:0bc0134e-c5e8-4062-956d-979d049352a8","http://resolver.tudelft.nl/uuid:0bc0134e-c5e8-4062-956d-979d049352a8","Dynamic Water-System Control - Design and Operation of Regional Water-Resources Systems","Lobbrecht, A.H.","Segeren, W.A. (promotor); Lootsma, F.A. (promotor)","1997","","water management; water resources; control system; real-time control; dynamic control; optimization; successive linear programming; interests; strategy; design","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:6b34b76a-72e7-4922-9a6a-b2f389b53877","http://resolver.tudelft.nl/uuid:6b34b76a-72e7-4922-9a6a-b2f389b53877","Verkenning genetische algorithmen, een hulpmiddel bij de inrichting van een Rijntak","Goossens, J.G.C.M.; Boogaard, H.F.P. van den","","1996","","Waal; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","","" "uuid:d1f186a5-6601-4bfb-a72f-9e007977d6e9","http://resolver.tudelft.nl/uuid:d1f186a5-6601-4bfb-a72f-9e007977d6e9","Interior point techniques in optimization: Complementarity, sensitivity and algorithms","Jansen, B.","Lootsma, F.A. (promotor); Boender, C.G.E. 
(promotor)","1996","","optimization; sensitivity analysis; interior point algorithms","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","","" "uuid:e80f3094-dbf5-4df2-b9e5-73e0937e26ec","http://resolver.tudelft.nl/uuid:e80f3094-dbf5-4df2-b9e5-73e0937e26ec","Fuzzy predictive control based on human reasoning","Babuska, R.; Sousa, J.; Verbruggen, H.B.","","1995","","predictive control; fuzzy decision making; optimization; learning","en","conference paper","Delft University of Technology","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","","" "uuid:717630e4-194c-4d2a-b4d1-d7f3929b5608","http://resolver.tudelft.nl/uuid:717630e4-194c-4d2a-b4d1-d7f3929b5608","User's manual for the computer program CUFUS: Quick design procedure for a CUt-out in a FUSelage version 1.0","Heerschap, M.E.","","1995","","Structural design procedures; cut-outs; pressurized fuselages; finite elements; optimization; sensitivity analysis; NASTRAN; PATRAN","en","report","Delft University of Technology","","","","","","","","Aerospace Engineering","","","","","" "uuid:afd31d18-2efe-4149-afbe-a8f946c7c2c7","http://resolver.tudelft.nl/uuid:afd31d18-2efe-4149-afbe-a8f946c7c2c7","Optimization of design of IMS racing yachts","van Oossanen, P.","","1995","","optimization; yachts","","other","","","","","","","","indefinite","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Ship Design, Production and Operation","","","" "uuid:a65dcff7-5005-4a96-9b25-0789d7ea095a","http://resolver.tudelft.nl/uuid:a65dcff7-5005-4a96-9b25-0789d7ea095a","Lokatiekeuze monsternamestation in de Nieuwe Waterweg: Optimalisatiestudie meetlokatie(s) en methodiek","Bleeker, F.J.; Bons, C.A.","","1993","","waterkwaliteitsmeting; water quality measurement; Nieuwe Waterweg; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","","" 
"uuid:f381200a-8c95-47b7-911e-963241f5d4fc","http://resolver.tudelft.nl/uuid:f381200a-8c95-47b7-911e-963241f5d4fc","Computer aided optimum design of rubble-mound breakwater cross-sections: Manual of the RUMBA computer package, release 1","De Haan, W.","","1989","The computation of the optimum rubble-mound breakwater cross-section is executed on a micro-computer. The RUMBA computer package consists of two main parts: the optimization process is executed by a Turbo Pascal programme, and the second part consists of editing functions written in AutoLISP, the programming language within AutoCAD. The quarry production, divided into a number of categories, and long-term distributions of deep-water wave heights and water levels form the basis of the computation. Concrete armour units have been excluded from the computation. Deep-water wave heights are converted to wave heights at the site. A set of alternative cross-sections is computed based on both functional performance criteria and Van der Meer's stability formulae for statically stable structures. Construction costs and maintenance costs are determined for each alternative. The optimum is derived by minimizing the sum of the construction costs and maintenance costs. Moreover, the programme provides means to economize the use of the quarry. At this stage the computer programme is useful for feasibility studies of harbour protection or coastal protection in regions where use can be made of a quarry in the neighbourhood of the project site and the use of concrete armour units is excluded in advance. 
A method to extend the computer programme to the use of concrete armour units is briefly described.","breakwater; armour units; optimization","en","report","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","","" "uuid:3a4a1ebc-f64a-4fba-8d46-b62dd47ca290","http://resolver.tudelft.nl/uuid:3a4a1ebc-f64a-4fba-8d46-b62dd47ca290","Illustrative examples of optimization techniques for quantitative and qualitative water management: Report on investigation","Verhaeghe, R.J.; Tholen, N.","","1983","","waterbeheer; water resources management; waterkwaliteit; water quality; optimalisering; optimization","en","report","Deltares (WL)","","","","","","","","","","","","","" "uuid:4d4806a8-3c2d-4e3e-abe2-f0a40476ef72","http://resolver.tudelft.nl/uuid:4d4806a8-3c2d-4e3e-abe2-f0a40476ef72","Optimalisatie op basis van lineair programmeren (LP) en dynamisch programmeren (DP): Mogelijkheden en beperkingen","Abraham, G.; Beek, E. van","","1982","","beslissingsondersteunende systemen (BOS); decision support systems (DSS); waterbeheer; water resources management; programmering; programming; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","","" "uuid:09369434-a255-45f4-a816-baa09f830394","http://resolver.tudelft.nl/uuid:09369434-a255-45f4-a816-baa09f830394","Optimalisatietechnieken in kwantitatief waterbeheer: Ontwerp van beheerstrategieën in PAWN","Samson, J.; Dijkman, J.P.M.","","1981","","beslissingsondersteunende systemen (BOS); decision support systems (DSS); waterbeheer; water resources management; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","","" "uuid:d42a86c7-b46c-471f-ad18-2e74cc461b74","http://resolver.tudelft.nl/uuid:d42a86c7-b46c-471f-ad18-2e74cc461b74","Optimalisatietechnieken in kwantitatief en kwalitatief waterbeheer","Verhaeghe, R.J.","","1978","","waterbeheer; water resources management; waterkwaliteit; water quality; grondwaterbeheer; 
groundwater management; watervoorziening; water supply; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","","" "uuid:3bfeced0-7f7b-4cda-82a3-be291e9d8ffe","http://resolver.tudelft.nl/uuid:3bfeced0-7f7b-4cda-82a3-be291e9d8ffe","Conception de réseau iBGP","Buob, M.O.; Uhlig, S.; Meulle, M.","","","BGP is used today by all Autonomous Systems (ASs) in the Internet. Inside each AS, iBGP sessions distribute the external routes among the routers. In large ASs, relying on a full-mesh of iBGP sessions between routers is not scalable, so route-reflection is commonly used. The scalability of route-reflection compared to an iBGP full-mesh comes at the cost of opacity in the choice of best routes by the routers inside the AS. This opacity induces problems such as suboptimal route choices in terms of IGP cost, deflection and forwarding loops. In this work, we propose a solution to design iBGP route-reflection topologies that lead to the same routing as an iBGP full-mesh while requiring a minimal number of iBGP sessions. Moreover, we compute a topology that remains robust even if a single node or link failure occurs. We apply our methodology to the network of a tier-1 ISP. Twice as many iBGP sessions are required to ensure robustness to a single IGP failure. The number of required iBGP sessions in our robust topology is, however, not much larger than in the current iBGP topology used in the tier-1 ISP network.","BGP; route-reflection; IBGP topology design; optimization","en","conference paper","CFIP","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Network Architectures and Services","","","",""