A new parametric study, extending the optimization to plate girders using S890 steel, was conducted to address the usefulness of steels with higher yield strength in plate girders subjected to bending. This study again used the geometry adopted by Abspoel. The results showed a decrease in maximum web slenderness, but still a significant increase in bending moment capacity compared to the S690 plate girders. It was shown that using an optimized S890 plate girder instead of a hot-rolled section also made from S890 steel could reduce steel use by more than 80%.

After the parametric studies showed increasing capacity, the geometry used to numerically model the plate girders was critically examined in small-scale numerical studies using FEM software. These studies showed that not only the slenderness of the web is a factor in the bending moment capacity of a plate girder, but that the flange geometry also plays a significant role. It was shown that by increasing the length of the tested part of the girder, the failure mode could change from flange yielding to an unstable mode in which the flange rotates about its longitudinal axis, resulting in a much lower bending moment capacity.

An additional investigation into a hybrid steel composition demonstrated the potential of this optimization: by adding lower-grade steel, more ductility was obtained because these parts yield prior to the compressive flange, resulting in a potentially safer design.","Steel Plate girders; optimization; bending moment capacity; plate buckling; Slender plate girders","en","master thesis","","","","","","","","","","","","","","" "uuid:6eb18a63-3cc5-43d5-a38c-a8992b9cddd1","http://resolver.tudelft.nl/uuid:6eb18a63-3cc5-43d5-a38c-a8992b9cddd1","Future City Hydrogen: Reality or Utopia?: A techno-economical feasibility study of an optimal stand-alone Solar-Electrolyzer-Battery-FuelCell system for residential utilization","Tamarzians, Michel (TU Delft Electrical Engineering, Mathematics and Computer Science)","Smets, Arno (mentor); Isabella, Olindo (graduation committee); Rueda Torres, Jose (graduation committee); Delft University of Technology (degree granting institution)","2019","The world population is growing rapidly, which leads to an increase in energy demand. Simultaneously, the established energy resources are being depleted and contribute negatively to the climate. The necessity for a sustainable and inexhaustible energy source, to deal with the increasing energy demand in an ecologically friendly manner, will play a key role in the 21st century. One of the most predictable and inexhaustible renewable energy sources is the Sun. Nevertheless, changing weather conditions, like rain and clouds, winter and summer, result in daily and seasonal fluctuations. A reliable stand-alone solar system requires a robust storage method to tackle the daily and seasonal fluctuations that can potentially result in deficit or dumped energy.

Generally, a battery bank is adopted in stand-alone solar systems, but its low energy density makes a battery bank unsuitable as a seasonal storage method. Seasonal storage can instead be implemented through the production and consumption of hydrogen. Hydrogen has a high energy density compared to batteries (142 MJ/kg vs 0.95 MJ/kg), but its low round-trip efficiency prevents implementing hydrogen as a daily storage method. For a highly reliable and optimally sized stand-alone energy system, a combination of a battery bank and hydrogen is therefore used as a robust storage method. The combined storage method can be used in times
of excess and deficit energy. This results in a so-called stand-alone hybrid PV-Electrolyzer-Battery-FC energy system. In this thesis project a stand-alone hybrid PV-Electrolyzer-Battery-FC energy system is modelled and optimized to determine the current and future feasibility, both technological and economic, for residential utilization. A simulation model of the hybrid energy system is designed in TRNSYS. The model is optimized by minimizing the loss of load probability (LLP) and levelized cost of energy (LCOE) for the stand-alone hybrid PV-Electrolyzer-Battery-FC energy system at residential level in TRNOPT. Several cases are optimized based on the electrical, heat and mobility demand. The optimization method used is a combination of the particle swarm optimization (PSO) and Hooke-Jeeves optimization algorithms, implemented by GenOpt.

It is established that the proposed stand-alone hybrid PV-Electrolyzer-Battery-FC system is technically feasible for fulfilling the annual electrical demand of a typical Dutch household. The feasible system size consists of 19 PV modules, a battery capacity of 25.5 kWh and a tank volume of 1.24 cubic meters, for an LCOE of 1.04 €/kWh. If the future prices of the main components can be reduced to 0.01 €/Wp for PV, 0.01 €/Wh for the battery and 0.01 €/W for the electrolyzer and fuel cell, the hybrid system can potentially reach an LCOE of 0.28 €/kWh. Reduction of the prices can be realized by large-scale production, large-scale implementation and technology maturity. In the end, an LCOE of 0.17 €/kWh can be realized by renewable energy systems if these future prices are realized and the following conditions are met: (1) the roof area is fully covered by PV modules and (2) the production, consumption and storage of hydrogen are centralized to spread the infrastructure costs over all consumers. This can induce a so-called hydrogen economy in the future, whereby hydrogen gas can be the sustainable link between the increasing energy demand and depleting fossil fuels.","Solar-Battery-Hydrogen System; Alkaline Electrolyzer; PEM fuel cell; Autonomous; Hybrid; optimization; Hooke-Jeeves; Particle Swarm Optimization; Residential; Netherlands","en","master thesis","","","","","","","","","","","","Sustainable Energy Technology","","" "uuid:6d0e608e-b4d6-4d7f-8f6e-1ffed2802347","http://resolver.tudelft.nl/uuid:6d0e608e-b4d6-4d7f-8f6e-1ffed2802347","An optimization based approach to autonomous drifting: A scaled implementation feasibility study","Verlaan, Bram (TU Delft Mechanical, Maritime and Materials Engineering; TU Delft Delft Center for Systems and Control)","Keviczky, Tamas (mentor); Delft University of Technology (degree granting institution)","2019","Development of the autonomous vehicle has been a trending topic over the last few years.
The automotive industry is continuously developing Advanced Driver-Assistance Systems (ADAS) that partially take over the driver’s workload. This has resulted in an increase in vehicle safety and a decrease in fatal crashes [1]. Full vehicle autonomy has not yet been reached, as the control systems involved are not yet capable of handling every situation. One of these critical situations is when a vehicle enters the unstable motion of drifting. A vehicle is prone to drifting on low-friction surfaces, and also during these generally unstable maneuvers the autonomous system should be able to remain in control. An autonomous drifting controller should draw on the experience of rally drivers in handling and keeping control of a vehicle while drifting. The objective of this thesis is to design a control system which is capable of handling a vehicle during a drifting motion and following a desired path. Vehicle dynamics are modeled as a three-state bicycle model to simplify the complex dynamics of the vehicle and the interaction between tyre and road. The definition of longitudinal wheel slip is reformulated to a smooth alternative to accommodate gradient-based solving. With the system dynamics defined, the drifting motion is analyzed and equilibrium points are identified, showing differences between low- and high-friction surfaces. Initially, a Model Predictive Control (MPC) strategy is applied with the purpose of steering the vehicle to desired drifting equilibria. Hereafter, the control system is extended to provide path-following properties, and the addition of a dynamic velocity controller allows a larger range of equilibria to be reached. The simulation setup intends to capture the experimental environment in the Network Embedded Robotics DCSC lab (NERDlab) at the Delft Center for Systems and Control (DCSC) department.
Simulating a 1:10 scaled model makes it possible to investigate the challenges that arise when implementing the control strategy on a scaled vehicle. These simulations show that autonomous drift control using the designed MPC strategy is possible, even when accounting for possible uncertainties such as delay, noise, and model mismatch.","optimization; control; autonomous; drifting; vehicle","en","master thesis","","","","","","","","","","","","Mechanical Engineering | Systems and Control","","" "uuid:e08c31c2-1371-465d-a5bc-666433945249","http://resolver.tudelft.nl/uuid:e08c31c2-1371-465d-a5bc-666433945249","Landing Gear Design Integration for the TU Delft Initiator","van Oene, Nick (TU Delft Aerospace Engineering)","Vos, Roelof (mentor); Brügemann, Vincent (graduation committee); Veldhuis, Leo (graduation committee); Delft University of Technology (degree granting institution)","2019","The Delft University of Technology is developing an MDO tool for the conceptual design of transport aircraft. However, the current program is not able to investigate the influence of the undercarriage design on the weight, drag and geometry of transport aircraft. This research proposes a new design method for the undercarriage, for which a new design module is created and integrated
into the Initiator architecture. This new method allows the user to investigate the influence of the undercarriage design on the weight, drag and geometry of a transport aircraft concept.

By designing the undercarriage for six existing aircraft, it is shown that the updated Initiator is able to reliably and consistently design an undercarriage for a given transport aircraft concept. Also, two test cases demonstrate that the new method allows the user to evaluate the impact of the undercarriage on the drag, weight and geometry of the concept.","Landing gear; Undercarriage; Concept Design; MDO; Initiator; optimization","en","master thesis","","","","","","","","","","","","Aerospace Engineering","","" "uuid:396e5112-19a0-48d3-b83d-9341a9fad583","http://resolver.tudelft.nl/uuid:396e5112-19a0-48d3-b83d-9341a9fad583","Pumping when the wind blows: Demand response in the Dutch delta","van der Heijden, Ties (TU Delft Civil Engineering and Geosciences)","Abraham, Edo (mentor); van Nooijen, Ronald (mentor); Palensky, Peter (mentor); Lugt, Dorien (mentor); Delft University of Technology (degree granting institution)","2019","This thesis investigates the potential of a large pumping station in IJmuiden, the Netherlands, for participating in Demand Response. Due to climate change, renewable energy is on the rise. The intermittency of energy, together with its unpredictable supply, is a big hurdle for the energy transition. Two methods are promising solutions to this problem: large-scale energy storage and demand response. Since large-scale energy storage is not yet economically feasible, demand response has an important role to play in the early days of the energy transition. Using energy when it is generated requires a data stream from the generation facilities on production, which is not (yet) widely available. The market price, however, is an indication of the scarcity of energy, since it is based on the ratio between supply and demand. Besides that, there is a correlation between a low energy price and sustainable energy production, since the marginal costs of sustainable energy production are lower than those of fossil energy production.
This makes using sustainable energy cheaper than fossil energy, and gives Demand Response a business case. In this thesis, a Model Predictive Controller is created that uses energy market data to minimize energy costs. Multiple energy markets are analyzed with respect to their suitability for the pumping station in IJmuiden to act on them. The day-ahead market is called the APX in the Netherlands, and this is where energy is bought and sold the day before consumption. The intraday market, also called the flexibility market, is where energy can be bought and sold up to 5 minutes before consumption. A strategy combining these two markets is evaluated. This is done by using a predicted day-ahead price, generated by a SARIMA model, to create a plan. This plan is then followed, but deviations from the plan are allowed against the intraday market price. Due to imperfections of the market (mismatch between supply and demand), imbalances occur. These imbalances result in frequency deviations of the grid, and voltage deviations. TenneT, the Dutch TSO (Transmission System Operator), is responsible for minimizing these imbalances. In order to minimize the imbalance, TenneT gives a real-time indication of the imbalance on the grid, and positive contributions are rewarded while negative contributions are punished. This is done through the use of the imbalance price: a price per volume of imbalance caused or solved. The imbalance price is based on the aFRR market, where bids can be made for possible activation. Since the imbalance market is a fast-acting market, it is not suitable for a large pumping station like IJmuiden. However, the aFRR market will be analyzed in this thesis. The effects of expected future developments, like sea level rise and energy market changes, will be analyzed and simulated as well. A higher sea level would result in more pumping, and less discharging under gravity.
This causes the pump schedule to become less flexible. The results show that it is possible to apply demand response to a pumping station, and the intraday market makes it possible for the MPC to adjust its energy use during the day. The aFRR market analysis shows a lot of potential for the pumping station, possibly making up for all energy costs incurred through the spot markets. The conclusion of this thesis is that Rijkswaterstaat can possibly save energy costs on pumping, compared with the fixed energy price they pay now. Based on a reference scenario where the MPC only minimizes energy use, and a fixed ENDEX energy price, the proposed MPC incurs about 10% lower costs in the German market scenario. The Dutch market scenario does not show cost savings. In the Netherlands there is not much correlation between low energy prices and renewable energy yet, since renewable energy is not a big part of the Dutch energy mix. This correlation is expected to become more pronounced as the Dutch energy mix becomes more sustainable, which is expected to result in lower CO2 emissions through the energy use of the pumping station. However, more research is needed to confirm this.","pumping; demand; response; side; management; smart; grid; sustainable; energy; market; day ahead; intraday; optimization; pyomo; ipopt; NLP; mpc; model; predictive; control; schedule; water; ijmuiden; pumping station; ijsselmeer; markermeer; noordzeekanaal; amsterdam-rijnkanaal; rijkswaterstaat","en","master thesis","","","","","","","","","","","","Civil Engineering | Water Management","","52.470852, 4.601499" "uuid:6dd88405-1ecc-4ec8-b2af-58d9a82f349b","http://resolver.tudelft.nl/uuid:6dd88405-1ecc-4ec8-b2af-58d9a82f349b","Convex Modeling of Pumps in Order to Optimize Their Energy Use","Horváth, K. (Eindhoven University of Technology); van Esch, B. (Eindhoven University of Technology); Vreeken, D. (Deltares); Pothof, I.W.M.
(TU Delft Support Process and Energy; Deltares); Baayen, J. (KISTERS Nederland B.V.)","","2019","This study presents convex modeling of drainage pumps so that real-time control systems can be implemented to minimize their energy use. A convex model is built based on pump curves and then used in mixed-integer optimization to allow pumps to be turned on or off. It is implemented as an extension to the open source software package RTC-Tools. The formulation is such that the continuous relaxations of the mixed-integer problem are convex, hence branch-and-bound techniques may be used to find a global optimum. The formulation can be used for variable-speed and constant-speed pumps. There are several possible applications, such as optimization of polder systems, pumped-storage systems, or certain water distribution networks. Finally, an example of a drainage pump is presented to compare the method to current methods and show that energy can be saved by using the proposed method.","channel; control; convex; drainage; optimization; pump","en","journal article","","","","","","","","","","","","","","" "uuid:cbde185b-7612-4915-b13f-47adb099b0b2","http://resolver.tudelft.nl/uuid:cbde185b-7612-4915-b13f-47adb099b0b2","Integration of Genetic Algorithm and Monte Carlo Simulation for System Design and Cost Allocation Optimization in Complex Network","Baladeh, Aliakbar Eslami (MAPNA Group, Tehran); Khakzad Rostami, N. (TU Delft Safety and Security Science)","","2019","Complex networks play a vital role in reliability analysis of real-world applications, demanding precise and accurate analysis methods for optimal allocation of cost and reliability. Since the configuration of a system may change with every feasible solution of the cost allocation optimization problem, finding the best arrangement of the system can become very challenging.
This paper presents a novel methodology combining Genetic Algorithm (GA) and Monte Carlo (MC) simulation approaches to simultaneously optimize cost allocation and system configuration in complex networks. GA is used to generate configuration-cost pairs while MC simulation is used to evaluate the reliability of the system for each pair. The application of the developed methodology is demonstrated for power grids as an example of critical complex networks. The results show that the proposed methodology can be readily used in practice.","complex networks; cost allocation; genetic algorithm; Monte Carlo simulation; optimization; Reliability","en","conference paper","Institute of Electrical and Electronics Engineers Inc.","978-1-7281-0238-2","","","","","","","","","","","","","" "uuid:975b11c5-3b96-4e02-8129-ec2171c0114b","http://resolver.tudelft.nl/uuid:975b11c5-3b96-4e02-8129-ec2171c0114b","Optimal combined proton-photon therapy schemes based on the standard BED model","ten Eikelder, S.C.M. (Tilburg University); den Hertog, D (Tilburg University); Bortfeld, Thomas (Massachusetts General Hospital); Perko, Z. (TU Delft RST/Reactor Physics and Nuclear Materials; TU Delft RST/Fundamental Aspects of Materials and Energy; Physics Research Group; Massachusetts General Hospital)","","2019","This paper investigates the potential of combined proton-photon therapy schemes in radiation oncology, with a special emphasis on fractionation. Several combined modality models, with and without fractionation, are discussed, and conditions under which combined modality treatments are of added value are demonstrated analytically and numerically. The combined modality optimal fractionation problem with multiple normal tissues is formulated based on the biologically effective dose (BED) model and tested on real patient data.
Results indicate that for several patients a combined modality treatment gives better results in terms of biological dose (up to 14.8% improvement) than single modality proton treatments. For several other patients, a combined modality treatment is found that offers an alternative to the optimal single modality proton treatment, being only marginally worse but using significantly fewer proton fractions, putting less pressure on the limited availability of proton slots. Overall, these results indicate that combined modality treatments can be a viable option, which is expected to become more important as proton therapy centers are spreading but the proton therapy price tag remains high.","biologically effective dose (BED); intensity-modulated radiation therapy (IMRT); multi-modality treatment; optimization; proton therapy","en","journal article","","","","","","Accepted Author Manuscript","","","","","","","","" "uuid:57eb0947-760b-43a9-9826-f96312bae7d0","http://resolver.tudelft.nl/uuid:57eb0947-760b-43a9-9826-f96312bae7d0","Finding the relevance of staff-based vehicle relocations in one-way carsharing systems through the use of a simulation-based optimization tool","Santos, Gonçalo Gonçalves Duarte (Lisbon Technical University; University of Coimbra); Homem de Almeida Correia, G. (TU Delft Transport and Planning; University of Coimbra)","","2019","This paper proposes a real-time decision support tool based on the rolling-horizon principle that manages staff activities (relocations and maintenance) of a one-way carsharing system and considers carpooling the staff in the relocated carsharing vehicles for extra cost reduction. The decision support tool is composed of three elements: a forecasting model, an assignment model and a filter. Two assignment models are proposed and tested: rule-based and optimization.
The rule-based model uses simple rules to respond to system status changes, and the optimization model is a mixed integer programming (MIP) model prepared to work in real time. A simulator was designed to test the decision support tool, and it is applied to the city of Lisbon, Portugal, showing that the benefits of staff relocations can be rather low. It was verified that the number of relocations that can physically be performed by each staff member in the case study provides only a small improvement in the revenues, which is unlikely to overcome the costs associated with hiring and staff activity.","Carsharing; maintenance; optimization; relocations; simulation","en","journal article","","","","","","","","","","","","","","" "uuid:bdc7d9df-33d4-449a-b331-0c60c3b2cb18","http://resolver.tudelft.nl/uuid:bdc7d9df-33d4-449a-b331-0c60c3b2cb18","Replacement optimization of ageing infrastructure under differential inflation","van den Boomen, M. (TU Delft Integral Design and Management); Leontaris, G. (TU Delft Integral Design and Management); Wolfert, A.R.M. (TU Delft Integral Design and Management)","","2019","Ageing public infrastructure assets necessitate economic replacement analysis. A common replacement problem concerns an existing asset challenged by a replacement option. Classic techniques obtained from the domain of engineering economics are the mainstream approach to replacement optimization in practice. However, the validity of these classic techniques is built on the assumption that life cycle cash flows of a replacement option are repetitive. Differential inflation undermines this assumption and therefore more advanced replacement optimization techniques are required under these circumstances. These techniques are found in the domain of operations research and require linear or dynamic programming (LP/DP).
Since LP/DP techniques are complex and time-consuming, the current study develops an alternative model for replacement optimization under differential inflation. This approach builds on the classic capitalized equivalent replacement technique. The alternative model is validated by comparison with a DP model and is shown to be equally accurate for a case with characteristics that apply to many infrastructure assets.","Replacement decisions; asset management; differential inflation; optimization; public infrastructure assets","en","journal article","","","","","","","","","","","","","","" "uuid:1a15154f-7d08-4c5c-bdc1-4966f958e498","http://resolver.tudelft.nl/uuid:1a15154f-7d08-4c5c-bdc1-4966f958e498","Automated dig-limit optimization through simulated annealing","Hanemaaijer, Thijs (TU Delft Civil Engineering and Geosciences)","Wambeke, Tom (mentor); van Duijvenbode, Jeroen (mentor); Buxton, Mike (mentor); Soleymani Shishvan, Masoud (mentor); Delft University of Technology (degree granting institution)","2018","","dig-limit; simulated annealing; mine planning; dig-lines; optimization; meta-heuristic; ore-waste classification; dilution; ore loss","en","master thesis","","","","","","","","","","","","Applied Earth Sciences","","" "uuid:1b747787-0319-4120-be10-0640f344ec5e","http://resolver.tudelft.nl/uuid:1b747787-0319-4120-be10-0640f344ec5e","A Graph Theoretic Approach to Optimal Firefighting in Oil Terminals","Khakzad Rostami, N. (TU Delft Safety and Security Science)","","2018","Effective firefighting of major fires in fuel storage plants can prevent or delay fire spread (domino effect) and eventually extinguish the fire. If sufficient firefighting crews and equipment are available, firefighting will include the suppression of all the burning units and the cooling of all the exposed units.
However, when available resources are not adequate, fire brigades would need to optimally allocate their resources by answering the question “which burning units to suppress first and which exposed units to cool first?” until more resources become available from nearby industrial plants or residential communities. The present study is an attempt to answer the foregoing question by developing a graph theoretic methodology. It has been demonstrated that suppression and cooling of units with the highest out-closeness index will result in an optimum firefighting strategy. A comparison between the outcomes of the graph theoretic approach and an approach based on an influence diagram has shown the efficiency of the graph approach.","oil storage plants; domino effect; firefighting; optimization; graph theory; influence diagram","en","journal article","","","","","","","","","","","","","","" "uuid:31642fd0-f382-4b9a-a78c-5bfdcb48fa31","http://resolver.tudelft.nl/uuid:31642fd0-f382-4b9a-a78c-5bfdcb48fa31","Optimizing closure works: A case study on the Kalpasar closure dam","de Jong, Han (TU Delft Civil Engineering and Geosciences)","Jonkman, Bas (mentor); Mooyaart, Leslie (mentor); Broos, Erik (mentor); van den Bos, Jeroen (graduation committee); Delft University of Technology (degree granting institution)","2018","Constructing a dam across a tidal basin has always been a long-term integral solution to many water-related problems of the surrounding area, such as flooding, river control and fresh water storage. However, the closure works of large basins are accompanied by immense challenges. This research treats the strategy to close the Gulf of Khambhat in India. The project is known as ""Kalpasar"" and aims to create a fresh water reservoir in the Gulf of Khambhat by constructing a 35 km dam across the estuary. The Kalpasar project
has been on the Indian Government's agenda since 1986. Royal Haskoning was involved in the pre-feasibility study, which was presented in 1998. However, due to an alignment change to a more northern position, earlier proposed closure work designs are now considered out of date.

To keep this research relevant over time and to assist the Kalpasar development project in optimizing a new design for the closure works, this research develops a fundamental parametric optimization tool to quickly perform a first-order cost evaluation of possible closure strategies.

The tool, as a product, along with the case results, is delivered to the Kalpasar development project for further design optimization.

Closing the tidal basin involves closing a certain wet cross section along the chosen dam alignment, through which large tidal currents penetrate, caused by tidal differences of up to 11 m. Complexity is caused by increasing tidal flow velocities due to the increasing constriction of the wet cross section during the closure. The developed optimization tool can evaluate and compare six pre-programmed strategies to close a multi-sectional wet cross section in time, on the costs of three fundamental design requirements or ""cost factors"": required dam material, bed protection and equipment. Using a multi-sectional storage model to compute the flow velocities in the gap, the channels and tidal flats can be individually modeled, after which they are linked as a system. The model reacts as a system to changes in flow area caused by closing certain cross sections (a channel or a tidal flat). The individual cross sections can be closed strategically by defining their closure method (horizontal, vertical or sudden), execution phase and construction capacity. These are called ""strategic input parameters"". Defined for all sections, they determine the closure sequence of the system in time. Optimization is achieved when the strategic input parameters define a closing sequence which minimizes the combined cost of all cost factors.

Subsequent to the storage model, three computational models are introduced to quantify the required dam material, bed protection and equipment. Based on earlier research, the material model utilizes only quarried rock for gradual closures and sluice caissons for sudden closures. The equipment model utilizes large dump trucks for horizontal closures and ships or a temporary cable-way/bridge system for vertical closures. The construction capacity is linked to material and bed protection models, since both design requirements are time dependent. Increasing construction capacity can therefore decrease these requirements.

Since the subsequent models largely depend on the flow velocity, an attempt to validate and calibrate the storage model was performed using results from previous research and a 2D-H Delft3D model. Deviations with respect to the Delft3D model were significant (a factor of 2-3), because storage models can only be utilized if the basin size and the remaining gap are small (usability limits). Therefore, calibration was performed by introducing an artificial contraction factor to compensate for the error in the flow velocity. An exponential relation was determined linking the error to the constriction percentage of the gap. With increasing constriction percentage, the error decreased due to the increasing validity of the storage model usability limits. The artificial contraction factor can be used to optimize the closure of the Gulf of Khambhat. However, for general use, the model should be calibrated to each specific site.

Case study results show that when multiple cross sections are used to model the bathymetry instead of a single cross section, the optimal strategy can change from fully vertical to a combination of horizontal and vertical with a specific capacity. Utilizing the developed model for the Kalpasar case is therefore recommended, because the complex bathymetry creates many possible strategies and cannot be reliably modeled with single cross-sectional models. The strategy that showed the most potential for further optimization is: first closing the tidal flats horizontally by forward dumping of rocks, while closing the channels up to 40% of their depth with dumping ships, after which the remaining gap is closed vertically by a cable-way or bridge system. This strategy is commonly suggested by existing literature, thereby increasing the reliability and validity of the optimization model.

A second case study showed negative effects of increasing construction capacity on the total cost. However, these case results are based on assumed costs and cost functions for equipment, which should first be verified by contractors. Bed protection requirements did decrease significantly with increasing construction capacity, showing potential for the development of high-capacity closure equipment to avoid these costs. Future development should focus on vertical closure equipment to decrease both material and bed protection costs.

To conclude the recommendations: more case studies should be performed to quantify the influences of parameters already included in the model, such as the permeability of the dam, the presence of a tidal power facility and the use of a sudden caisson closure to relieve the final closure. Secondly, further validation of the storage model is essential to generate more reliable results. Furthermore, research should be performed into cost functions of existing or new high-capacity equipment for vertical closures, relating costs to construction capacity to improve the usability of the optimization model.

In reservoir simulation the number of parameters is generally extremely high, which makes the computation of this information computationally expensive. Therefore, a multiscale framework is employed to improve the computational efficiency of the forward simulation. Multiscale methods are able to solve the model equations at a computationally efficient coarse scale and can easily interpolate this solution to the fine-scale resolution. Next, we use a Lagrangian set-up together with a multiscale framework to re-derive an efficient formulation for the derivative computation. However, as the multiscale method is prone to errors, this derivative computation is recast in an iterative fashion, using a residual-based iterative multiscale method to provide control of these errors. In this thesis we show that this method generates accurate gradients. In contrast to its high accuracy, the method comprises a computationally heavy smoothing step. This issue can be resolved by making smart use of the Lagrange multipliers to re-derive an efficient iterative multiscale solution strategy. The multipliers are used to identify the domains for which smoothing is required and the regions for which smoothing may be neglected. We show that the newly proposed iterative multiscale goal-oriented method is computationally more efficient and promising for efficient derivative computation, but that more work is required to fully demonstrate its benefit.","multiscale; gradient; computation; lagrange; multipliers; optimization; goal-oriented; adjoint; iterative multiscale; Porous Media; Flow; reservoir simulation","en","master thesis","","","","","","","","","","","","Applied Mathematics","","" "uuid:19dd0340-faa2-416f-bc67-bbc9248a7154","http://resolver.tudelft.nl/uuid:19dd0340-faa2-416f-bc67-bbc9248a7154","Goal Oriented Optimization of Tailored Modes for Reduced Order Modelling: An alternative perspective on Large Eddy Simulation","Xavier, Donnatella Germaine (TU Delft Aerospace Engineering; TU Delft Aerodynamics, Wind Energy & Propulsion)","Hulshoff, Steven (mentor); Delft University of Technology (degree granting institution)","2018","This Master's thesis offers a new perspective on Large Eddy Simulation. The capability of a goal-oriented, model-constrained optimization technique to generate stable reduced order models without any additional stabilization term or subgrid-scale modelling has been demonstrated.
The low dimensional projection modes sought by the optimization program implicitly capture the dissipative scales, thereby ensuring energy balance and eliminating the need for an SGS model.","optimization; Lagrangian; large eddy simulation; variational multiscale; goal function","en","master thesis","","","","","","","","","","","","Aerospace Engineering | Aerodynamics and Wind Energy","","" "uuid:e129aa53-1cca-469e-bf09-80142d4b879c","http://resolver.tudelft.nl/uuid:e129aa53-1cca-469e-bf09-80142d4b879c","Multi-robot parcel sorting systems: Allocation and path finding","van den Heuvel, Bram (TU Delft Electrical Engineering, Mathematics and Computer Science)","van Iersel, Leo (mentor); Delft University of Technology (degree granting institution)","2018","The logistics industry is being modernized using information technology and robots. This change brings a new set of challenges in warehouses. Recently, some companies have started using robot fleets to sort products and parcels. This thesis studies those systems and researches the combinatorial problems that arise within them. Three main optimization problems are identified: 1. Finding an optimal layout of the sorting system on the warehouse floor; 2. Allocating products or parcels to be sorted to robots; 3. Finding paths that all robots can follow concurrently, without colliding. These problems are considered one by one. The first problem is understood on an intuitive level, while the other two are considered more closely. For both problems, several algorithms are considered. Some utilize greedy heuristics while others model the problem at hand precisely using integer linear programming methods. Each algorithm's real-world performance is then assessed using a simulation. Slow, ILP-based algorithms are found to produce optimal solutions for small instances. However, they don't scale well, and are unable to solve large instances.
Greedy approximation algorithms solve all tested problem instance sizes, but produce solutions of lower quality.","optimization; sorting; planning; allocation; path; collision; ILP; makespan; heuristic; greedy; disjoint; parcel; robot; hamiltonian; tree-width; dynamic; programming; multi; commodity; flow; conservation; a-star; rust; integrality; gap; benchmark; test","en","bachelor thesis","","","","","","","","","","","","Applied Mathematics","","" "uuid:da43fc88-c219-446b-999d-24cd0e830a93","http://resolver.tudelft.nl/uuid:da43fc88-c219-446b-999d-24cd0e830a93","Route Optimisation For Mobility-On-Demand Systems With Ride-Sharing","van der Zee, Menno (TU Delft Mechanical, Maritime and Materials Engineering; TU Delft Delft Center for Systems and Control)","Alonso Mora, Javier (mentor); Delft University of Technology (degree granting institution)","2018","Privately owned cars are an unsustainable mode of transportation, especially in cities. New Mobility on Demand (MoD) services should offer a convenient and sustainable alternative to privately owned cars. Notable in this field is the recent rise of ride-sharing services such as those offered by companies like Uber and Grab. Such services, especially when allowing for multiple passengers to share a vehicle, could potentially be a valuable addition to existing modes of transport to offer fast and sustainable door-to-door transportation.

The optimisation of vehicle routes for a MoD fleet is a complex task, especially when allowing for multiple passengers to share a vehicle. Recent studies have presented algorithms that can optimise routes in real-time for large scale ride-sharing systems, but have left opportunities to further enhance fleet performance. The redistribution of idle vehicles towards areas of high demand and the utilisation of high capacity vehicles in a heterogeneous fleet have received little attention. This work presents a method to continuously redistribute idle vehicles towards areas of expected demand and an analysis of fleets with both buses and regular vehicles. Furthermore, a method is proposed to optimise vehicle routes while taking into account vehicle capacities and the future locations of vehicles in anticipation of predicted demand.

In simulations with historical taxi data of Manhattan, 99.8% of transportation requests can be served with a fleet of 3000 vehicles, with an average waiting time of 57.4 seconds and an average in-car delay of 13.7 seconds. Compared to earlier work, a decrease in walk-aways of 95% is obtained for 3000 vehicles, with an 86% decrease in average in-car delay and a 37% decrease in average waiting time. With a small fleet of 1000 buses of capacity 8, 84.6% of requests can still be served, with an average waiting time of 141 seconds and an average in-car delay of 269 seconds. In comparison to prior work, a decrease in walk-aways of 15% is obtained, with a 14% decrease in average in-car delay and a 2% decrease in average waiting time. A heterogeneous fleet of 1000 vehicles consisting of 500 buses and 500 regular vehicles using this new approach can serve approximately the same number of passengers as a homogeneous fleet of 1000 buses using earlier presented algorithms.","optimisation; routing; mobility-on-demand; ride-sharing; ride-sourcing; mobility; transport; optimization; Integer Linear Programming problem; ILP; Mixed integer linear programming; MILP","en","master thesis","","","","","","","","","","","","","","" "uuid:99d5ed9a-c706-4cb6-8caa-9dbc8c9822c9","http://resolver.tudelft.nl/uuid:99d5ed9a-c706-4cb6-8caa-9dbc8c9822c9","Searching for two optimal trajectories: A study on different approaches to global optimization of gravity-assist trajectories that have a backup departure opportunity","Perdeck, Matthias (TU Delft Aerospace Engineering)","Cowan, Kevin (mentor); Delft University of Technology (degree granting institution)","2018","In interplanetary space missions, it is convenient to have a second departure opportunity in case the first is missed. Two distinct approaches to minimizing the maximum of the two Delta-V budgets of such a trajectory pair are developed. The first ('a priori') approach optimizes the variables of both trajectories at once.
The second ('a posteriori') approach first minimizes Delta-V budgets for a range of discrete departure epochs, and then selects the pair whose higher Delta-V is lowest. Furthermore, five different pruning and biasing methods are developed; these prove critical for computational efficiency (number of objective function evaluations). Application to three different gravity-assist (and deep space maneuver) trajectories to Saturn reveals that the a priori approach is more computationally efficient on a trajectory with few variables (3) and that the a posteriori approach is more computationally efficient on a trajectory with many variables (22).","interplanetary; trajectory; optimization; optimisation; gravity-assist; space; flight; flyby","en","master thesis","","","","","","","","2023-06-07","","","","","","" "uuid:44dda417-a658-47d3-998b-48c082c9e989","http://resolver.tudelft.nl/uuid:44dda417-a658-47d3-998b-48c082c9e989","A tensor approach to linear parameter varying system identification","Gunes, Bilal (TU Delft Data-Driven Control)","van Wingerden, J.W. (promotor); Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)","2018","","tensor; LPV; identification; data-driven; wind; turbine; statistics; subspace; optimization; tensor decompositions; multi-linear algebra; SVD; MLSVD; HOSVD; tensor trains; tensor networks; polyadic; engineering; wind energy","en","doctoral thesis","","","","","","","","","","","","","","" "uuid:6be0d327-6da6-419f-a6a5-19fd44b1245d","http://resolver.tudelft.nl/uuid:6be0d327-6da6-419f-a6a5-19fd44b1245d","Optimization of water allocation in the Shatt al-Arab River under different salinity regimes and tide impact","Abdullah, A.D.A. (University of Missan); Castro Gama, M.E. (UNESCO-IHE); Popescu, Ioana (UNESCO-IHE; Politehnica University of Timisoara); van der Zaag, P. (TU Delft Water Resources; UNESCO-IHE); Karim, Usama F.A. 
(University of Twente); Al Suhail, Qusay (University of Basrah)","","2018","Wastewater effluents from irrigation and the domestic and industrial sectors have serious impacts on water quality in many rivers, particularly in areas under tidal influence. There is a need to develop an approach that considers the impacts of both human and natural causes of salinization. This study uses a multi-objective optimization–simulation model to investigate and describe the interactions of such impacts in the Shatt al-Arab River, Iraq. The developed model is able to reproduce the salinity distribution in the river under varying conditions. The salinity regime in the river varies according to different hydrological conditions and anthropogenic activities. Due to tidal effects, salinity caused by drainage water is seen to intrude further upstream into the river. The applied approach provides a way to obtain optimal solutions in which both the river salinity and the deficit in water supply are minimized. The approach is used for exploring the trade-off between these two objectives.","drainage water; optimization; salinity; Shatt al-Arab River; tidal influence; water management","en","journal article","","","","","","","","2019-03-31","","","","","","" "uuid:66eca2a7-321d-44cd-ba1d-9ad501f80177","http://resolver.tudelft.nl/uuid:66eca2a7-321d-44cd-ba1d-9ad501f80177","Evaluation and optimization of the control system of the Symphony Wave Power Device","Sfikas, Ilias (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Electrical Sustainable Energy)","Polinder, Henk (mentor); Dong, Jianning (graduation committee); Smets, Arno (graduation committee); Delft University of Technology (degree granting institution)","2018","Rising environmental concerns have stimulated the development of renewable energy, including energy from the oceans, which hold huge potential.
In this thesis, particular emphasis is given to wave energy, which can deliver up to 2 TW on a global scale. The aim of this thesis is to optimize the control system of the Symphony Wave Power Device, a point absorber, so that the energy delivered to the electrical grid is maximized and the device operates in a stable way. The device is described analytically in terms of its structural parts, its operating principle and all the forces that act on the moving part, which is called the floater. The device is in fact a mass-spring-damper system, for which the spring constant needs to be tuned according to the period of the incoming waves, so as to maximize the energy extraction. For this tuning, not only the actual mass of the floater but also the added equivalent mass due to the inertia of the inner turbine needs to be taken into account.
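
The resonance-tuning rule described here can be sketched in a few lines. The masses and wave period below are illustrative placeholders, not values from the thesis:

```python
import math

def tuned_spring_constant(floater_mass_kg, added_mass_kg, wave_period_s):
    """Spring constant that places the mass-spring-damper resonance at the
    incoming wave period: k = (m + m_added) * (2*pi / T)**2."""
    omega = 2.0 * math.pi / wave_period_s   # wave angular frequency [rad/s]
    return (floater_mass_kg + added_mass_kg) * omega ** 2

# Illustrative numbers only: a 10 t floater, 2.5 t added mass, 8 s waves.
k = tuned_spring_constant(10_000.0, 2_500.0, 8.0)
```

With this k, the undamped natural period 2π·sqrt((m + m_added)/k) equals the wave period, which is exactly the tuning condition described above.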

The whole device is modelled with the help of a Matlab/Simulink programme, in which simulations can be performed to observe the motion and make certain calculations. The existing PI controller, which makes use of an energy error, is briefly described, and the relevant calculations for the energy extraction are presented. The energy losses in the electrical parts also need to be taken into account.

To evaluate the current controller, it is necessary to calculate the upper boundary of the energy that the Symphony can obtain from a certain wave. This is done with the help of the GAMS software. The code, as well as the necessary assumptions and approximations, is presented in a mathematical way. The results, both in numerical and graphical form, provide good insight into what the ideal theoretical control system looks like.

Next, simulations are performed in the Matlab programme and comparisons with the GAMS results are made. The essential parts of the controller are tuned to their optimal values. Only a proportional part for the PI controller is needed and the energy should not flow in two directions.

The results show that, with correct tuning of the proportional part, as well as of the spring constant, the Symphony operates very well in all realistic sea states at the location where it will be placed. A high percentage of the theoretical energy boundary is being extracted from the waves and the motion of the floater is close to the optimal pattern. It is thus concluded that the existing controller has a remarkable performance, if regulated correctly. Finally, recommendations for future research on many levels are given.

This graduation study looks at the optimization of the offshore wind farm (OWF) installation procedure with a targeted completion date as a priority. In this thesis, an optimization approach is built around an ECN in-house software package developed for simulating various OWF installation strategies. Ultimately, the result of the dissertation is a method that provides added flexibility to simulate different OWF installation plans while still obtaining optimal installation costs. A concise literature review describes the significance of the current research and the potential that metaheuristic approaches bring to installation scheduling problems. Thus, the genetic algorithm is chosen as the optimization procedure for the current work. The objective of the optimization procedure throughout the research is minimizing the total installation cost. The target end date in this study is implemented in the form of a constraint, to steer the optimizer solution within the specified limit. A new methodology is proposed to generate an automated planning for the different installation procedures, to facilitate the link between the optimizer and the ECN tool. The project also considers the uncertainty introduced by weather and describes the considerations made to account for it. The new approach shows the potential of introducing an optimization procedure in OWF installation logistics, ultimately assisting in lowering the overall project costs.
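
The genetic-algorithm idea, with the target end date enforced as a penalty on the installation cost, can be illustrated in miniature. All data below (task types, durations, day rate, changeover and penalty costs) are invented for illustration; this is a generic sketch, not the ECN tool or its cost model:

```python
import random

# Illustrative data: (task type, duration in days); entirely assumed.
TASKS = [("foundation", 3), ("foundation", 3), ("turbine", 2),
         ("turbine", 2), ("cable", 4), ("cable", 4), ("turbine", 2)]
DAY_RATE = 100.0        # cost per offshore day (assumed)
CHANGEOVER_DAYS = 2     # extra days whenever the task type changes (assumed)
TARGET_DAYS = 24        # targeted completion date (assumed)
PENALTY = 500.0         # cost per day beyond the target (assumed)

def total_days(order):
    days = sum(TASKS[i][1] for i in order)
    days += CHANGEOVER_DAYS * sum(1 for a, b in zip(order, order[1:])
                                  if TASKS[a][0] != TASKS[b][0])
    return days

def cost(order):
    d = total_days(order)
    return DAY_RATE * d + PENALTY * max(0, d - TARGET_DAYS)

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    kept = p1[a:b]
    rest = [g for g in p2 if g not in kept]
    return rest[:a] + kept + rest[a:]

def genetic_schedule(generations=200, pop_size=30, seed=0):
    random.seed(seed)
    pop = [random.sample(range(len(TASKS)), len(TASKS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            child = crossover(*random.sample(elite, 2))
            if random.random() < 0.3:       # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = genetic_schedule()
```

Grouping tasks of the same type minimizes changeovers here, so the optimizer is pushed towards orderings that finish within the target window before the penalty term activates.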

In this report, new design water levels for Addicks and Barker Reservoir are calculated based on the discharge flowing into the reservoirs and the precipitation falling directly onto them, including data from Hurricane Harvey. These calculated design water levels are compared with the critical water levels derived from the failure mechanisms of the dams. This study shows that the original design water levels of the dams, based on the Probable Maximum Flood, are 2.83 m and 1.01 m higher than the critical water levels at which failure of the dams can occur due to piping, for Addicks and Barker Reservoir respectively. However, the maximum allowed water level currently maintained by the United States Army Corps of Engineers is 2.19 m and 2.46 m below the calculated critical water level. During Hurricane Harvey, these maximum allowed water levels were exceeded by 3.46 m and 1.93 m.

The damage to residential properties upstream and downstream of the reservoirs is minimized based on the distribution of the excess volume from the inflow of creeks and the precipitation onto the reservoirs. The ratio between the volume that should remain upstream of the dams and the volume discharged into the Buffalo Bayou is calculated for every considered event, with its duration and return period. The ratio of Addicks Reservoir is the dominant one and should be used for both reservoirs. Run-off alone already produces damage, especially for the 12-hour and 24-hour precipitation events, so the Addicks and Barker Reservoirs should not release discharge into the Buffalo Bayou for short durations. For events with a longer duration, it would cause less damage to open the outlets of the reservoirs than to keep them closed. However, if the water level in a reservoir exceeds the critical water level for piping, it is advised to discharge more to the downstream area to prevent breaching of the dams. Since the critical water level is reached for approximately 25% of the events at Addicks Reservoir, mitigation measures against piping should be taken to improve the minimization of damage. For Barker Reservoir, the critical water level is not reached in the optimization. During large events, people living upstream will be more affected by the flooding than people living downstream, since this optimization is based on minimizing the damage to residential properties.

The resulting set-up was not able to find a solution for the complete trajectory. The trajectory was therefore split into three phases: take-off, acceleration and pull-up. The first two stages were optimized successfully and resulted in payload capacities similar to those found in the literature with traditional methods. The final pull-up stage needs to be investigated further. Although this research has shown that global optimizers can be used for ascent trajectory optimization, further research is required before the methods can be applied effectively.","space plane; optimization; trajectory","en","master thesis","","","","","","","","","","","","","","" "uuid:6bc3aacf-a97b-44bc-82f9-e7fc542852ad","http://resolver.tudelft.nl/uuid:6bc3aacf-a97b-44bc-82f9-e7fc542852ad","Energy-Optimized Toed Walking on Flexible Soles for Humanoids","van der Planken, Jonathan (TU Delft Mechanical, Maritime and Materials Engineering)","Vallery, Heike (mentor); Delft University of Technology (degree granting institution)","2017","In this research the role of thick flexible soles in energy-efficient humanoid walking is analyzed. It is
hypothesized that the addition of underactuated degrees of freedom under the foot gives the robot the potential to execute a pseudo-passive walking motion, which yields a decrease in ankle torque and energy expenditure. Furthermore, it is hypothesized that, if these principles are applied to toed-gait walking patterns instead of flat-foot walking patterns, the decreases will be larger in magnitude. To isolate the effects of adding a sole, a toe joint, and both at the same time, four walking types are compared in simulation: flat-foot and toed-gait walking, both with and without a sole. To assess the cases without a sole, energy-optimized walking pattern generation is used. For walking on soles, the optimized walking patterns are used as input for a deformation estimator that calculates the sole compression. Simulation results show that the rolling motion of the sole reduces the ankle torque and the energy consumption. The results prove that the reduction effects are especially large for toed-gait walking, thereby validating both hypotheses.","flexible; Sole; Energy; optimization; gait; generation; humanoid; deformation; estimation; HRP-4; pseudo-passive; passive; walking","en","master thesis","","","","","","","","","","","","","","" "uuid:e6ab0d7e-5e43-4297-9369-cfe07c623eeb","http://resolver.tudelft.nl/uuid:e6ab0d7e-5e43-4297-9369-cfe07c623eeb","Quantized Distributed Optimization Schemes: A monotone operator approach","Jonkman, Jake (TU Delft Electrical Engineering, Mathematics and Computer Science)","Heusdens, Richard (mentor); Sherson, Thom (mentor); Delft University of Technology (degree granting institution)","2017","Recently, the effects of quantization on the Primal-Dual Method of Multipliers were studied.

In this thesis, we have used this method as an example to further investigate the effects of quantization on distributed optimization schemes in a much broader sense. Using monotone operator theory, the effect of quantization on all distributed optimization algorithms that can be cast as a monotone operator was researched for two different problem subclasses. The averaging problem was used as an example of a quadratic problem, while the Gaussian channel capacity problem was an example of the non-linear problem subclass. A fixed bit rate quantizer was used in combination with a dynamic cell width, to analyse the robustness of distributed optimization schemes against quantization effects. In particular, we have shown that for practical implementations it is possible to incorporate fixed bit rate quantization with dynamic cell width in a distributed optimization algorithm without loss of performance for both problem classes.
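
The idea of a fixed bit rate quantizer with a dynamic (shrinking) cell width can be illustrated with a toy averaging scheme on a ring network. This sketch is ours, under simplifying assumptions, and is not the PDMM algorithm or the analysis from the thesis:

```python
import numpy as np

def quantize(x, width, bits):
    """Fixed-bit-rate uniform midpoint quantizer: 2**bits cells of the
    given width, centred on zero; inputs are clipped to the covered range."""
    levels = 2 ** bits
    half = levels * width / 2.0
    idx = np.clip(np.floor((x + half) / width), 0, levels - 1)
    return (idx + 0.5) * width - half

def quantized_ring_average(values, bits=4, width0=1.0, shrink=0.8, iters=60):
    """Toy averaging on a ring network. Each node broadcasts a quantized
    delta against its previously transmitted value, and the quantizer cell
    width shrinks geometrically, mimicking a dynamic cell width."""
    x = np.asarray(values, dtype=float)
    r = np.zeros_like(x)                    # reconstruction known to neighbours
    for k in range(iters):
        w = width0 * shrink ** k            # dynamic cell width
        r = r + quantize(x - r, w, bits)    # transmit quantized innovation
        # Diffusion update towards the neighbours' reconstructed values;
        # the ring Laplacian structure preserves the global mean exactly.
        x = x + 0.25 * (np.roll(r, 1) + np.roll(r, -1) - 2.0 * r)
    return x
```

Because the update sums to zero over the ring, the network average is preserved despite quantization, while the shrinking cell width keeps the quantization error from limiting the final consensus accuracy.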

First of all, the mesh should contain elements of good shapes and sizes. In addition, the sharp interfaces should coincide with the edges of the elements instead of intersecting them. These requirements are formulated as an optimization problem with three terms, measuring the difference between the actual and prescribed scaling field, the shape quality, and the area between prescribed curves and the nearest triangle edges. The solution of the optimization problem should provide the desired mesh. The mesh generator MESH2D was applied to obtain an initial mesh. The Matlab function minFunc was used to search for the minimum of the constructed objective function. Three weights balance the three terms in the objective function. For complicated models, these weights have to be chosen carefully to produce a reasonable mesh.","triangulation; optimization; seismic; finite-element","en","master thesis","","","","","","","","","","","","Applied Geophysics","","" "uuid:dabadd38-19f4-413e-b597-e8777a9bbb88","http://resolver.tudelft.nl/uuid:dabadd38-19f4-413e-b597-e8777a9bbb88","Using Topology Optimization for Actuator Placement within Motion Systems","Broxterman, Stefan (TU Delft Mechanical, Maritime and Materials Engineering)","Langelaar, Matthijs (mentor); Delft University of Technology (degree granting institution)","2017","Topology optimization is a powerful approach for generating optimal designs that cannot be obtained using conventional optimization methods. Improving structural characteristics by changing the internal topology of a design domain has fascinated scientists and engineers for years. Topology optimization can be described as the distribution of a given amount of material in a specified design domain, which is subjected to certain loading and boundary conditions. This domain can then be optimized to minimize specified objectives, for example compliance. For static problems, topology optimization is extensively used.
The distribution of material into void and solid regions can be used to solve several problems within the mechanical domain. This method of optimization is, however, also used to optimize structures with respect to their resonant dynamics.
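
The compliance-minimization problem mentioned here is commonly written in the standard density-based form; the notation below is generic and not taken from this thesis:

```latex
\begin{aligned}
\min_{\boldsymbol{\rho}}\quad & c(\boldsymbol{\rho}) = \mathbf{U}^{\mathsf T}\mathbf{K}(\boldsymbol{\rho})\,\mathbf{U} \\
\text{s.t.}\quad & \mathbf{K}(\boldsymbol{\rho})\,\mathbf{U} = \mathbf{F}, \\
& V(\boldsymbol{\rho})/V_0 \le f, \\
& 0 < \rho_{\min} \le \rho_e \le 1 \quad \forall e,
\end{aligned}
```

where the element densities ρ_e are the design variables, K is the stiffness matrix, F the load vector, U the displacement solution, and f the allowed volume fraction.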

Design of actuator placement is used to determine the optimal actuator layout for a given objective, for example reducing responses. Combined with topology optimization, both sets of design variables can influence each other and be optimized towards the desired behavior. This is done in a static domain: when material is removed, the force layout is updated, which in turn influences the material distribution again. It is shown that combining these design variables in the optimization process contributes to a better result; weight reduction can be achieved while large deformations are preserved.

Design of actuator placement combined with topology optimization is also implemented in a dynamic domain. Since topology changes result in frequency response changes, the force placement is more sensitive there. On the other hand, forces can be placed in a smart way to ensure that some mode shapes are not excited, whereas others are. By allowing both positive and negative forces, they can even be used to counteract or minimize certain modal responses. When implementing, for example, a harmonic excitation, the weight and the total force can be linked together to ensure that the accelerations remain feasible. A weight reduction can thus lead to a force reduction, which in turn leads to smaller deformations. Especially in the high-precision industry, smart placement of actuators, combined with weight reduction, can be very helpful. The combination of these phenomena could provide new insight into creating accurate wafer stages.

tot) is considered as a single objective function. After optimizing the objectives with GA, the optimal design parameters of three types of MC LED modules are determined. The results show that the thickness of the MCPCB has a stronger influence on the total thermal resistance than the other parameters. In addition, a sensitivity analysis is performed based on the optimum data. It reveals that R

Two models are compared: in Model 1 the system is free to accept or reject requests, while in Model 2 it is decided per zone whether or not all rides are accepted. The taxi service is first applied on a small scale, after which some adjustments are made so that the model can also be applied on a large scale. In the small-scale application a loss is always made, because the ratio of taxis per zone is very high. In the large-scale application, the model corresponds closely to Liang's model when there are many taxis, but less so for smaller numbers of taxis, because the generation of rides in Liang's model is done less homogeneously. The optimal number of taxis to use is always 20 or 40.","taxi; optimization; taxiservice; Autonomous Vehicles; Modelling","nl","bachelor thesis","","","","","","","","","","","","","","" "uuid:e6fc3865-531f-4ea9-aeff-e2ef923ae36f","http://resolver.tudelft.nl/uuid:e6fc3865-531f-4ea9-aeff-e2ef923ae36f","Modeling, design and optimization of flapping wings for efficient hovering flight","Wang, Q. (TU Delft Structural Optimization and Mechanics)","van Keulen, A. (promotor); Goosen, J.F.L. (copromotor); Delft University of Technology (degree granting institution)","2017","Inspired by insect flight, flapping wing micro air vehicles (FWMAVs) continue to attract attention from the scientific community. One of the design objectives is to reproduce the high power efficiency of insect flight. However, there is no clear answer yet to the question of how to design flapping wings and their kinematics for power-efficient hovering flight. In this thesis, we aim to answer this research question from the perspectives of wing modeling, design and optimization.

Quasi-steady aerodynamic models play an important role in evaluating aerodynamic performance and in designing and optimizing flapping wings. In Chapter 2, we present a predictive quasi-steady model that includes four aerodynamic loading terms. The loads result from the wing's translation, its rotation, their coupling, and the added-mass effect. The necessity of including all four of these terms in a quasi-steady model to predict both the aerodynamic force and the aerodynamic torque is demonstrated. Validations indicate good accuracy in predicting the center of pressure, the aerodynamic loads and the passive pitching motion for various Reynolds numbers. Moreover, compared to existing quasi-steady models, the proposed model does not rely on any empirical parameters and is therefore more predictive, which enables its application to the shape and kinematics optimization of flapping wings.
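
The four loading terms can be summarized compactly (the notation here is ours, not the thesis's):

```latex
\mathbf{F}_{\text{aero}} = \mathbf{F}_{\text{trans}} + \mathbf{F}_{\text{rot}}
  + \mathbf{F}_{\text{coupl}} + \mathbf{F}_{\text{am}},
```

with an analogous decomposition for the aerodynamic torque; the first three terms arise from the wing's translation, its rotation and their coupling, and the last from the added-mass effect.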

For flapping wings with passive pitching motion, a shift in the pitching axis location alters the aerodynamic loads, which in turn change the passive pitching motion and the flight efficiency. Therefore, in Chapter 3, we investigate the optimal pitching axis location for flapping wings to maximize the power efficiency during hovering flight. Optimization results show that the optimal pitching axis is located between the leading edge and the mid-chord line, which shows a close resemblance to insect wings. An optimal pitching axis can save up to 33% of power during hovering flight when compared to optimized traditional wings used by most of the flapping wing micro air vehicles (FWMAVs). Traditional wings typically use the straight leading edge as the pitching axis. In addition, the optimized pitching axis enables the drive system to recycle more energy during the deceleration phases as compared to their counterparts. This observation underlines the particular importance of the wing pitching axis location for energy-efficient FWMAVs when using kinetic energy recovery drive systems.

The presence of wing twist can alter the aerodynamic performance and power efficiency of flapping wings by changing the angle of attack. In order to study the optimal twist of flapping wings for hovering flight, we propose a computationally efficient fluid-structure interaction (FSI) model in Chapter 4. The model uses an analytical twist model and the quasi-steady aerodynamic model introduced in Chapter 2 for the structural and aerodynamic analysis, respectively. Based on the FSI model, we optimize the twist of a rectangular wing by minimizing the power consumption during hovering flight. The power efficiency of the optimized twistable wings is compared with corresponding optimized rigid wings. It is shown that the optimized twistable wings cannot dramatically outperform the optimized rigid wings in terms of power efficiency, unless the pitching amplitude at the wing root is limited. When this amplitude decreases, the optimized twistable wings can always maintain high power efficiency by introducing a certain twist, while the optimized rigid wings need more power for hovering.

Considering the high impact of the root stiffness on flapping kinematics and power consumption, we present an active hinge design which uses electrostatic force to change the hinge stiffness in Chapter 5. The hinge is realized by stacking three conducting spring steel layers which are separated by dielectric Mylar films. The theoretical model shows that the stacked layers can switch from slipping with respect to each other to sticking together when the resultant electrostatic force between layers, which can be controlled by the applied voltage, is above a threshold value. The switch from slipping to sticking will result in a dramatic increase of the hinge stiffness (about 9x). Therefore, a short duration of the sticking can still lead to a considerable change in the passive pitching motion. Experimental results successfully show the decrease of the pitching amplitude with the increase of the applied voltage. Flight control based on the electrostatic force can be very power-efficient since there is ideally no power consumption due to the control operations.
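
The slip-to-stick threshold can be sketched with the standard parallel-plate expression for the electrostatic attraction between layers, F = ε_r ε_0 A V² / (2 d²). The numbers below (film area, gap, relative permittivity of the Mylar film, threshold force) are illustrative assumptions, not the thesis's hinge parameters:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def electrostatic_force(voltage_v, area_m2, gap_m, eps_r):
    """Parallel-plate attraction: F = eps_r * eps0 * A * V^2 / (2 * d^2)."""
    return eps_r * EPS0 * area_m2 * voltage_v ** 2 / (2.0 * gap_m ** 2)

def sticking_voltage(threshold_n, area_m2, gap_m, eps_r):
    """Voltage at which the clamping force reaches the slip/stick threshold."""
    return math.sqrt(2.0 * threshold_n * gap_m ** 2 / (eps_r * EPS0 * area_m2))

# Illustrative hinge: 20 mm^2 overlap, 6 um dielectric film (eps_r ~ 3.1
# assumed for Mylar), 10 mN slip/stick threshold force.
v_stick = sticking_voltage(10e-3, 20e-6, 6e-6, 3.1)
```

The quadratic dependence on voltage is what makes the stiffness switch sharp: below the threshold voltage the layers slip, above it the clamping force grows rapidly and the stack sticks.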

In Chapter 6, we look back on and discuss the most important aspects related to the modeling, design and optimization of flapping wings for efficient hovering flight. In Chapter 7, the overall conclusions are drawn and recommendations for further study are provided.","flapping wing; passive pitching; pitching axis; aerodynamic model; power efficiency; optimization","en","doctoral thesis","","978-94-92516-57-2","","","","","","","","","","","","" "uuid:6b07b0c4-5534-42e4-9067-0dcb23fc0646","http://resolver.tudelft.nl/uuid:6b07b0c4-5534-42e4-9067-0dcb23fc0646","A Many-objective Tactical Stand Allocation: Stakeholder Trade-offs and Performance Planning: A London Heathrow Airport Case Study","Földes, Gergely István (TU Delft Aerospace Engineering)","Roling, Paul (mentor); Verhees, Martijn (graduation committee); Melkert, Joris (graduation committee); Curran, Richard (mentor); Delft University of Technology (degree granting institution)","2017","Airports are highly complex systems that can generate economic growth on their own. Accordingly, airports should take proactive actions to create a status quo between the stakeholders (the airport itself, airlines, passengers) in the tactical planning of the aircraft stand allocation. Namely, the harmonization between the stakeholders’ interests is either considered reactively or not at all, so one cannot be certain that the objectives of the stakeholders are met. For that reason, a methodology is developed using Weight Space Search on a many-objective tactical stand allocation model to establish a reference performance set from which decision alternatives are created using the k-means clustering algorithm. Decision makers can then proactively assess and choose decision alternatives on the performance of the tactical stand allocation to identify how the different stakeholders can achieve their goals in (partial) synergy. The airport can also apply the concept of empathetic negotiation to establish a favorable status quo.","airport; tactical stand allocation; planning; optimization","en","master thesis","","","","","","","2019-06-23","","","","","","" "uuid:ef49b460-a456-433e-841a-acb793febc53","http://resolver.tudelft.nl/uuid:ef49b460-a456-433e-841a-acb793febc53","Optimization of the skidded load-out process","Verhoef, Nick (TU Delft Mechanical, Maritime and Materials Engineering)","Kaminski, Mirek (mentor); van Woerkom, Paul (graduation committee); Bos, Reinier (graduation committee); van Kester, Maurice (graduation committee); Delft University of Technology (degree granting institution)","2017","The structures which HMC installs offshore are fabricated onshore and subsequently moved onto a barge or ship, seafastened and then transported to the offshore location. The process of moving a structure from the onshore quayside to the barge or ship is called the load-out. This load-out can be performed by lifting, skidding or using a trailer (SPMT). This thesis research focused only on a skidded load-out onto a barge.

During the load-out the weight of the jacket or topside is gradually transferred from the quay to the barge. The barge gradually takes more of the load, so ballast water needs to be continuously pumped in or discharged depending on the location of the structure and the location of the ballast tank concerned. Improper ballasting during this process will cause the alignment between the quay and the barge to be disrupted, which in turn causes peak loads in the topside or jacket and the barge. This raises the question of whether more suitable ballasting methods, or a structural solution, exist to lower these peak loads. The effects of quayside stiffness are also modelled, along with the best method to model this stiffness.

Therefore, a 2-D representation of the entire load-out is made. This numerical model is built using the finite element method in MATLAB. A base case load-out of a topside is applied to this model, which is then used to research the optimization of the ballast configuration. Several different criteria for the optimization were tested, and their effects on the forces during the load-out were researched and quantified. The structural solution of relocating the skidbeams to an area of lower deck stiffness was also tested and the results studied. The effects of the quayside stiffness and modelling methods were also quantified using the 2-D MATLAB model.

The conclusion derived from the optimizations is that there are other ballast configurations which perform better in reducing the peak forces experienced during the load-out. The key to these optimizations is that they keep the barge-quay alignment as perfect as possible. If a critical element is present in the load-out, the ballast configuration can be adjusted to lower the forces in this specific element. The results of the simulation in which the skidbeams were relocated show that this approach has no beneficial effects in reducing the forces during the load-out, mainly due to the presence of the transverse bulkheads in the barge. Furthermore, for the modelling of the quayside it was proven that, especially when using a low-stiffness quayside, modelling the quayside without taking into account the foundation layer stiffness is inaccurate and can lead to lower forces in the model than those occurring in reality.","load-out; optimization; skidded; ballast","en","master thesis","","","","","","","","","","","","","","" "uuid:696e112f-697b-49e8-a524-c5efbe0663da","http://resolver.tudelft.nl/uuid:696e112f-697b-49e8-a524-c5efbe0663da","Support Structure Optimization: On the use of load estimations for time efficient optimization of monopile support structures of offshore wind turbines","Maljaars, J.L.","Langelaar, M. (mentor)","2017","Over the years, the installed capacity of offshore wind turbines has been increasing rapidly. However, the Levelized Cost of Energy (LCOE) is still higher than the LCOE of traditional energy production methods like nuclear power or energy from coal or gas. This research focuses on a further decrease of the LCOE, by minimizing the mass of a monopile support structure of a wind turbine. This is done in a so-called integrated way: optimizing the tower and the foundation together. The design variables used in this research are the wall thickness and the diameter of every +-3 meter section. These sections can be either cylindrical or conical. 
To simplify the problem, a parametrization of the designs is used, which reduces the number of design variables from around 180 to 28. This is checked against existing designs. Due to the interaction between, most importantly, the first eigenfrequency and eigenmode, the diameter and the waves, it is expected that several local optima exist. Therefore, the proposed optimization strategy is a Particle Swarm Optimization which can be used for a global search for an initial position for a gradient-based optimization to find a local optimum, which is possibly the global optimum. In this research the focus is on the Particle Swarm Optimization. The constraints of the optimization are fatigue, buckling, the maximum deflection of the monopile, the angle of the conical parts and the D/t-ratio of the monopile. These are used in the initial design of support structures, so that the optimized designs are realistic. To take the constraints into account, the objective is taken as the mass extended by the penalized constraints. To reduce the optimization time, the evaluations of the objective function are done by using load estimations instead of extensive load calculations. Several methods are compared on a theoretical basis: Response Surface Methodology, Radial Basis Functions, Kriging, Support Vector Regression, Multivariate Adaptive Regression Splines and Non-Uniform Rational B-Splines. The performance of a selection of methods is checked on the problem, to come up with reliable estimation methods. To improve the accuracy of the estimations, interaction between the Particle Swarm Optimization and the estimators is proposed via estimator updating. During this research, an optimization tool for monopile support structures is developed. This tool is able to use calculations or estimations of the loads. In order to study the behaviour of the proposed optimization approach and to compare it with the traditional design approach, several case studies are formulated based on a realistic design problem. 
These are optimized with the optimization tool. Using a constant tower diameter, the optimization tool is able to reduce the mass of the support structure by 13%. Using the tower diameter also as a design variable in the optimization gives a further mass reduction of 4%. Several test runs are done to check whether a global optimum is found or not.","wind energy; wind turbine; offshore wind turbine; support structure; optimization; estimators; radial basis functions; kriging; support vector regression; nurbs; response surface methodology; estimator updating; integrated optimization","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Precision & Microsystems Engineering (PME)","","Engineering Mechanics","","" "uuid:642f1076-2f8a-4ad3-91eb-ea7b6c40f2df","http://resolver.tudelft.nl/uuid:642f1076-2f8a-4ad3-91eb-ea7b6c40f2df","Tractable Reserve Scheduling Formulations for Alternating Current Power Grids with Uncertain Generation","ter Haar, O.A.","Keviczky, T. (mentor); Rostampour Samarin, V. (mentor)","2017","The increasing penetration of wind power generation introduces uncertainty in the behaviour of electric power grids. This work is concerned with the problem of day-ahead reserve scheduling (RS) for power systems with high levels of wind power penetration, and proposes a novel set-up that incorporates an alternating current (AC) Optimal Power Flow (OPF) formulation. The OPF-RS problem is non-convex and in general hard to solve. Using a convex relaxation technique, we focus on systems with uncertain generation and formulate a chance-constrained optimization problem to determine the minimum cost of production and reserves. Following a randomization technique, we approximate the chance constraints and provide a-priori feasibility guarantees in a probabilistic sense. 
However, the resulting problem is computationally intractable, because the computational complexity grows polynomially with respect to the size of the power network and the scheduling horizon. In this thesis, we first use the so-called scenario approach to approximate a convex set which almost surely contains the probability mass distribution of the underlying random events. We rely on the special property of reserve scheduling problems which leads to linear constraint functions with respect to the uncertain parameters. We can therefore formulate a robust problem for only the vertices of the approximated set. Using the proposed approach, the number of scenarios is reduced significantly, which is beneficial for tractability. Such a formulation requires the power network state to be feasible only for the vertices of the convex approximated set. To relax this requirement even further, we develop a novel RS formulation by considering the network state as a non-linear parametrization function of the uncertainty. By using a conic combination of matrices, only three positive semidefinite constraints per time step are considered. Unlike existing works in RS, our proposed parametrization has a practical meaning and is directly related to the distribution of reserve power. Such a reformulation yields a reduction in the computational complexity of OPF-RS problems. Finally, we extend our results to a more realistic size of power grids, using the sparsity pattern and spatiality (multi-area) decomposition of the power networks, leading to a decomposed semidefinite programming (SDP) problem. To solve the SDP in a distributed setting, we formulate a distributed consensus optimization problem, and then the alternating direction method of multipliers (ADMM) algorithm is employed to coordinate local OPF-RS problems between neighbouring areas. 
The theoretical developments in the aforementioned cases were validated on a realistic benchmark system, and a discussion on the tractability of the resulting optimization problems by means of computational time analysis is presented.","power system; optimization; uncertainty; renewable energy; wind power generation; reserve scheduling; optimal power flow; reserve requirements; scenario approach; alternating direction method of multipliers; distributed solving; vertex enumeration; conic parametrization","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control (DCSC)","","","","" "uuid:faa2c6dc-e5ca-4486-b607-d963f650dad2","http://resolver.tudelft.nl/uuid:faa2c6dc-e5ca-4486-b607-d963f650dad2","Improved Flexible Runway Use Modeling: A Multi-Objective Optimization Concerning Pairwise RECAT-EU Separation Minima, Reduced Noise Annoyance and Fuel Consumption at London Heathrow","van der Meijden, S.A.","Roling, P.C. (mentor); Visser, H.G. (mentor)","2017","A minimization of the disturbance caused by aircraft noise events and a reduction of fuel consumption during the initial and final phases of flight: these are the two objectives that play an important role in the Flexible Runway Allocation Model. By taking into account fuel consumption alongside noise annoyance, this model makes it possible to analyze and optimize runway allocation from a broader perspective. This study aims to identify the improvements that can be made with respect to the initial Flexible Runway Use Model. Accordingly, these enhancements should be implemented and quantified in order to establish the Improved Flexible Runway Allocation Model. The improvements that are found in this study relate to both objectives in the mixed integer linear programming optimization as well as to particular linear constraints. 
A major contribution is made to the runway occupancy constraint, which transitions from a single-aircraft computational method to a pairwise flight separation approach based on RECAT-EU. The proposed Improved Flexible Runway Allocation Model is applied to a case study that represents daily operations at London Heathrow Airport. This model shows that, by assigning a small delay to inbound and/or outbound flights, significant contributions can be made with respect to noise annoyance in the vicinity of the airport as well as the overall fuel consumption from the airline’s perspective. By allowing opposite direction operations, flexibility is added to the use of the airport’s runway ends, which results in a more efficient utilization of the available capacity. The results of this analysis are visualized by means of a Pareto front, indicating the Pareto optimal solutions to a runway allocation assignment based on a differentiation in objective weights.","runway; allocation; capacity; MILP; Linear Programming; Heathrow; London; optimization; fuel; noise; noise annoyance; Pareto; RECAT-EU; separation minima; opposite direction operations; flexible; flexible runway allocation","en","master thesis","","","","","","","","","Aerospace Engineering","Control & Operations","","Air Transport & Operations (ATO)","","" "uuid:03af3d1b-98d8-4c14-99ff-a448b4f4b2d0","http://resolver.tudelft.nl/uuid:03af3d1b-98d8-4c14-99ff-a448b4f4b2d0","Models, Solutions and Relaxations of the Asymmetrical Capacitated Vehicle Routing Problem","Kerckhoffs, L.","Aardal, K.I. (mentor)","2017","In this thesis, we take a look at the Asymmetrical Capacitated Vehicle Routing Problem (ACVRP). We examine different possible formulations for the problem and choose one based on the ease of implementation, the computation speed of solving it, and the available relaxations. The problem and its relaxations are modeled and solved using AIMMS, commercial modeling software. 
Using the methods described above, we model different cases and instances of the problem using a Two-Index Vehicle Flow formulation. We apply an Assignment Problem relaxation and a Linear Programming relaxation to each of the instances. We find that the problem is easiest to solve when all customers are relatively close to each other (as opposed to being placed in separate clusters that are relatively far from each other), and that the LP relaxation gives bounds of fairly good quality in a short time.","TSP; Vehicle Routing; VRP; ACVRP; optimization; online supermarket; relaxation","en","bachelor thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","Optimization","","" "uuid:69719e2d-5649-47da-a39e-e9107487eab1","http://resolver.tudelft.nl/uuid:69719e2d-5649-47da-a39e-e9107487eab1","Creating an optimal OR schedule regarding downstream resources","Carlier, M.","Van Essen, T. (mentor)","2017","A high percentage of hospital admissions is due to surgical interventions. The operating theatre, which holds the operating rooms (ORs), is therefore one of the key resources in hospitals. Managing the operating theatre and finding an optimal OR schedule is complex due to the many factors that influence it. Scheduling a surgery in an OR influences downstream facilities like the post-anaesthesia care unit, intensive care unit and general patient wards. This research was conducted at Leiden University Medical Centre (LUMC), an academic teaching hospital in Leiden, the Netherlands. During the week, the LUMC experiences a large variation in bed occupancy at the patient wards due to the way surgeries are scheduled. This large variation in bed occupancy causes surgeries to be cancelled, because there are no beds available at the ward. Because the OR theatre is such an expensive resource, we want to find a schedule that utilises the OR optimally during opening times. 
In this research, we develop a clustering method to cluster surgical procedures into surgery groups based on surgery duration and the length of stay. Then, we extend a model that analytically determines the patient distributions over the wards and intensive care for a given OR schedule. We define a mixed integer programming model with the objective to maximise the OR utilisation and minimise the variation in bed occupancy at the wards and intensive care. The model produces an OR schedule with the defined surgery groups assigned to days in the OR. We use two different methods to solve the model: a global approach and a local search heuristic, i.e., simulated annealing. The model has one nonlinear constraint and a complex objective function. Therefore, we linearise the constraint and the objective function, which results in a mixed integer linear program that is proven to be NP-hard. Both approaches are tested on a dataset provided by the LUMC. Furthermore, several scenarios are evaluated. We conclude that the mixed integer linear programming method performs better and faster than the simulated annealing procedure. To obtain an even better solution, it is possible to use a combination of both. By using this method, the OR utilisation of the LUMC can improve by 11% and the variation in bed occupancy can be decreased by 80%.","master surgery schedule; Operating room scheduling; bed occupancy; mixed integer linear programming; simulated annealing; length of stay; optimization; hospital","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Applied Mathematics","","","","" "uuid:5321a5d4-ab40-4403-b09c-70c617abfc77","http://resolver.tudelft.nl/uuid:5321a5d4-ab40-4403-b09c-70c617abfc77","Optimization of Island Electricity System: Transition to a sustainable electricity supply system on islands through the implementation of a hybrid system including ocean energy technologies","van Velzen, L.","Blok, K. (mentor)","2017","Climate change without adequate countermeasures has become one of humanity's greatest threats. Energy production by means of renewable energy sources is therefore one of the crucial measures that will play a paramount role in reducing the pollutant emissions of fossil fuel dependency. Small islands in particular are an exemplary case of extraordinary dependence on oil, their energy systems often being entirely dependent on diesel generators. The relatively high cost of sustaining this practice, in combination with the geoeconomic properties of islands, provides a unique incentive for the transition to renewable energy. By definition, islands are surrounded by water, making them highly vulnerable to the effects of climate change. While the surrounding water poses a risk, it also provides a vast set of possibilities. Harnessing energy from waves, tides and the difference in seawater temperatures (OTEC) are just some of the examples. In this thesis, the effect of ocean energy integration is investigated. A simulation and optimization model of the electricity supply system is developed. A multi-objective genetic algorithm optimization regarding cost (LCOE) and renewable energy integration is performed. The model covers PV solar, wind, tidal, wave and OTEC, as well as battery storage, as components of a renewable energy system. The resulting model is applied to two case study islands (Shetland and Aruba), and the effect of the hybrid system including ocean energy technologies is determined. The cost-optimal system was found to produce energy with an LCOE below the conventional fossil fuel energy cost. This corresponds to a renewable energy share of approximately 65%, consisting solely of wind energy. The cost was determined to have a significant influence on the system configuration. 
Currently, due to their high cost of energy, a consequence of their pre-commercial stage, ocean energy sources are only added to the energy mix at high renewable energy shares (above 75% renewable coverage). The hybrid systems including the ocean energy sources displayed an evenly spread energy production. Based on this study, the future of integrating ocean energy provides an encouraging outlook. Cost will need to be reduced further for ocean energy to become economically viable; with the right investments in ocean energy, this process can be accelerated.","Ocean energy; renewable energy; electricity system; optimization; simulation","en","master thesis","","","","","","","","","Technology, Policy and Management","Engineering, Systems and Services","","","","" "uuid:1f228e88-c7e7-431d-96af-df1abb195edd","http://resolver.tudelft.nl/uuid:1f228e88-c7e7-431d-96af-df1abb195edd","Maintenance Optimization of Tidal Energy Arrays: Design of a Probabilistic Decision Support Tool for Optimizing the Maintenance Policy","De Nie, R.C.","Wolfert, A.R.M. (mentor); Jarquin Laguna, A. (mentor); Leontaris, G. (mentor); Hoogendoorn, C.F.D. (mentor)","2016","The increasing demand for electricity offers many opportunities for renewable energy production, of which one alternative is tidal stream energy. Several feasibility studies have shown that the global tidal stream energy potential can contribute significantly to producing renewable energy. This tidal energy can mostly be produced at the 'tidal hotspots', where the kinetic energy density is very high due to fast flowing tidal currents. However, the tidal technology is not yet cost competitive in comparison with other renewables, such as photovoltaic and wind energy, which is why further cost reductions and efficiency improvements are to be achieved. Interviews with existing tidal system developers provided insight into the cost breakdown and showed that maintenance accounts for a significant share of the total project costs. 
This is due to the harsh environmental conditions that impose a large uncertainty, which increases the complexity of selecting an optimal maintenance policy. Damen Shipyards has shown interest in entering the tidal industry and is exploring the cost reduction possibilities by developing its own tidal system. This thesis contributes to Damen Shipyards' research by performing a time series analysis of a tidal hotspot to identify and model the multivariate dependence of the governing environmental phenomena. A probabilistic decision support tool is developed for selecting the optimal maintenance policy. The decision support tool primarily determines when and to what extent corrective maintenance should be performed. The corresponding overall maintenance costs are also calculated and secondary information regarding the activity duration is given. By means of the probabilistic approach, which captures the weather window uncertainty due to the environmental randomness, the results can be interpreted by the user based on the desired confidence level. In this research the weather window uncertainty is implemented by simulating a large number of random, but statistically identical environmental time series, which are based on available measurement data of the tidal field at EMEC, located at the Orkney Islands in the United Kingdom. The multivariate dependence between the significant wave height, wave peak period, wind velocity and current velocity is identified in the measurement set and fully represented in the generated time series by means of a pair-copula construction simulation. The requirement of time independence cannot be met in the original dataset, which is why a new simulation approach is developed. This method consists of a sequential simulation of pair-copula constructions to include both the time dependence and multivariate dependence in the synthetic time series. 
Simulation of the set of synthetic time series proved to be more effective for describing uncertainty than exclusively using the original dataset, due to the possibility of including more environmental realizations. The tidal array is represented as a semi-Markov decision process, which captures all costs and transition processes related to the deterioration and maintenance decisions. A policy optimization algorithm can then be used to find the optimal set of decisions and the corresponding maintenance cost rate, which includes both the direct and indirect maintenance costs. The novel tidal system design of Damen Shipyards is then plugged into the decision support tool to determine the optimal maintenance policy and maintenance costs. The effects of different levels of detail for representing the tidal system have been compared, and the benefits in terms of cost reductions of using this decision support tool with respect to less advanced approaches have been highlighted. Furthermore, multiple scenarios have been elaborated to identify the sensitivities in the cases of accounting for unreliability in the failure rates, varying the number of platforms in the array and including the economic fluctuations of the maintenance vessel day rates.","probabilistic; tidal energy; maintenance policy; optimization; semi-Markov decision process; copula; multivariate dependence; decision support tool","en","master thesis","","","","","","","2021-12-16","Mechanical, Maritime and Materials Engineering","Offshore & Dredging Engineering","","","","" "uuid:b10a0d00-3949-4122-a3db-6996d5596afb","http://resolver.tudelft.nl/uuid:b10a0d00-3949-4122-a3db-6996d5596afb","Supporting MDO through dynamic workflow (re)generation","Augustinus, R.","Hoogreef, M.F.M. (mentor)","2016","The use of advancements in computing technology has enabled designers to perform more thorough and more detailed design studies. 
Multidisciplinary Design Optimization (MDO) architectures provide users with guidelines on how to structure their MDO problem, including the linking of disciplines and how to perform the optimization. However, complex MDO problems can consist of tens of disciplines and hundreds of design variables. Thus, the set-up of these problems can be complex and time consuming. In an attempt to reduce the time required for and complexity of this set-up, the main goal in this thesis is: ""To develop and demonstrate a methodology for automatic workflow (re)generation to support MDO"". The method to fulfill this goal consists of three main steps. The first is the automatic generation of microworkflows: workflows representing the different disciplines of the problem. The user needs to specify the inputs, outputs and operations, after which the workflows are automatically generated. The second step involves the automatic storage of workflows. Workflows are stored in a graph database, allowing the addition of semantics to the data. Adding semantics allows a reasoner to understand what the data means, enabling the inference of data not explicitly defined. OWL (Web Ontology Language) ontologies are used to supply structure to the workflow data and add semantics. In addition, materialization scripts are present to regenerate stored workflows. The final step of the implementation involves the automatic generation of simulation workflows according to different MDO architectures. This generation involves the materialization and adjustment of microworkflows and the creation of a ‘higher level’ workflow that links the disciplines and performs the optimization. The implementation of the automatic architecture generation has been validated using three case studies of varying complexity, number of disciplines and discipline couplings. 
These case studies have shown a reduction of 93 to 98% in the time spent on the generation of simulation workflows representing the problem using an MDO architecture. In addition, the approach reduces the required user expertise and minimizes the amount of information the user needs to provide.","automation; MDO; MDO architectures; simulation workflows; optimization; PIDO","en","master thesis","","","","","","","","","Aerospace Engineering","Flight Performance and Propulsion","","","","" "uuid:87dc296d-57f4-4506-829b-2c1d33982e15","http://resolver.tudelft.nl/uuid:87dc296d-57f4-4506-829b-2c1d33982e15","Fast MPC Solvers for Systems with Hard Real-Time Constraints","Zhang, X.","Keviczky, T. (mentor); Ferranti, L. (mentor)","2016","Model predictive control (MPC) is an advanced control technique that offers an elegant framework to solve a wide range of control problems (regulation, tracking, supervision, etc.) and handle constraints on the plant. The control objectives and constraints are usually formulated as an optimization problem that the MPC controller has to solve (either offline or online) to return the control command for the plant. This master thesis proposes a novel primal-dual interior-point (PDIP) method for solving quadratic programming problems with linear inequality constraints that typically arise from MPC applications. Convergence of PDIP is studied in both the primal and dual frameworks. We show that the solver converges quadratically to a suboptimal solution of the MPC problem. PDIP solvers rely on two phases: the damped and the pure Newton phases. Compared to the state-of-the-art PDIP method, this new solver replaces the initial (linearly convergent) damped Newton phase (usually used to compute a medium-accuracy solution) with a dual solver based on Nesterov's fast gradient scheme (DFG) that converges super-linearly to a medium-accuracy solution. 
The switching strategy to the pure Newton phase, compared to the state of the art, is computed in the dual space to exploit the dual information provided by the DFG in the first phase. Removing the damped Newton phase has the additional advantage that the solver saves the computational effort required by backtracking line search. The effectiveness of the proposed solver is demonstrated by simulating it on a 2-dimensional discrete-time unstable system.","optimization; predictive control; model-based control; suboptimal control","en","master thesis","","","","","","","2016-12-02","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control (DCSC)","","","","" "uuid:7f63baf4-98e4-4b79-9307-577299d843e6","http://resolver.tudelft.nl/uuid:7f63baf4-98e4-4b79-9307-577299d843e6","Local Alternative for Energy Supply: Performance Assessment of Integrated Community Energy Systems","Koirala, B.P. (TU Delft Energy & Industry); Chaves Avila, J.P. (Comillas Pontifical University); Gomez, T. (Comillas Pontifical University); Hakvoort, R.A. (TU Delft Energy & Industry); Herder, P.M. (TU Delft Engineering, Systems and Services)","","2016","Integrated community energy systems (ICESs) are emerging as a modern development to re-organize local energy systems, allowing simultaneous integration of distributed energy resources (DERs) and engagement of local communities. Although local energy initiatives such as ICESs are rapidly emerging due to community objectives, such as cost and emission reductions as well as resiliency, an assessment and evaluation of the value that these systems can provide, both to the local communities and to the whole energy system, is still lacking. In this paper, we present a model-based framework to assess the value of ICESs for the local communities. The distributed energy resources-consumer adoption model (DER-CAM) based ICES model is used to assess the value of an ICES in the Netherlands. 
For the considered community size and local conditions, grid-connected ICESs are already beneficial compared to the alternative of being supplied solely from the grid, both in terms of total energy costs and CO2 emissions, whereas grid-defected systems, although performing very well in terms of CO2 emission reduction, are still rather expensive.","distributed energy resources (DERs); energy communities; smart grids; multi-carrier energy systems; optimization; OA-Fund TU Delft","en","journal article","","","","","","","","","","Engineering, Systems and Services","Energy & Industry","","","" "uuid:9b46e18b-1fa3-4517-a666-660e4a50f18e","http://resolver.tudelft.nl/uuid:9b46e18b-1fa3-4517-a666-660e4a50f18e","Computationally efficient analysis & design of optimally compact gear pairs and assessment of gear compliance","Amani, A. (TU Delft Emerging Materials)","Spitas, C. (promotor); Spitas, Vasilios (promotor)","2016","","gear design; spur gear; design parameters; pitch compatibility; interference; corner contact; pointed tip; undercutting; non-standard; non-dimensional; design guidelines; highest point of single tooth contact (HPSTC); finite element analysis; stress analysis; bending strength; compact gears; optimization; centre distance; deviation; tolerance zone; computational modelling; compact gear drive; compliance; bending compliance; foundational compliance; Hertzian compliance; non-dimensional modelling; Saint-Venant's Principle; cubic Hermitian interpolation","en","doctoral thesis","","978-94-6186-739-1","","","","","","2018-11-15","","","","","","" "uuid:5849d327-fa7a-4591-a468-0368b2713374","http://resolver.tudelft.nl/uuid:5849d327-fa7a-4591-a468-0368b2713374","Shading design workflow for architectural designers","López Ponce de Leon, L.E.","Turrin, M. (mentor); Van den Ham, E.R. 
(mentor)","2016","","building technology; computational design; climate design; optimization; virtual reality; workflow","en","master thesis","","","","","","","","2016-11-04","Architecture and The Built Environment","Building Technology","","","","" "uuid:d059fea6-2861-49b4-ae36-5d31db109231","http://resolver.tudelft.nl/uuid:d059fea6-2861-49b4-ae36-5d31db109231","Density Tapering for Sparse Planar Spiral Antenna Arrays","Keijsers, J.G.M.","Yarovyi, O. (mentor)","2016","Increasing demands for mobile internet access have led to exponential developments in mobile communications technologies. The next generation mobile technology is expected to exploit electronic beam steering and to have a higher operating frequency to facilitate a higher bandwidth. This places a heavy burden on the base station antenna arrays, which should be sparse to accommodate passively cooling the system. Conventional sparse array topologies suffer from undesirable radiation pattern characteristics such as grating lobes. Therefore, this work focused on exploring methods to synthesize the antenna elements' geometrical parameters to enhance the radiation pattern and to explore the limitations that arise due to the array's sparseness. To this end, both a deterministic and a stochastic method were proposed. Starting with an analytical window function as a continuous current distribution and approximating this by adjusting the antenna elements' radial coordinates results in the fact that the desired window's radiation pattern is only approximated in a limited field of view, depending on the sparseness. Full electromagnetic wave simulations are performed to show that downscaling the topology to make it more dense gives rise to increased coupling effects that deteriorate the array's performance. In addition to the deterministic method, a genetic algorithm optimization method is employed to stochastically obtain the optimal current distribution window. 
Approximating the optimal continuous current distribution again leads to the array factor following the optimal window's radiation pattern in a limited field of view. Furthermore, it is shown that for the conditions used in this work, the optimum continuous current distribution is also the optimum current distribution for arrays with a finite number of elements, implying that only one optimization needs to be executed when designing such an array. In conclusion, the applicability of density tapering to sparse arrays is limited. The inherent undersampling causes a limited realization of the window function's characteristics. Density tapering does improve the absolute performance of a sparse array in terms of peak sidelobe level, and may be particularly useful if the region of interest is concentrated near the main beam. The requirements and in particular the region of interest of the application determine whether density tapering can be effectively employed.","antenna array; sparse array; density tapering; space tapering; optimization; genetic algorithm; feko; planar; spiral; sunflower; mutual coupling","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics","","Microwave Sensing, Signals & Systems / track: Telecommunications & Sensing Systems","","" "uuid:e9b513c8-751b-45a7-9d99-a51177c918a2","http://resolver.tudelft.nl/uuid:e9b513c8-751b-45a7-9d99-a51177c918a2","Optimizing truck driver schedules with dependent working shifts, drivers' legislation, and multiple time windows","Van Alphen, M.N.","van Essen, J.T. (mentor); Aardal, K.I. (mentor); Haneyah, S. (mentor)","2016","In logistics, minimizing resource expenses can be done by minimizing the total number of hours that each truck driver has to work. This sum of working hours is referred to as the total schedule duration for all truck drivers. Minimizing this total schedule duration is the main goal in the optimization problem that we consider.
The considered minimization problem is called the Total Schedule Duration with Dependent resource, Multiple Time Windows and European drivers’ legislation problem, i.e., the TSDDMTW-EU problem. A literature study is given on this TSDDMTW-EU problem. Different solution approaches and Mixed Integer Linear Programs (MILP) are discussed within the scope of our project. We compose a model for the TSDDMTW-EU problem by giving an MILP, based on a model by Kopfer and Meyer (2008). Two different modeling approaches are suggested which are assessed on their performance. Furthermore, we prove that the TSDDMTW-EU problem is NP-hard, and to conclude, a heuristic is evaluated with respect to its performance in objective values. Our main research contributions are threefold. First, an MILP is given for the complete European drivers’ legislation. All extensions in the legislation regarding a single truck driver are included. Second, knowledge is gained on the influence of dependent truck drivers on a Total Schedule Duration problem. And finally, we prove that adding the complete European drivers’ legislation to a problem results in an NP-hard problem.","NP-hard; MILP; optimization; European drivers' legislation; dependent resources; schedule duration; multiple time windows; heuristic; complexity","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Applied Mathematics","","","","" "uuid:c7baa01f-eb37-4bf1-aceb-3f58c575bdd1","http://resolver.tudelft.nl/uuid:c7baa01f-eb37-4bf1-aceb-3f58c575bdd1","Parallel Approach to Derivative-Free Optimization: Implementing the DONE Algorithm on a GPU","Munnix, J.H.T.","Verhaegen, M. (mentor)","2016","Researchers at Delft University of Technology have recently developed an algorithm for optimizing noisy, expensive and possibly nonconvex objective functions for which no derivatives are available.
The data-based online nonlinear extremum-seeker (DONE) was originally developed for sensorless wavefront aberration correction in optical coherence tomography (OCT) and optical beam forming network (OBFN) tuning. In order to make the DONE algorithm suitable for large-scale problems, a parallel implementation using a graphics processing unit (GPU) is considered. This master thesis aims to develop such a parallel implementation which performs faster than the existing sequential implementation without much change in obtained accuracy. Since OBFN tuning is a problem that may involve a large number of parameters, an OBFN simulation is to be used to compare the parallel implementation to the sequential implementation. The key step of the DONE algorithm is solving a regularized linear least-squares problem in order to construct a smooth and low-cost surrogate function which does provide derivatives and can be optimized fairly easily. This master thesis first discusses the basics of parallel computing, after which several linear least-squares methods and several numerical optimization methods are investigated. These methods are compared and the most suitable methods for parallel computing are implemented and tested for increasing dimensions.
The final parallel DONE implementation combines the recursive least-squares (RLS) method with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and optimizes the large-scale OBFN simulation almost twice as fast as the sequential DONE implementation, without much change in obtained accuracy.","derivative-free; optimization; numerical; algorithm; linear; least-squares; random; fourier; expansion; rfe; data-based; online; nonlinear; extremum-seeker; done; parallel; parallelization; graphics; processing; unit; gpu; compute; unified; device; architecture; cuda","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control (DCSC)","","","","" "uuid:e8dbb294-dd57-4c10-b733-b4aded62607c","http://resolver.tudelft.nl/uuid:e8dbb294-dd57-4c10-b733-b4aded62607c","Strategies, Methods and Tools for Solving Long-term Transmission Expansion Planning in Large-scale Power Systems","Fitiwi, D.Z. (TU Delft Energy & Industry)","Herder, P.M. (promotor); Rivier Abbad, M. (promotor)","2016","","transmission expansion planning; uncertainty and variability; optimization; stochastic programming; moments technique; clustering","en","doctoral thesis","","978-84-608-9955-6","","","","","","","","","","","","" "uuid:0010fdac-32ec-459b-bb9b-3e6327a85496","http://resolver.tudelft.nl/uuid:0010fdac-32ec-459b-bb9b-3e6327a85496","Gradient-based optimization of flow through porous media: Version 3","Jansen, J.D. (TU Delft Geoscience and Engineering)","","2016","These notes form part of the course material for the MSc course AES1490 ""Advanced Reservoir Simulation"" which has been taught at TU Delft over the past decade as part of the track ""Petroleum Engineering and Geosciences"" in the two-year MSc program ""Applied Earth Sciences"".

The notes cover the gradient-based optimization of subsurface flow. In particular they treat optimization methods in which the gradient information is obtained with the aid of the adjoint method, which is, in essence, an efficient numerical implementation of implicit differentiation in a multivariate setting.

Chapter 1 reviews the basic concepts of multivariate optimization and demonstrates the equivalence of the Lagrange multiplier method for constrained optimization and the use of implicit differentiation to obtain gradients in the presence of constraints.

Chapter 2 introduces the use of Lagrange multipliers and implicit differentiation for the optimization of large-scale numerical systems with the adjoint method. In particular it addresses the optimization of oil recovery from subsurface reservoirs represented as reservoir simulation models, i.e. space- and time-discretized numerical representations of the nonlinear partial differential equations that govern multi-phase flow through porous media. It also covers the use of robust adjoint-based optimization to cope with the inherent uncertainty in subsurface flow models and addresses some numerical implementation aspects.

Chapter 3 gives a brief overview of various further topics related to gradient-based optimization of subsurface flow, such as closed-loop reservoir management and hierarchical optimization of short-term and long-term reservoir performance.

97%) with any given configuration (capacity, data width and frequency). Besides these better-than-worst-case current measures, we also propose a generic post-manufacturing power and performance characterization methodology for DRAMs that can help identify realistic current estimates and an optimized set of timing measures for a given DRAM device, thereby further improving the accuracy of the power and energy estimates for that particular DRAM device. To optimize DRAM power consumption, we propose a set of performance-neutral DRAM power-down strategies coupled with a power management policy that for any given use-case (access granularity, page policy and memory type) achieves significant power savings without impacting its worst-case performance (bandwidth and latency) guarantees. We verify the pessimism in DRAM currents and four critical DRAM timing parameters as provided in the datasheets, by experimentally evaluating 48 DDR3 devices of the same configuration. We further derive an optimal set of timings using the performance characterization algorithm, at which the DRAM can operate successfully under worst-case run-time conditions, without increasing its energy consumption. We observed up to 33.3% and 25.9% reduction in DRAM read and write latencies and 17.7% and 15.4% improvement in energy efficiency. We validate the DRAMPower model against a circuit-level DRAM power model and verify it against real power measurements from hardware for different DRAM operations. We observed a 1-8% difference in power estimates, with an average of 97% accuracy.
We also evaluated the power-management policy and power-down strategies and observed significant energy savings (close to the theoretical optimum) at a very marginal average-case performance penalty, without impacting any of the original latency and bandwidth guarantees.","DRAM; power; energy; estimation; optimization; modeling; variation","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","","","" "uuid:b97923b6-2d5b-4763-a89d-9017747b1c8d","http://resolver.tudelft.nl/uuid:b97923b6-2d5b-4763-a89d-9017747b1c8d","Entering an integrated cluster","Bijloo, M.","Bots, P.W.G. (mentor)","2014","A model-based approach to support a utility provider in its investment decision-making to enter an integrated cluster.","non-technical factors; optimization; experimental design; utility system; industrial cluster","en","master thesis","","","","","","","","","Technology, Policy and Management","Policy Analysis","","","","" "uuid:1cffab12-1f16-4d9d-a8d2-770d15030f11","http://resolver.tudelft.nl/uuid:1cffab12-1f16-4d9d-a8d2-770d15030f11","Aerodynamic and Aeroelastic Design of Low Wind Speed Wind Turbine Blades","Ramirez Gutierrez, C.A.","Timmer, W.A. (mentor); Shen, W.Z. (mentor)","2014","A large number of wind energy installations exist on rich wind resource sites. Nevertheless, estimates show that about 50% of the world’s wind energy resource has a wind speed of 7 m/s or less. For these low wind speed resource areas, low wind speed turbine technology is required. For this reason, this DTU Wind Energy master project, in cooperation with Ming Yang Wind Power European R&D Centre ApS, looks into the design of a low wind speed wind turbine blade. The project’s goal is to design a wind turbine blade for a 2 MW wind turbine, with a rotor diameter of 115 meters. A site in China is also proposed for the wind turbine design.
The project focuses on the design of a blade for low wind speed wind turbine applications, on sites with a mean wind speed of about 7 m/s. The project includes several stages. First, blade design and blade optimisation methods are introduced. Afterwards, the provided site in China is assessed and key parameters are selected for the next project stages. The next step involves the wind turbine design, provided by Ming Yang Wind Power. This design is reviewed through an aerodynamic and aero-elastic performance analysis. Using a cost-of-energy approach, a new blade for a wind turbine with a rated power of 2 MW is designed. Finally, an aerodynamic and aero-elastic performance analysis of the new blade, under different wind conditions, is performed to assess its feasibility. The analyses are carried out with HAWC2, developed by DTU, and compared to GH Bladed at some of the design stages.","aeroelastic; aerodynamic; china; optimization; design; low wind; blade; windenergy","en","master thesis","","","","","","","","","Aerospace Engineering","DUWIND","","European Wind Energy","","" "uuid:94455039-e532-4219-b223-b759cc317046","http://resolver.tudelft.nl/uuid:94455039-e532-4219-b223-b759cc317046","A New Strategy for Combined Topology and Fiber Angle Optimization","Yap, T.T.","Langelaar, M. (mentor); Van Keulen, A. (mentor)","2014","The use of composite materials has become increasingly important over the past years. Especially unidirectional fibrous laminates are nowadays widely applied in industry. They provide mechanical advantages in terms of stiffness-to-weight ratios, strength and resistance against fatigue. These properties make them suitable for high-end applications such as the aerospace industry. Topology optimization is a mathematical technique which has recently gained importance as well.
As an optimization technique with a large design freedom, it is able to design complex structures with high performance beyond human abilities. Together with the latest improvements in manufacturing techniques, the application of topology optimized structures intensifies in various fields. This research focuses on topology optimization of unidirectional fibrous laminate structures. The problem of combined topology and fiber direction optimization has been researched over the past years by a number of groups. The problem formulation where the fiber angles are directly used as design variables is highly non-convex and is likely destined to end up in a local optimum far from the global optimum. Two other alternatives are described in the literature: a discrete and a continuous problem formulation. In the discrete approach, called Discrete Material Optimization (DMO), a finite number of candidate materials per element represents the different fiber orientations and penalization is applied to end up with a clear distinction between the candidate materials. The discrete formulation has the drawback that the solution is limited to the predefined candidate materials and that the number of design variables easily becomes large. Furthermore, the global optimum can never be guaranteed due to the required penalization. The continuous approach uses lamination parameters as design variables and the optimization problem becomes convex. A shortest-distance approach is used to determine the closest realistic laminate configuration for the globally optimal set of lamination parameters. Using this technique, continuous variable stiffness panels can be designed with a reasonable number of design variables. However, the realistic laminate configuration corresponding to a set of lamination parameters is not known analytically for more complex problems. Therefore, the determination of a physically meaningful configuration may be a difficult task, and may come with a loss of performance.
Given both the pros and cons of the methods from the literature, there seems to be a demand for a method that can provide detailed results (continuous variable stiffness) with a reasonable number of design variables, and which also directly provides a physically realistic laminate configuration. In this research a new method called the Adaptive Angle Set Method (AASM) is proposed. AASM solves a sequence of DMO-like subproblems for fiber angle optimization, but the associated design variables are not penalized. A separate set of density variables performs the topology optimization and the combined problem is solved simultaneously. Every subproblem in AASM is analogous to a non-penalized DMO problem with three candidate materials for every element, representing a set of three different fiber angles. In the initial subproblem, the angle set is equal for all elements and given by −60°, 0° and 60°, spanning the entire domain of 180° of possible fiber angles. This subproblem is solved to optimality and the subsolution is used to formulate the succeeding subproblem. Based on the subsolution of design variables, a combination of update functions estimates a new fiber angle for every element, which is defined as the middle angle of the element's new angle set. The two other angles are set to this middle angle plus and minus a certain offset (range) and the new subproblem is again solved to optimality. However, the range between the three candidate materials is tightened with the formulation of every new subproblem, such that the sequence of problems converges to angle sets where the three candidate materials are close to each other. This can be as close as 1° difference in the final subproblem. At the final stage, penalization is applied to create a clear distinction between the candidate materials, but this only causes a minimal loss of performance due to the small range in the angle set.
Using this approach, the number of design variables is constant for every subproblem, namely three fiber angle design variables and one density variable per element. In the final stage, a high angle resolution is obtained with a directly known laminate configuration. The way in which a new subproblem is formulated highly depends on the estimation of the new angle for every element. The determination of the optimal new angle using an optimization routine would be equivalent to solving the overall fiber angle problem, which cannot be solved efficiently with a gradient-based optimizer. Therefore, two heuristic update functions are introduced to estimate the new angle. The first update function makes a linear combination of the previous angle set with the corresponding optimal design vector. The second update function sets the new angle equal to the largest principal stress direction for that element. A number of test cases showed that a mixed application of both update functions yielded the best results. The final configuration was tested on a number of compliance minimization problems, which were kept planar and single-loaded during this research. For small problems, the AASM results could be compared to brute-force global optima of the underlying fiber angle integer problem. Results equal or close to the global optimum were obtained. For larger problems and multiple-layer laminates, AASM provided promising results as well, which were obtained faster than with a comparable DMO formulation.
The promising results obtained by AASM make the method worthwhile for further investigation on larger and more complex problems, including other objective functions, bending elements and manufacturing-constrained problems.","topology; optimization","en","master thesis","","","","","","","","2014-11-12","Mechanical, Maritime and Materials Engineering","Precision and Microsystems Engineering","","","","" "uuid:3beba71b-7e19-4277-bdd7-752c43f867af","http://resolver.tudelft.nl/uuid:3beba71b-7e19-4277-bdd7-752c43f867af","Cost optimal river dike design using probabilistic methods","Bischiniotis, K.; Kanning, W.; Jonkman, S.N.","","2014","This research focuses on the optimization of river dikes using probabilistic methods. Its aim is to develop a generic method that automatically estimates the failure probabilities of many river dike cross-sections and gives the one with the least cost, taking into account the boundary conditions and the requirements that are set by the user. Even though there are many mechanisms that may provoke dike failure, the literature study showed that the failure mechanisms that contribute most to the failure of the typical Dutch river dikes are overflowing, piping and inner slope stability. Based on these, the most important design variables of the dike cross-section are set and, following probabilistic design methods, the probability of failure of many different dike cross-sections is estimated, taking into account the abovementioned failure mechanisms. Different cross-section configurations may all comply with a set target probability of failure. Of these, the cross-section that results in the lowest cost is considered optimal.
The method shows that the use of probabilistic optimization gives more cost-efficient designs than traditional partial safety factor designs.","river dike; optimization; probabilistic design; cross-section; failure probability","en","conference paper","Brazilian Water Resources Association and Acquacon Consultoria.","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","","" "uuid:86815e55-bbba-45b4-915b-6f321b485940","http://resolver.tudelft.nl/uuid:86815e55-bbba-45b4-915b-6f321b485940","Imitation learning for a robotic precision placement task","Van der Spek, I.T.","Babuska, R. (mentor); Kuijpers, J. (mentor)","2014","In industrial environments robots are used for various tasks. At this moment it is not feasible for companies to deploy robots for production runs with a limited batch size or for products with large variations. The use of robots for such environments can become feasible through a new generation of robots and software which can adapt quickly to new situations and learn from their mistakes while being programmable without needing an expert. A concept that can enable the transition to flexible robotics is the combination of imitation learning and reinforcement learning. The purpose of imitation learning is to learn a task by generalizing from observations. The power of imitation learning is that the robot is programmed in an intuitive way while the insight of the teacher is incorporated in the execution of the task. This research studies the combination of imitation and reinforcement learning; the research is applied to an industrial use case. The research question of this study is: ""Can imitation learning be combined with reinforcement learning to achieve a successful application in an industrial robotic precision placement task?"" To imitate the demonstrated trajectories, Dynamic Movement Primitives (DMPs) are used. The DMPs are used to encode the observed trajectories.
DMPs can be seen as a spring-damper-like system with a non-linear forcing term. The forcing term is a sum of Gaussian basis functions, each with its corresponding weight. Reinforcement learning can be applied to these weights to alter the shape of the trajectory created by a DMP. Policy Gradients with Parameter-based Exploration (PGPE) is used as the reinforcement learning algorithm to optimize the recorded trajectories. Experiments done on a UR5 show that without the learning step, the DMPs are able to provide a trajectory that results in a successful execution of a robotic precision placement task. The experiments also show that the learning algorithm is not able to remove noise from a demonstrated trajectory or complete a partially demonstrated trajectory. Therefore it can be concluded that the PGPE algorithm is not suited for reinforcement learning in robotics in its current form. It is thus recommended to apply a data-efficient version of the PGPE algorithm in order to achieve better learning results.","reinforcement learning; imitation learning; policy gradient; pgpe; dynamic movement primitives; precision placement; dmp; optimization","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","Embedded Systems","","" "uuid:69942211-7216-4c09-b9e5-e5e452240a5b","http://resolver.tudelft.nl/uuid:69942211-7216-4c09-b9e5-e5e452240a5b","The role of electrical energy storage in a future sustainable electricity grid","Van Staveren, R.J.M.","Herder, P.M. (mentor); De Vries, L.J. (mentor); Cunningham, S.W. (mentor); Verzijlbergh, R.A. (mentor); Aalbers, R. (mentor)","2014","The call for lower CO2 emissions has increased the integration of renewable energy sources in the electricity system. However, these intermittent sources do not follow the cycles of demand and are unpredictable by nature.
As the electrical system needs to constantly balance supply and demand, these renewable sources cause problems in the operation of the grid. Electrical energy storage is proposed as a solution for these issues. The research uses an optimization model to test the effects of energy storage on the operation of the electrical system. It shows that the development of storage can be beneficial in systems with a large share of renewables. The value of storage is mostly dependent on the share of renewables in the electricity system. Low shares of renewables give too few opportunities to load the storage, while very high shares give it only a few periods to unload. Secondly, the value of storage is dependent on the amount of available transmission capacity. In some situations, investments in transmission can be replaced by investments in storage. As the transmission system operator (TSO) is responsible for system balance, it should have the possibility to choose between different investments and pick the optimal one.","electrical energy storage; renewable energy; optimization","en","master thesis","","","","","","","","","Technology, Policy and Management","Engineering Systems and Services","","Energy and Industry","","" "uuid:95cfae42-d59d-4183-a3db-43f66fc45ee1","http://resolver.tudelft.nl/uuid:95cfae42-d59d-4183-a3db-43f66fc45ee1","Air freight transportation configurations: Exploration of optimization possibilities in the freight chain within Europe of KLM Cargo","Hemmes, A.F.","Tavasszy, L.A. (mentor); Rezaei, J. (mentor); Warnier, M.E. (mentor)","2014","KLM Cargo transports freight by truck from European outstations to the Schiphol Hub. This freight is transported palletized. It is unexplored what impact changing the pallet composition or the transportation method (palletized vs. loose) of this export freight flow has on the KPIs of KLM Cargo.
The objective of this research is to provide insights into these effects.","air cargo; logistic chain; optimization; palletized freight","en","master thesis","","","","","","","Campus only","2015-08-28","Technology, Policy and Management","Transport & Logistics","","Systems Engineering, Policy Analysis and Management","","" "uuid:9dff055c-eb6d-4005-a052-fce8aaeea792","http://resolver.tudelft.nl/uuid:9dff055c-eb6d-4005-a052-fce8aaeea792","Numerical Methods for the Optimization of Nonlinear Residual-Based Subgrid-Scale Models Using the Variational Germano Identity","Maher, G.D.; Hulshoff, S.J.","","2014","The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations to obtain values for the parameters of subgrid-scale models that are nonlinear in their coefficients. Specifically, the Newton-Raphson method is employed. A least-squares minimization formulation of the Germano Identity is developed to resolve issues that occur when the residual is positive and negative over different regions of the domain. In this case a Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is used to solve the minimization problem. The developed method is applied to the one-dimensional unsteady forced Burgers’ equation and the two-dimensional steady Stokes’ equations. It is shown that the Newton-Raphson method and BFGS algorithm generally solve, or minimize the residual of, the Germano relations in a relatively small number of iterations. The optimized subgrid-scale models are shown to outperform standard SGS models with respect to the L2 error.
Additionally, the nonlinear SGS models tend to achieve lower L2 errors than the linear models.","subgrid-scale model; variational multiscale method; variational Germano identity; optimization; turbulence","en","conference paper","CIMNE","","","","","","","","Aerospace Engineering","Aerodynamics, Wind Energy & Propulsion","","","","" "uuid:dc2a5b72-afe0-4fd9-9a12-a804a855408a","http://resolver.tudelft.nl/uuid:dc2a5b72-afe0-4fd9-9a12-a804a855408a","Implications of dredge mine design on mine optimizations and discussing possible approaches","Bijmolt, M.J.","Benndorf, J. (mentor); Wambeke, T. (mentor)","2014","The development of dredging as a major player for surface mine applications has led Royal IHC, a large equipment supplier and consultant for dredging and mining operations, and the TU Delft to work on more advanced optimization techniques for the design of dredge mines. Three implications of the design of a dredge mine were found to be crucial for optimization, namely: 1) depth control, 2) mining direction and 3) creation of multiple ponds. The conventional approach for open pit mines, in which a series of nested pits are created to determine an optimal mining sequence, was tested using the core module of Whittle and was shown not to be readily applicable to dredge mines, because 1) multiple ponds may be created, 2) the depth to be mined for a certain area changes over time and 3) the nested pits expand randomly towards high-grade zones. A new method for optimizing dredge mines was introduced as a second approach, which determines an ultimate depth per stacked block model based on the cumulative values, and finds an optimal route for a pond through these stacked blocks by using an adapted version of the Nearest Neighbour algorithm. 
Four limitations of this approach are recognized: 1) the depth difference between stacked blocks could become impractical, 2) full utilization of the field is not possible because it may reach a premature dead-end or it may enclose a group of non-mined blocks, 3) the blocks have to meet the same length and width requirements as a pond, therefore not incorporating the accuracy of the data, and 4) it lacks the ability to mine the area in layers. Project Alpha indicated that the new approach finds an optimal mine design; however, the long lifetime of the mine (>60 years) results in a low recovery of 65%. Decreasing the lifetime of the mine would result in a higher recovery. The conventional approach proved to be impractical for the design of a dredge mine, as it created multiple thin deposits. The NPV of the worst case scenario of Whittle turns out slightly lower than the NPV of the optimal route determined by the second approach.","dredging; mining; optimization","en","bachelor thesis","","","","","","","","2014-07-03","Civil Engineering and Geosciences","Geoscience & Engineering","","Resource Engineering/Mine optimization","","" "uuid:4f9ed7f0-05e1-4cbc-8992-d91dc6c914d7","http://resolver.tudelft.nl/uuid:4f9ed7f0-05e1-4cbc-8992-d91dc6c914d7","Validation and Optimization of a Design Formula for Stable Geometrically Open Filter Structures","Van de Sande, S.A.H.; Uijttewaal, W.S.J.; Verheij, H.J.","","2014","Granular filters are used for protection against scour and erosion of base material. For proper functioning it is necessary that no material is transported at the interfaces between the filter structure, the subsoil and the water flowing above the filter structure. Different types of granular filters can be distinguished; this paper focuses on stable geometrically open filter structures under current attack. Hoffmans (2012) developed a design formula for stable geometrically open filters. 
This paper presents the validation and an optimization of the design formula based on the model tests performed. It is shown that the current design formula is too conservative. The proposed improvements allow for a wider range of applicability.","filter; granular filter; geometrically open filter; open filter; interface stability; bed protection; design formula; stability; optimization; ICCE 2014","en","conference paper","Coastal Engineering Research Council","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","","" "uuid:3efae2c3-b092-4c68-9dcf-4ab0d4cff398","http://resolver.tudelft.nl/uuid:3efae2c3-b092-4c68-9dcf-4ab0d4cff398","Optical System Optimization Using Genetic Algorithms","HesamMahmoudiNezhad, N.H.M.N.","Thijssen, J.T. (mentor); Bociort, F.B. (mentor)","2014","The goal of this project is to investigate the performance of Genetic Algorithms (GA) and the influence of their parameters on optical system optimization. We have developed our own code for optical system optimization using the MatLab GA module to accomplish this task. To evaluate the optical part of our code we checked the outcomes of each step with the commercial lens-design software package Zemax. We tested different tuning parameters of the GA. We found that the mutation and the crossover parameters are the most critical parameters. Choosing inappropriate values of these parameters causes the optimization routine to never reach a good result, even when increasing the population size and the number of generations to high numbers. As an alternative to GA, we studied the Artificial Bee Colony (ABC) method. This is one of the newest methods for global optimization, which is claimed by some authors to perform better than GA. We combined an existing ABC code with our optical code. 
According to the results, for the optical system we consider, we found the GA to be superior to the ABC method.","optical system; optimization; genetic algorithms","en","master thesis","","","","","","","","2014-06-10","Applied Sciences","Imaging Science & Technology","","Applied Physics","","" "uuid:cb6544e8-02f9-403c-8540-698b7af9a185","http://resolver.tudelft.nl/uuid:cb6544e8-02f9-403c-8540-698b7af9a185","Rolling horizon predictions of bus trajectories","Oshyani, M.F.; Cats, O.","","2014","Bus travel times are subject to inherent and recurrent uncertainties. A real-time prediction scheme regarding how the transit system evolves will potentially facilitate more adaptive operations as well as more adaptive passenger decisions. This scheme should be tractable, sufficiently fast and reliable to be used in real-time applications. For this purpose, a heuristic hybrid scheme for departure time estimation is proposed in this study. The prediction generated by the proposed hybrid scheme consists of three travel time components: schedule, instantaneous and historical data sources. A genetic algorithm is applied in order to specify the contribution of each data source component to the prediction scheme. The proposed scheme was applied to a trunk bus line in Stockholm, Sweden. In addition, the currently deployed scheme was replicated in order to compare the performance of both schemes. The results suggest that the proposed scheme reduces the overall mean absolute error by almost 20%. 
Moreover, the proposed scheme provides better predictions, except for very long-term predictions, where both schemes yield the same performance.","prediction; bus departure time; optimization; travel time and genetic algorithm","en","conference paper","National Technical University of Athens (NTUA)","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","","","" "uuid:13dab427-a77c-476b-a352-f1cb7cf6a0e1","http://resolver.tudelft.nl/uuid:13dab427-a77c-476b-a352-f1cb7cf6a0e1","Optimization strategy for conceptual airplane design","Vasseur, P.T.","Vos, R. (mentor)","2014","Due to the ever-growing demand for more efficient aircraft, novel aircraft concepts have to be explored. By improving design tools the potential of unconventional configurations can be further studied. This requires improvement of conceptual design tools such that more knowledge can be gathered on alternative solutions as early in the design process as possible. Multidisciplinary design optimization (MDO) can support this process by providing an environment in which the various disciplines can be designed and optimized concurrently, while a certain level of consistency is maintained. An optimization design tool has been created to assess the potential performance gains of novel aircraft configurations. It connects with the Initiator design tool, which is a conceptual design framework. As such, it can also be used as a means to expose any design issues that may exist in the Initiator. With the optimizer tool the following four case studies were performed: a conventional Airbus A320, a forward-swept canard aircraft, a three-surface aircraft and an oval-fuselage aircraft. For this purpose a genetic algorithm, a gradient algorithm and a hybrid genetic algorithm were used. From the case studies it followed that large improvements can be obtained with unconventional aircraft configurations when compared to the initial aircraft design proposed by the Initiator design tool. 
Up to 20% improvement was found with the three-surface and canard aircraft. The oval-fuselage aircraft could be improved by a solid 10%, while a 5% improvement was obtained with the conventional A320. Among all cases the most influential factors were the wing position, sweep angle and aspect ratio. There is a tendency towards lower sweep angles due to the positive effect on the weight of the wing and an underestimation of the drag rise. With the forward-swept canard relatively high sweep angles were found, from which it followed that the weight penalty of forward-swept wings is underestimated. The sizing routine of the control surfaces is found to be inadequate, since the Initiator derives most parameters directly from the wing and does not properly take into account control and stability requirements. Results have shown that this mainly regards the sweep and dihedral angle. These sizing issues also affect the static margin. It was found that class II design information was not fed back to the control surface sizing. From the optimization algorithms used, it can be concluded that the gradient algorithm was the least effective as it had difficulties with the noise. It sometimes stopped prematurely or started oscillating. The genetic algorithm was found to be the best option due to its robustness. It proved to be far less sensitive to noise. Its computational cost could be significantly reduced by applying parallel optimization and using a caching mechanism. The hybrid algorithm was found to be too computationally expensive. 
The obtained increase in objective value did not outweigh the added cost.","optimization; aircraft design; mdo","en","master thesis","","","","","","","","2014-06-12","Aerospace Engineering","Aerospace Design, Integration & Operations","","Aerospace Structures and Design Methodologies","","52.009507, 4.360515" "uuid:650ec0d0-4613-4dae-96b1-1f685dff0e60","http://resolver.tudelft.nl/uuid:650ec0d0-4613-4dae-96b1-1f685dff0e60","Automatic Hardware Generation for Reconfigurable Architectures","Nane, R.","Bertels, K.L.M. (promotor)","2014","Reconfigurable Architectures (RA) have been gaining popularity rapidly in the last decade for two reasons. First, processor clock frequencies reached threshold values past which power dissipation becomes a very difficult problem to solve. As a consequence, alternatives were sought to keep improving the system performance. Second, because Field-Programmable Gate Array (FPGA) technology substantially improved (e.g., increase in transistors per mm2), system designers were able to use them for an increasing number of (complex) applications. However, the adoption of reconfigurable devices brought with it a number of related problems, of which the complexity of programming can be considered an important one. One approach to program an FPGA is to implement automatically generated Hardware Description Language (HDL) code from a High-Level Language (HLL) specification. This is called High-Level Synthesis (HLS). The availability of powerful HLS tools is critical to managing the ever-increasing complexity of emerging RA systems to leverage their tremendous performance potential. However, current hardware compilers are not able to generate designs that are comparable in terms of performance with manually written designs. Therefore, to reduce this performance gap, research on how to generate hardware modules efficiently is imperative. 
In this dissertation, we address the tool design, integration, and optimization of the DWARV 3.0 HLS compiler. Unlike previous HLS compilers, DWARV 3.0 is based on the CoSy compiler framework. This allowed us to build a highly modular and extendible compiler in which standard or custom optimizations can be easily integrated. The compiler is designed to accept a large subset of C-code as input and to generate synthesizable VHDL code for unrestricted application domains. To enable DWARV 3.0 third-party tool-chain integration, we propose several IP-XACT (i.e., an XML-based standard used for tool-interoperability) extensions such that hardware-dependent software can be generated and integrated automatically. Furthermore, we propose two new algorithms: one to optimize the performance for different input area constraints, and one to leverage the benefits of both jump and predication schemes from conventional processors adapted for hardware execution. Finally, we performed an evaluation against state-of-the-art HLS tools. Results show that, in terms of application execution time, DWARV 3.0 performs, on average, the best among the academic compilers.","high-level synthesis; hardware; reconfigurable; architecture; compiler; survey; dwarv; HLS; optimization","en","doctoral thesis","CPI Koninklijke Wohrmann","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Computer Engineering","","","","" "uuid:034c1314-3d9a-425a-be9c-6cf73756f9f1","http://resolver.tudelft.nl/uuid:034c1314-3d9a-425a-be9c-6cf73756f9f1","Optimal use of the subsurface for ATES systems in busy areas","Qian, L.","Olsthoorn, T.N. (mentor); Bloemendal, J.M. (mentor); Timmermans, J.S. (mentor); Van Beek, H.J.M. (mentor)","2014","With the incentive to reach the energy saving and CO2 emission reduction targets of the Netherlands, the application of Aquifer Thermal Energy Storage (ATES) is expected to increase sharply during this decade [2]. 
With limited aboveground and underground space, well arrangement is becoming difficult in busy areas with a rapidly growing number of ATES systems. Master plans were proposed to achieve optimal use of the subsurface, especially in such busy areas. This study aims at improving the robustness of such master plans. A two-stage method is proposed to obtain such robust master plans. It was applied to one of the seven available and investigated master plans, i.e. the Parooldriehoek in Amsterdam. The studied master plan was optimized in the first stage by replacing its design parameters with their best alternatives. In the second stage this so-optimized plan was tested to assess its flexibility to handle climate change and additional future users. As a result, the studied master plan could successfully be lifted to a higher level of robustness compared to the original.","ATES; master plan; arrangement; subsurface use; optimization","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Water Management","","Water Resources","","" "uuid:cc25fc31-20ac-46e0-9173-b8e53ebef400","http://resolver.tudelft.nl/uuid:cc25fc31-20ac-46e0-9173-b8e53ebef400","Optimization of the operational use of entrance channels based on channel depth requirements","Dobrochinski, J.P.H.","Vellinga, T. (mentor); De Jong, M. (mentor); Groeneweg, J. (mentor)","2014","Large capital and maintenance dredging operations are required to ensure the accessibility of many ports. The expenses associated with the dredging operations can have a significant impact on the finances of these ports. Therefore, considerable attention to the design of the width and depth aspects of access channels is justifiable. This study considered this topic within the framework of an Additional Master Thesis (3-month internship). 
The objectives of the study are: i) to verify the influence of different processes and sources of uncertainty in the evaluation of minimum depth requirements; and ii) to investigate the advantages and drawbacks of different methods of depth requirement evaluation. The Port of Tubarão (Southeast Brazil) is used as a case study to verify processes and methods. Four different approaches were considered to evaluate depth requirements for the access channel of the Port. These methods are based on deterministic and/or probabilistic approaches, with or without wave influences. The results for the case study indicate that ship motions due to waves have a minor influence on the required channel depth at that location most of the time. However, in certain wave conditions (not only in terms of wave height, but also wave period and wave direction relative to the manoeuvring ship) vertical ship motions become the dominant issue regarding depth requirements; consequently waves should be included in a practical evaluation over time. In probabilistic approaches more knowledge can be incorporated in the analysis; however, this requires detailed information. The deterministic approach, on the other hand, is simpler to use and gives good insight into the main driving variables. However, the main drawback of deterministic methods is that the reliability of the evaluation cannot be assessed, or that conservative assumptions need to be made. This may be uneconomical. The use of a probabilistic method for the case study led to a more optimized use of the channel in terms of accessibility in comparison to the results obtained with the deterministic method. Nevertheless, those results depend largely on the safety factors assumed in the deterministic computations relative to the probability distributions considered in the probabilistic approach. 
Alternatively, the safety margins can be computed or calibrated for specific cases based on probabilistic calculations. In that case the results of deterministic and probabilistic methods can be similar, ensuring the required reliability of the practical deterministic approach without being excessively restrictive.","depth requirements; access channel; probabilistic; optimization","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","Ports and Waterways","","" "uuid:a6f1539b-d6b0-4995-9bca-777a277e1295","http://resolver.tudelft.nl/uuid:a6f1539b-d6b0-4995-9bca-777a277e1295","Development of a Low-Thrust Earth-Centered Transfer Optimizer for the Preliminary Mission Design Phase","Boudestijn, E.","Noomen, R. (mentor)","2014","Develop the basis for a TU Delft Astrodynamics Toolbox (Tudat)-based software tool that comprises the fundamental functionalities required in order to optimize low-thrust Earth-centered orbit transfer trajectories for the preliminary mission design phase. Motivation: contribution to Tudat and facilitating case studies for the MicroThrust consortium.","space; optimization; Tudat; low-thrust","en","master thesis","","","","","","","","","Aerospace Engineering","Astrodynamics & Space Missions","","","","" "uuid:d1d56fec-c63a-4920-b961-c5ef0244588c","http://resolver.tudelft.nl/uuid:d1d56fec-c63a-4920-b961-c5ef0244588c","Freeform Follows Functions","Smidt, D.M.","Borgart, A. (mentor); De Ruiter, P. (mentor); Sonneveld, P. (mentor); Bittermann, M.S. (mentor)","2014","This research is twofold. Firstly, it treats the complexity of design using a computationally intelligent method to achieve, with regard to a limited set of goals, high-performing designs. 
Secondly, it addresses an architectural and structural challenge: the conceptual design of a freeform roof framework that integrates structural rigidity and non-standard tessellation.","freeform architecture; tessellation; rigidity; structural analysis; complexity; computation; tiling; generative design; parametric design; multi-objective optimization; optimization; performance based design; Grasshopper","en","master thesis","","","","","","","","","Architecture and The Built Environment","Architectural Engineering + Technology","","Design & Technology - Computation & Performance","","" "uuid:519b5492-9356-4914-8391-c39614a2567d","http://resolver.tudelft.nl/uuid:519b5492-9356-4914-8391-c39614a2567d","Cost optimal river dike design using probabilistic methods","Bischiniotis, K.","Kok, M. (mentor); Jonkman, S.N. (mentor); Jommi, C. (mentor); Kanning, W. (mentor)","2014","This research follows a fully probabilistic approach in order to estimate the optimal design for a river dike cross-section, taking into account the investment costs. From the theory studied, the failure mechanisms that contribute most to the failure of river dikes are identified. These are overflowing, wave overtopping, piping and inner slope stability. The most important design variables of the dike cross-section dimensions are set, and, following probabilistic design methods, the probability of failure of many different dike cross-sections is estimated based on the abovementioned failure mechanisms. 
The aim of the study is to develop a generic method that automatically estimates the failure probabilities of many river dike cross-sections and gives the one with the least cost, taking into account the boundary conditions and the requirements that are set by the user.","river dike; cost optimal; optimization; overflowing; piping; macro-instability; DGeoStability; matlab; cross-section; probabilistic","en","master thesis","","","","","","","","2014-01-22","Civil Engineering and Geosciences","Hydraulic Engineering","","Water Management and engineering","","" "uuid:287de608-564e-4751-b806-b59be0505a53","http://resolver.tudelft.nl/uuid:287de608-564e-4751-b806-b59be0505a53","Constraint Handling in Life-cycle Optimization Using Ensemble Gradients","Alim, M.","Jansen, J.D. (mentor); Leeuwenburg, O. (mentor); Egberts, P. (mentor)","2013","Constrained optimization is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables themselves or on some function of those variables. This thesis focused on using the Ensemble Optimization method to improve the NPV (Net Present Value) as the objective function of waterflooding a reservoir with an L-shaped sealing fault under constraints. The optimization controls are injection rates for the input-constrained optimization and valves opening for the output-constrained optimization. The constraints are field injection rate for the input-constrained optimization and field production rate for the output-constrained optimization. Three Matlab optimization methods were tested, of which the SQP (Sequential Quadratic Programming) method performed the best. For dealing with the constraints, it is better to let the optimizer handle them instead of the simulator. Two ways to help the optimizer to have a better constraint adherence are by using the constraint scaling and improving the quality of the gradients. 
Having too many variables may lead to a lower objective function due to the approximate gradients’ inaccuracies. Regularization (smoothing) can help to improve the objective function in this problem.","constraint; optimization; ensemble; gradients; life-cycle","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","Petroleum Engineering","","" "uuid:d063dfb9-6ec6-4c43-b315-fb98a576498a","http://resolver.tudelft.nl/uuid:d063dfb9-6ec6-4c43-b315-fb98a576498a","Model-based Feedforward Control for Inkjet Printheads","Khalate, A.A.","Babuska, R. (promotor); Bombois, X. (promotor)","2013","In recent years, inkjet technology has emerged as a promising manufacturing tool. This technology has gained its popularity mainly due to the facts that it can handle diverse materials and it is a non-contact and additive process. Moreover, the inkjet technology offers low operational costs, easy scalability, digital control and low material waste. Thus, apart from conventional document printing, the inkjet technology has been successfully applied as a micro-manufacturing tool in the areas of electronics, mechanical engineering, and life sciences. In this thesis, we investigate a piezo-based drop-on-demand (DoD) printhead which is commonly used for industrial and commercial applications due to its ability to handle diverse materials. A typical drop-on-demand (DoD) inkjet printhead consists of several ink channels in parallel. Each ink channel is provided with a piezo-actuator which on the application of an actuation voltage pulse, generates pressure oscillations inside the ink channel. These pressure oscillations push the ink drop out of the nozzle. The print quality delivered by an inkjet printhead depends on the properties of the jetted drop, i.e., the drop velocity, the drop volume and the jetting direction. 
To meet the challenging performance requirements posed by new applications, these drop properties have to be tightly controlled. The performance of the inkjet printhead is limited by two factors. The first one is the residual pressure oscillations. The actuation pulses are designed to provide an ink drop of a specified volume and velocity under the assumption that the ink channel is in a steady state. Once the ink drop is jetted the pressure oscillations inside the ink channel take several micro-seconds to decay. If the next ink drop is jetted before these residual pressure oscillations have decayed, the resulting drop properties will be different from the ones of the previous drop. The second limiting factor is the cross-talk. The drop properties through an ink channel are affected when the neighboring channels are actuated simultaneously. Generally, the drop consistency is improved by manual tuning of the piezo actuation pulse based on some physical insight or based on exhaustive experimental studies on the printhead. However, these ad-hoc procedures have proved to be insufficient in dealing with the above limitations. In this thesis, a model-based control approach is proposed to improve the performance of a DoD inkjet printhead. It offers a systematic and efficient means to improve the attainable performance of a DoD inkjet printhead by reducing the effect of the residual oscillations and the cross-talk. Furthermore, the models that have been developed for this purpose can also give new insights into the operation of the printhead. In order to achieve this goal, it is required to have a fairly accurate and simple model of an inkjet printhead. It is not easy to obtain a good physical model for an inkjet printhead due to insufficient knowledge of the complex interactions in the printhead. Therefore, in this thesis, we have used system identification, i.e. we use experimental measurements in order to develop a model. 
For this purpose, it is required that the piezo-actuator is also used as a sensor. Note that the crucial aspect in the model development is to obtain a model of the inkjet system close to its operating conditions. Therefore, we have collected measurements of the piezo sensor signal during the jetting of a series of drops at a given DoD frequency. For the printhead under investigation, we found that the dynamics of the ink channel are dependent on the DoD frequency. This phenomenon is caused by non-linearities in the droplet formation. Consequently, we have modeled the ink channel dynamics for every DoD frequency. In this thesis, it is shown that the set of local inkjet models obtained at different DoD frequencies can be encompassed by a polytopic uncertainty on the parameters of a nominal model. Using the same identification procedure, the cross-talk can also be modeled. In order to improve the printhead performance, the actuation pulse was redesigned. The new drive pulse is designed to provide good performance for all models in the area of uncertainty by means of robust feedforward control. The pulse also respects the pulse shape constraints posed by driving electronics (ASICS). Besides the robust actuation pulse, our approach also introduces an optimal delay between actuation of neighboring channels to reduce the cross-talk. The current driving electronics limits the possibilities of reshaping the actuation pulse. Since it is expected that this limitation will be relaxed in the future, we have also developed a procedure to design a robust pulse without pulse shape constraints. The performance improvement achieved with this unconstrained pulse has proved to be quite limited. The proposed method is also useful for inkjet practitioners who do not have any insight into the inkjet dynamics. The efficacy of our approach is demonstrated by our experimental results. 
The proposed method was verified in practice by jetting a series of ink drops at various DoD frequencies and also by jetting a bitmap image. For the printhead under consideration, the drop-consistency is improved by almost four times with the proposed approach when compared to the conventional methods.","inkjet printhead; identification; feedforward control; robust control; optimization","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","","" "uuid:8d1abf33-74d0-4042-bae9-6e4468b7bb81","http://resolver.tudelft.nl/uuid:8d1abf33-74d0-4042-bae9-6e4468b7bb81","Averaging Level Control to Reduce Off-Spec Material in a Continuous Pharmaceutical Pilot Plant","Lakerveld, R.; Benyahia, B.; Heider, P.L.; Zhang, H.; Braatz, R.D.; Barton, P.I.","","2013","The judicious use of buffering capacity is important in the development of future continuous pharmaceutical manufacturing processes. The potential benefits are investigated of using optimal-averaging level control for tanks that have buffering capacity for a section of a continuous pharmaceutical pilot plant involving two crystallizers, a combined filtration and washing stage and a buffer tank. A closed-loop dynamic model is utilized to represent the experimental operation, with the relevant model parameters and initial conditions estimated from experimental data that contained a significant disturbance and a change in setpoint of a concentration control loop. The performance of conventional proportional-integral (PI) level controllers is compared with optimal-averaging level controllers. The aim is to reduce the production of off-spec material in a tubular reactor by minimizing the variations in the outlet flow rate of its upstream buffer tank. The results show a distinct difference in behavior, with the optimal-averaging level controllers strongly outperforming the PI controllers. 
In general, the results stress the importance of dynamic process modeling for the design of future continuous pharmaceutical processes.","control; process modeling; process simulation; parameter estimation; dynamic modeling; optimization; crystallization; continuous pharmaceutical manufacturing","en","journal article","MDPI","","","","","","","","Mechanical, Maritime and Materials Engineering","Process and Energy","","","","" "uuid:d9af64cb-5d2c-41ad-af88-3acf937b49d7","http://resolver.tudelft.nl/uuid:d9af64cb-5d2c-41ad-af88-3acf937b49d7","The Effects of Multi-Criteria Routing on Dynamic Traffic Management","Zhang, F.","Hoogendoorn, S.P. (mentor); Knoop, V.L. (mentor); Chen, Y.S. (mentor); Goni Ros, B. (mentor); Hajiahmadi, M. (mentor); Wiggenraad, P.B.L. (mentor)","2013","For the degree of Master of Science in Transport & Planning at Delft University of Technology.","DTM; green traffic; emission; dynasmart-P; multi-criteria routing; optimization","en","master thesis","","","","","","","","2013-11-27","Civil Engineering and Geosciences","Transport & Planning","","Transport and Planning","","" "uuid:ae68639f-c231-4750-9098-76cb029f7227","http://resolver.tudelft.nl/uuid:ae68639f-c231-4750-9098-76cb029f7227","Parametric Massing Optimization Tools","Christodoulou, A.","Van den Dobbelsteen, A. (mentor); Coenders, J. (mentor); Rolvink, A. (mentor); Van den Ham, E. (mentor); Den Hollander, J.P. (mentor)","2013","The separation between architect and engineer is a relatively recent event, in comparison to the long history of human constructions. In modern times the separation of the two professions and the involvement of engineers in later design stages has proved problematic, because of the large effort needed to make changes in later design stages. On the other hand, the rising importance of engineering and financial objectives that building projects have to meet, calls for a more integrated design approach since the very early design stages. 
Contemporary parametric tools make it possible to enhance multidisciplinary communication by providing the ability to quickly extract needed values from preliminary design geometries (or “massings”) and assess them through properly defined evaluation scripts. This thesis investigated this prospect, focusing on the aspect of energy demand, which emerges as a central design consideration in contemporary architecture. The thesis report identified the main objectives that would serve as fitness values for its assessment and optimization systems, together with the main parameters of influence for each of these objectives. These objectives were the minimization of solar gains, annual heating and cooling demand, annual total energy demand per GFA, annual total energy demand per NFA, and embodied + operational (for 1, 10, 50 years) CO2 emissions. The choice of the optimization objective, and thus the optimization system that has to be set up to assess it, proved to have great influence on the optimization process and results. Because of that, this thesis concluded that this is a point that has to be considered carefully, according to the design’s priorities, when deciding which specific objective is set as the fitness value for each design project. That is because extending the optimization into unneeded areas might diminish the accuracy of the results and increase the computational demand. To support these assessment and optimization systems, a parametric toolbox has been developed, named MEOtoolbox (MEO derived from the initials of the words Massing Energy Optimization). The components developed mainly aimed to facilitate the calculation of the annual demand for heating and cooling, using the quasi-steady state method for energy demand calculations described in the ISO13790 international standard. The MEOtoolbox will be made available for download after the end of this thesis project, through MEOtoolbox.blogspot.com. 
Possible design scenarios where the MEOtoolbox could be particularly useful have been outlined through design dilemmas that also formed the case studies of this thesis. To validate them, the results of these case studies were first compared to the results of similar studies and software. The design case studies have investigated the effect of tilting the facades of a recreation centre in Paris (France), and the effect of orientation and of self-shading in a design of a high-rise building for the European Union in Brussels (Belgium). The study showed that: Tilting a south-facing facade downwards, in Paris, can reduce the solar load by half during summer, while not greatly reducing solar load in winter. For the climate of Brussels, the maximum effect that orientation could have, for the particular design geometry, was an increase of 5% in the annual cooling demand and 1% in the annual heating demand. Shifting building geometries to create self-shading proved to be an effective way of reducing cooling demand without greatly increasing the heating demand. The two case studies also exemplified some of the additional benefits and shortcomings of the parametric tools. Among the advantages, it has been shown how design choices can be visually supported, forming arguments for a specific design decision. Among the shortcomings, the unavailability of tools to assess the multiplicity of parameters that a designer might consider, and the sensitivity of the results to certain parameters (which could, if not set properly, lead to invalid feedback), are issues that have to be addressed by parametric design software developers, for example through detailed manuals. 
For the comparative research, six basic building typologies (“Warehouse”, “Cube”, “Tower”, “Caterpillar”, “Fence”, “Slab”) were compared with regard to their operational energy per area in different climates and with different glass percentages in the facades. The thesis concluded that: For all the typologies studied, in the absence of external shading and for the glass percentages studied, cooling demand seems to be more critical for the determination of optimal energy massing, due to its greater fluctuation depending on the typology. As far as the absolute energy demand values are concerned, location seems to be the most influential parameter, followed by glass percentage. Orientation and programmatic function seem to have much less influence on the absolute value of the energy demand of the typologies. As far as the ratio between the typologies is concerned, the switch of the assessment value from Energy per GFA to Energy per NFA strongly influences the energy demand per area ratio between the typologies, as spaces with less rentable space often seem to be good energy solutions. Minimizing the expected Energy/NFA gives different results than Energy/GFA, as it also takes space efficiency into account. Since NFA is usually a primary goal for construction and real-estate companies, it is a realistic aim to try to minimize energy costs to cover a specific programmatic NFA demand. Location also largely influences the ratio between typologies, which seems to be similar in locations with a similar ratio between heating and cooling needs. The typology study showed that the energy per NFA can be reduced by on the order of 40% by selecting an optimal typology for the climate of the Netherlands. For the climate of Amsterdam, and for the facade and structure characteristics employed in the analysis, embodied energy seems to correspond, roughly, to 10 years of operational energy for all of the typologies. 
The fact that this was the result for all the typologies studied suggests that it could potentially be used as a rule of thumb, when assessing the importance of embodied energy for a specific project, depending on its expected functional lifetime.","parametric; optimization; energy performance; built environment; building engineering","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Structural Engineering","","Building Technology & Physics","","" "uuid:f30bd41b-4b44-4459-ab68-d913fffdb8e9","http://resolver.tudelft.nl/uuid:f30bd41b-4b44-4459-ab68-d913fffdb8e9","Estimation of primaries by sparse inversion including the ghost","Verschuur, D.J.","","2013","Today, the problem of surface-related multiples, especially in shallow water, is not fully solved. Although the surface-related multiple elimination (SRME) method has proved to be successful on a large number of data cases, the involved adaptive subtraction acts as a weak link in this methodology, where primaries can be distorted due to their interference with multiples. Therefore, recently, SRME has been redefined as a large-scale inversion process, called estimation of primaries by sparse inversion (EPSI). In this process the multi-dimensional primary impulse responses are considered as the unknowns in a large-scale inversion process. By parameterizing these impulse responses as spikes in the space-time domain, and using a sparsity constraint in the update step, the algorithm looks for those primaries that, together with their associated multiples, explain the total input data. As the objective function in this minimization process truly goes to zero, the tendency to distort primaries is greatly reduced. An additional advantage is that imperfections in the data can be included in the forward model and resolved simultaneously, such as the missing near offsets. 
In this paper it is demonstrated that the ghost effect can also be included in the EPSI formulation, after which a ghost-free primary estimate can be obtained, even when the ghost notch lies within the desired spectrum.","acquisition; inversion; multiples; optimization; wave equation","en","journal article","Society of Exploration Geophysicists","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","","" "uuid:5ede00e1-9101-49ea-9a2f-81b99291b110","http://resolver.tudelft.nl/uuid:5ede00e1-9101-49ea-9a2f-81b99291b110","Risk approach to land reclamation: Feasibility of a polder terminal","Lendering, K.T.; Jonkman, S.N.; Peters, D.J.","","2013","New ports are mostly constructed in low-lying coastal areas or shallow coastal waters. The quay wall and terminal yard are raised to a level well above mean sea level to assure flood safety. The resulting ‘conventional terminal’ requires large volumes of fill material, often dredged from the sea, which is costly. The terminal yard of a ‘polder terminal’ lies below the outside water level and is surrounded by a quay wall flood defense structure. This saves large amounts of reclamation cost but introduces higher damage potential during flooding and thus an increased flood risk. A risk-based framework is made to determine the optimal quay wall and polder level, which is an optimization (cost-benefit analysis) under two variables. Overtopping failure proves to be the dominant failure mechanism for flooding. 
The reclamation savings prove to be larger than the increased flood risk, demonstrating that the polder terminal could be an attractive alternative to the conventional terminal.","container terminals; flood risks; optimization; polder terminals; probabilistic design","en","conference paper","CRC Press/Balkema - Taylor & Francis Group","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","","" "uuid:6bf9ad22-c4a5-4f5f-8006-fce525935f04","http://resolver.tudelft.nl/uuid:6bf9ad22-c4a5-4f5f-8006-fce525935f04","Cloud-Based Design Analysis and Optimization Framework","Mueller, V.; Strobbe, T.","","2013","Integration of analysis into early design phases in support of improved building performance has become increasingly important. It is considered a required response to demands on contemporary building design to meet environmental concerns. The goal is to assist designers in their decision making throughout the design of a building, but with a growing focus on the earlier design phases, during which design changes consume less effort than similar changes would in later design phases or during construction and occupation. Multi-disciplinary optimization has the potential of providing design teams with information about the potential trade-offs between various goals, some of which may be in conflict with each other. A commonly used class of optimization algorithms is the class of genetic algorithms, which mimic the evolutionary process. 
For effective parallelization of the cascading processes occurring in the application of genetic algorithms in multi-disciplinary optimization, we propose a cloud implementation and describe its architecture, designed to handle the cascading tasks as efficiently as possible.","cloud computing; design analysis; optimization; generative design; building performance","en","conference paper","","","","","","","","","","","","","","" "uuid:7d81abad-fcbe-4094-871a-54755ee0f03e","http://resolver.tudelft.nl/uuid:7d81abad-fcbe-4094-871a-54755ee0f03e","Packing Optimization for Digital Fabrication","Dritsas, S.; Kalvo, R.; Sevtsuk, A.","","2013","We present a design-computation method of design-to-production automation and optimization in digital fabrication; an algorithmic process minimizing material use, reducing fabrication time and improving production costs of complex architectural form. Our system compacts structural elements of variable dimensions within fixed-size sheets of stock material, revisiting a classical challenge known as the two-dimensional bin-packing problem. We demonstrate improvements in performance using our heuristic metric, an approach with potential for a wider range of architectural and engineering design-built digital fabrication applications, and discuss the challenges of constructing free-form design efficiently using operational research methodologies.","design computation; digital fabrication; automation; optimization","en","conference paper","","","","","","","","","","","","","","" "uuid:76b9b6db-926c-479e-9031-ed4abf2324df","http://resolver.tudelft.nl/uuid:76b9b6db-926c-479e-9031-ed4abf2324df","A Computational Method for Integrating Parametric Origami Design and Acoustic Engineering","Takenaka, T.; Okabe, A.","","2013","This paper proposes a computational form-finding method for integrating parametric origami design and acoustic engineering to find the best geometric form of a concert hall. 
The paper describes an application of this method to a concert hall design project in Japan. The method consists of three interactive subprograms: a parametric origami program, an acoustic simulation program, and an optimization program. The advantages of the proposed method are as follows. First, it is easy to visualize engineering results obtained from the acoustic simulation program. Second, it can deal with acoustic parameters as one of the primary design materials as well as origami parameters and design intentions. Third, it provides a final optimized geometric form satisfying both architectural design and acoustic conditions. The method is valuable for generating new possibilities of architectural form by shifting from a traditional form-making process to a form-finding process.","interactive design method; parametric origami; acoustic simulation; optimization; quadrat count method","en","conference paper","","","","","","","","","","","","","","" "uuid:241873a0-ad14-43f8-a135-e2c133622c2f","http://resolver.tudelft.nl/uuid:241873a0-ad14-43f8-a135-e2c133622c2f","Biological Computation for Digital Design and Fabrication: A biologically-informed finite element approach to structural performance and material optimization of robotically deposited fibre structures","Oxman, N.; Laucks, J.; Kayser, M.; Uribe, C.D.G.; Duro-Royo, J.","","2013","The formation of non-woven fibre structures generated by the Bombyx mori silkworm is explored as a computational approach for shape and material optimization. Biological case studies are presented and a design approach for the use of silkworms as entities that can compute fibrous material organization is given in the context of an architectural design installation. We demonstrate that in the absence of vertical axes the silkworm can spin flat silk patches of variable shape and density. We present experiments suggesting sufficient correlation between topographical surface features, spinning geometry and fibre density. 
The research represents a scalable approach for optimization-driven fibre-based structural design and suggests a biology-driven strategy for material computation.","biologically computed digital fabrication; robotic fabrication; finite element analysis; optimization; CNC weaving","en","conference paper","","","","","","","","","","","","","","" "uuid:38379080-da96-4acd-a86d-f3b8f492dd1b","http://resolver.tudelft.nl/uuid:38379080-da96-4acd-a86d-f3b8f492dd1b","Algorithmic Engineering in Public Space","Hulin, J.; Pavlicek, J.","","2013","The paper reflects on the relationship between an algorithmic and a standard (intuitive) approach to the design of public space. A realized plaza renovation project in the Czech town of Vsetin is described as a case study. The paper offers an overview of the benefits and drawbacks of the algorithmic approach in the described case study and outlines more general conclusions.","algorithm; public space; circle packing; optimization; pavement","en","conference paper","","","","","","","","","","","","","","" "uuid:25459ba0-fe3a-444c-847a-34ad5c41ab9f","http://resolver.tudelft.nl/uuid:25459ba0-fe3a-444c-847a-34ad5c41ab9f","Integrating Computational and Building Performance Simulation Techniques for Optimized Facade Designs","Gadelhak, M.","","2013","This paper investigates the integration of Building Performance Simulation (BPS) and optimization tools to provide high performance solutions. An office room in Cairo, Egypt, was chosen as a base testing case, where a Genetic Algorithm (GA) was used for optimizing the annual daylighting performance of two parametrically modeled daylighting systems. In the first case, a combination of a redirecting system (light shelf) and shading system (solar screen) was studied, while in the second, a free-form gills surface was optimized to provide acceptable daylighting performance. 
Results highlight the promising future of using computational techniques along with simulation tools, and provide a methodology for integrating optimization and performance simulation techniques at early design stages.","High performance facade; daylighting simulation; optimization; form finding; genetic algorithm","en","conference paper","","","","","","","","","","","","","","" "uuid:3bfab3e0-d826-44c5-81da-f06c33ee0299","http://resolver.tudelft.nl/uuid:3bfab3e0-d826-44c5-81da-f06c33ee0299","A Case Study in Teaching Construction of Building Design Spaces","Nicknam, M.; Bernal, M.; Haymaker, J.","","2013","Until recently, design teams were constrained by tools and schedule to only be able to generate a few alternatives, and analyze these from just a few perspectives. The rapid emergence of performance-based design, analysis, and optimization tools gives design teams the ability to construct and analyze far larger design spaces more quickly. This creates new opportunities and challenges in the ways we teach and design. Students and professionals now need to learn to formulate and execute design spaces in efficient and effective ways. This paper describes the curriculum of course 8803, Multidisciplinary Analysis and Optimization, taught by the authors at the Schools of Architecture and Building Construction at Georgia Tech in spring 2013. We approach design as a multidisciplinary design space formulation and search process that seeks maximum value. To explore design spaces, student designers need to execute several iterative processes: formulating the problem, generating alternatives, analyzing them, visualizing the trade space, and addressing decision-making. 
The paper first describes students’ design space exploration experiences, and concludes with our observations of the current challenges and opportunities.","design space exploration; teaching; multidisciplinary; optimization; analysis","en","conference paper","","","","","","","","","","","","","","" "uuid:1e5ca95d-df73-44aa-b856-855c142d84ef","http://resolver.tudelft.nl/uuid:1e5ca95d-df73-44aa-b856-855c142d84ef","Improving the economic performance of AkzoNobel's EVB plant","Borren, A.J.","Herder, P.M. (mentor); Lukszo, Z. (mentor); Stougie, L. (mentor); De Bruijne, M.L.C. (mentor)","2013","AkzoNobel Energie Voorzienings Bedrijf (EVB – Energy Supply Company) is located at the Botlek business park in Rotterdam. EVB produces and supplies energy and utilities to other production plants at the Botlek site. Due to the intertwined value chain and reuse of each other’s residual and waste products, the situation at the Botlek is complicated. This report describes the development of a decision support model that contributes to an improved economic efficiency of AkzoNobel’s EVB plant. The decision support model calculates the optimal production settings that minimize the variable costs and assure that the critical operational conditions are met at all times. By comparing the results of the optimization model with the base case model, it can be concluded that there is a savings potential of more than 6% of the variable costs of the EVB plant. From the optimized production settings, a pattern is distinguished. This pattern is translated into a set of operational rules that can be applied to the EVB plant to realize the savings potential. The next step is to carefully analyze the consequences of the operational rules for the operations and the implications for customers. 
Only then can the new operational rules be incorporated and the savings potential realized.","optimization; decision support model; chemical industry; economic efficiency","en","master thesis","","","","","","","","2014-03-01","Technology, Policy and Management","Energy & Industry","","SEPAM","","" "uuid:597b318c-a1af-4fde-865f-4422f548336b","http://resolver.tudelft.nl/uuid:597b318c-a1af-4fde-865f-4422f548336b","ORM Optimization through Automatic Prefetching in WebDSL","Gersen, C.M.","Groenewegen, D.M. (mentor); Visser, E. (mentor)","2013","Object-Relational Mapping (ORM) frameworks can be used to fetch entities from a relational database. The entities that are referenced through properties are normally not fetched initially; instead, they are fetched automatically by the ORM framework when they are used by the application. This is called lazy-fetching and can result in many queries, causing overhead. The number of queries can be reduced by prefetching multiple entities at once. There are two types of prefetching techniques: static and dynamic. Static techniques perform optimization during compilation, while dynamic techniques collect information at runtime in order to perform prefetching. Multiple static prefetching techniques that all use the same static code analysis are implemented in WebDSL; however, they generate different queries. The static analysis determines the entities that are going to be used and should be prefetched. These static techniques are compared to the dynamic techniques already present inside the Hibernate ORM framework. The evaluation is performed using the OO7 benchmark and complete WebDSL applications. The results of the OO7 benchmark show a response time improvement of up to 69% over lazy-fetching. 
On complete web applications, some of the static techniques implemented in WebDSL improve performance on average; however, performance may be improved further using a more fine-grained method of choosing an optimization technique.","optimization; prefetching; ORM; DSL; database","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","","","" "uuid:1d9c4022-dbd6-4452-9842-4649c1fdd432","http://resolver.tudelft.nl/uuid:1d9c4022-dbd6-4452-9842-4649c1fdd432","A Freight Transport Model for Integrated Network, Service, and Policy Design","Zhang, M.","Tavasszy, L.A. (promotor)","2013","“The goal of the European Transport Policy is to establish a sustainable transport system that meets society’s economic, social and environmental needs” (CEC, 2009). This statement indicates the challenges that the European transport policy makers are faced with when facilitating an increasing freight transport demand with limited transport infrastructures. The development of an interconnected intermodal transport system has been recognized by the European Commission as an important, strategic task that will contribute to solving the dilemma between the accommodation of an increased freight flow and the need for a sustainable living environment. This thesis focuses on model-based, quantitative analysis for infrastructure network design decisions for large-scale intermodal transport systems. The involvement of public concerns, as represented by the governmental objectives on sustainability, brings additional complexity into infrastructure network design. Governments are often concerned with network design on a regional scale or a national scale. The enlargement of the network scale to an international level further increases the level of heterogeneity of the network, among other factors in terms of the number of actors involved, the diversity of transport demand and the variety of transport service supply. 
These new objectives and dimensions pose new challenges to freight transport infrastructure network design. This thesis proposes a new model to support policy making for an intermodal freight transport network. The model is able to simultaneously incorporate large scale, multimodal, multi-commodity and multi-actor perspectives. It can be used for integrated policy, infrastructure and service design. Results can be visualized per transport mode and per commodity value group on a geographic information system at segmental level, terminal level, corridor level, regional level, national level, and network level. Implementation of the model for a realistic scale network design is another contribution of this thesis. To this end, we calibrated the model by using two approaches: a Genetic Algorithm based method and a feedback-based method. The model was validated by comparing the modelled link flows with observations, testing the cross elasticities of the costs to demand and comparing the catchment area of the terminals with areas observed in practice. The calibration results indicate that the model adequately captures the network usage decisions on an aggregated level. The model was applied to Dutch container transport network design problems. Databases of Dutch container transport demand, features of the European multimodal freight transport infrastructure network, information about selected inland waterway transport services, and information about transport and transhipment costs, emissions and external costs were embedded in the model. After completing the theoretical and empirical specification the model was applied to policy decisions on the Dutch container transport. The thesis extensively discusses the integrated infrastructure, service, and policy design that may contribute to managing the costs of the freight flows, meanwhile ensuring a sustainable living environment. The main findings from the application are as follows. 
- A higher CO2 price can result in lower total transport costs, despite extra handling costs in intermodal transhipments. The costs saved by bundling freight and using intermodal transport can compensate for the additional handling costs. As these cannot compensate for the internalized CO2 emission costs, the total operational costs borne by transport operators will increase.
- Network efficiency can be increased by closing terminals that are not able to attract sufficient volumes of demand. However, this is unlikely to happen in practice, because the private terminal operators and the local governments have local interests in those small terminals that may conflict with the objective of minimizing total network costs.
- The hub-network-services assumed and tested in this study cannot compete with road transport or shuttle barge transport services in the base scenario, due to the extra transhipment costs, low load factor, and low demand for IWW container transport. In a future scenario, these services are only feasible under very high traffic growth.
- There is not one single optimal future infrastructure network. Instead, a good infrastructure network design mainly depends on the future demand, transport price, and development of new transport technology.
Based on the conclusions drawn in this thesis, implementing the combination of CO2 pricing and terminal network configuration is more effective than solely implementing CO2 pricing, with regard to total network CO2 emissions. A range of efficient networks, forming a frontier of minimal total network costs and total network CO2 emissions, is presented in the thesis, instead of one single optimal solution. The frontier provides more options in terminal network optimization in terms of the target network performance. The question of which network is optimal will depend on the relative value placed on CO2 emissions. The thesis ends with a vision on future freight transport network design models. 
A potential research direction is to incorporate the dimension of time into the model. This extension will enable the model to capture dynamic demand; to be applicable for scheduling synchronized intermodal transport services; to provide more realistic estimations of transport emissions; and to analyse network reliability, including network robustness and service robustness. Reference: CEC (2009) 'COMMUNICATION FROM THE COMMISSION: A sustainable future for transport: Towards an integrated, technology-led and user friendly system', Commission of the European Communities, Brussels.","freight; transport; network design; optimization; GIS; service network; transport policy","en","doctoral thesis","TRAIL Research School","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","","","" "uuid:7867b8d1-28df-49fe-b002-7e6b8367bda4","http://resolver.tudelft.nl/uuid:7867b8d1-28df-49fe-b002-7e6b8367bda4","A predictive sourcing model for multi Export Credit Agency financed large industrial projects","Jansen, P.R.","Cunningham, S.W. (mentor); Storm, S.T.H. (mentor); Thissen, W.A.H. (mentor)","2013","CB&I is experiencing an issue in a new project to be executed in Russia, named NKNK. Despite the rich experience CB&I has with projects, there is a continuous struggle with the sourcing process in projects that involve financing by multiple export credit agencies. The issue at stake is that CB&I does not know beforehand in which countries it is most likely to source its equipment to achieve the lowest possible sourcing costs. However, budgets available in countries will be set in an inception phase of a project. A preliminary estimation method is needed to determine the amount of budget needed in multiple countries, in order to increase the probability of minimizing total sourcing costs. In order to accomplish this, a new cost estimation methodology is needed. 
This combines strategic sourcing theory, descriptive statistics on suppliers, cost differentials among countries of manufacturing, macroeconomic theory, the role of export credit agencies in trade finance, conventional cost estimation methods, linear optimization, and Monte Carlo simulations. The importance of strategic sourcing is underpinned in this thesis. Theoretical optimal sourcing strategies are suggested on the basis of the level of perceived competition. The perceived level of competition within different industries is acquired through questionnaires with industry experts. The suggested sourcing strategies are tested on their practical applicability in large industrial projects. It turns out that there are serious limitations in applying multiple sourcing strategies, due to the nature of the highly customized equipment needed in these projects. Predominantly, single sourcing strategies are used, in which a number of suppliers is inquired for a bid. It is shown, through a linear regression analysis, that there is a significant positive correlation between the perceived level of competition and the number of suppliers inquired for a bid. Descriptive statistics on suppliers cover, per equipment type (more formally known as purchase order category), the number of suppliers selected and their most likely country of manufacturing. It is discussed that there are multiple restrictions in selecting potential suppliers for a project. Firstly, suppliers can only be selected and inquired for a bid if they are listed in an ‘Approved Vendor List’. Secondly, ECA-involved financing limits the budget available in each country to a certain extent. Therefore, selecting suppliers in a country where probably no budget is available is a waste of effort. Thirdly, the increasing administrative burden in selecting larger numbers of suppliers poses limitations. 
Through a comparison of descriptive statistics on suppliers in two very similar projects with different project contexts, the effects of these limitations are determined. It is hypothesized that there are sourcing cost differences among countries for particular purchase order categories. Through a literature review, macroeconomic factors that could explain these cost differentials are determined. These are categorized into economic, infrastructural, labor, supply-based, and political factors. For each macroeconomic category, indicators are selected to represent it. A total of twelve indicators per country are reduced to two factor scores per country, through a dimension reduction technique (principal component analysis). Based on quotations submitted by suppliers for a recently completed project, significant cost differentials among countries are determined using categorical variables in a linear regression. A statistical refinement was performed to place countries in cost categories. Factor scores per country and descriptive statistics on suppliers are used to substantiate these cost rankings. Combining cost differentials, macroeconomic indicators, and descriptive statistics proved to be a valuable tool to determine in which country one is most likely to receive the least expensive quotations. The role of export credit agencies (ECAs) in project finance is explored through a literature review. ECAs cover political and commercial risks for exporters and credit-providing entities. ECAs are heterogeneous and there is no definitive model for ECAs. For terms associated with project finance (medium- to long-term), the most widely used mechanism by ECAs is buyer credit. ECAs are involved by issuing insurance against defaults directly to the exporter’s bank. ECAs are also involved in buyer credit by offering a precompletion risk facility. 
A recourse agreement is included, meaning that defaults caused by the exporter can be reclaimed from the exporter and disbursed to the lending bank. To quantitatively compare differences in the terms and conditions of ECAs, a new methodology is developed in this thesis. This methodology involves a discounted ‘Interest Rate Coefficient’, which incorporates ECA premiums rolled over into the loan in the financing period, and the terms and conditions involved in the repayment period. Through a questionnaire, the terms and conditions applicable to the NKNK project are acquired; these are mainly budgetary constraints, insurance premiums, and interest rates. Combining the results of the questionnaire and the interest rate coefficient, the necessary inputs are obtained for linear optimization and Monte Carlo simulations. The basis of the newly developed preliminary sourcing cost estimation methodology is a ‘sourcing allocation table’, which can be used as a direct input to a linear optimization model developed as part of this thesis. The methodology starts with listing all purchase orders for a project in the sourcing allocation table. Next, it is evaluated which data are readily available with respect to suppliers, supplier countries, quotation values, and purchase order value estimates. Data on suppliers and supplier countries which are not readily available are estimated per purchase order category, based on the descriptive statistics on the number of potential suppliers and their distribution among countries. For purchase orders for which no quotations or estimates are available, conventional estimation techniques are used. The order-of-magnitude method is applied to a reference project, which is indexed to accommodate the inflationary impact of time. Dummy quotations are generated to fill in the missing data on suppliers, their countries, and quotation values. These dummy quotations take significant cost differentials among countries per purchase order category into account. 
In these quotations, values are randomly generated according to the average spread of quotation values, using a uniform distribution. Trade finance estimates are also included in the sourcing allocation table. The sourcing allocation table then contains, based on live data and dummy quotations, for each purchase order a number of suppliers, their country of manufacturing, and quotation values. As there are numerous randomly generated parameters, there is no definitive optimized value. Rather, there is a range of possible outcomes, determined by running a Monte Carlo simulation with the linear optimization model. The outputs of these simulations are a probability distribution of the total optimized value, a probability distribution of the expenditures within each country, and an average distribution of ECA budgetary flows towards sourcing countries. The new methodology for the preliminary estimation of sourcing costs is seen by CB&I as a valuable tool to determine, in an early phase of the project, where budgets are most likely needed. This allows ECA budgets to be set properly, increasing the probability of minimizing sourcing costs. The first results have already been presented to the client, who was impressed with them. The methodology gives a clear graphical representation of the estimated total costs, the budgets needed in which countries, and where the budgets are spent. Equally important, it shows the uncertainty in all these estimates through probability distributions. 
In addition, this tool allows easy identification of the cost impact of different scenarios, such as exploring the cost effect of excluding budget from a certain ECA country.","optimization; Export Credit Agency; sourcing strategy; sourcing cost","en","master thesis","","","","","","","","2013-08-13","Technology, Policy and Management","Policy Analysis","","Management of Technology","","" "uuid:da349f17-a65c-482e-8d94-9ff8a41c66d6","http://resolver.tudelft.nl/uuid:da349f17-a65c-482e-8d94-9ff8a41c66d6","Waveform Optimization for Compressive-Sensing Radar Systems","Zegov, L.T.","Leus, G. (mentor); Pribic, R. (mentor)","2013","Compressive sensing (CS) provides a new paradigm in data acquisition and signal processing in radar, based on the assumptions of sparsity of an unknown radar scene and incoherence of the transmitted signal. The implementation of CS is expected to improve the resolution of conventional pulse-compression radar. An unknown sparse radar scene can then be recovered through CS with high probability, even in the case of an underdetermined linear system. However, the theoretical framework of CS radar has to be verified in an actual radar system, accounting for practical system aspects such as the signal bandwidth, ease of generation and acquisition, and system complexity. In this thesis, we investigate linear frequency modulated (LFM), Alltop and Björck waveforms, which show theoretically favorable properties in a CS-radar system, in the basic radar problem of range-only estimation. The aforementioned waveforms were investigated through a model of a digital radar system - from signal generation in the transmitter to sparse signal recovery in the receiver. 
The capabilities of the CS radar versus the conventional pulse-compression radar were demonstrated, and the Alltop and Björck sequences were shown to outperform the commonly used LFM waveform in typical CS-radar scenarios.","compressive sensing; radar; waveform; optimization","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Circuits and Systems","","","","" "uuid:0feb1f50-32ae-4e54-87ea-3b551497389e","http://resolver.tudelft.nl/uuid:0feb1f50-32ae-4e54-87ea-3b551497389e","Risk based design of land reclamation and the feasibility of the polder terminal","Lendering, K.; Jonkman, S.N.; Peters, D.J.","","2013","New ports are mostly constructed in low-lying coastal areas or in shallow coastal waters. The quay wall and terminal yard are raised to a level well above mean sea level to ensure flood safety. The resulting ‘conventional terminal’ requires large volumes of good-quality fill material, often dredged from the sea, which is costly. The alternative concept of a ‘polder terminal’ has a terminal yard which lies below the outside water level and is surrounded by a quay wall flood defence structure. This saves a large amount of reclamation investment but introduces a higher damage potential in case of flooding and a corresponding flood risk. Important conditions for the feasibility of a polder terminal are a subsoil of low permeability and high reclamation costs. Further, a polder terminal requires a water storage and drainage system, at additional cost. A risk-based analysis of the optimal quay wall height and polder level is performed, which is an optimization (cost-benefit analysis) over two variables. The overtopping failure mechanism proves to be the dominant failure mechanism for flooding. During overtopping the water depth in the polder terminal is larger than on the conventional terminal, resulting in a higher damage potential and corresponding flood risk for the polder terminal. 
However, the reclamation savings prove to be larger than the increased flood risk: the ‘polder terminal’ could save 10 to 30% of the total cost (investment and risk), demonstrating it to be an economically attractive alternative to a conventional terminal.","container terminals; flood risks; optimization; polder terminals; probabilistic design","en","conference paper","Institute for Research and Community Service","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","","" "uuid:56a64800-0dde-42fd-a2f1-05ed7c357b0b","http://resolver.tudelft.nl/uuid:56a64800-0dde-42fd-a2f1-05ed7c357b0b","An Optimization Model for Simultaneous Periodic Timetable Generation and Stability Analysis","Sparing, D.; Goverde, R.M.P.; Hansen, I.A.","","2013","We present an optimization model which is able to generate feasible periodic timetables for networks given the line structure and the requested line frequencies, taking into account infrastructure constraints and train overtake locations. As the model uses the minimum cycle time as the objective function, the stability of the timetable is also simultaneously expressed. Dimension reduction techniques are presented that take advantage of the symmetries of periodic timetables. The model is applied to a case study of a dense corridor with heterogeneous traffic.","timetable design; timetable stability; optimization","en","conference paper","International Association of Railway Operations Research (IAROR)","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","","","" "uuid:5baa1059-6a25-4bfc-8328-ae6fda18c598","http://resolver.tudelft.nl/uuid:5baa1059-6a25-4bfc-8328-ae6fda18c598","Efficiency analysis and design methodology of hybrid propulsion systems","Kwasieckyj, B.","Stapersma, D. (mentor)","2013","A hybrid propulsion system features both a diesel engine and an electric motor for propulsion. 
The degrees of freedom in power generation raise the question of how the power division can be optimised such that the engines run at their optimal fuel efficiency. A generalised method is developed to determine the power generation for all operating modes of a vessel, with a focus on the lowest fuel consumption of the diesel engines.","hybrid propulsion; ship; taguchi; orthogonal array; optimization","en","master thesis","","","","","","","","2013-03-23","Mechanical, Maritime and Materials Engineering","Marine & Transport Technology","","Ship Design, Production and Operation (SDPO)","","" "uuid:3e2cb6d7-3ba2-4b45-af71-2fa106b5d189","http://resolver.tudelft.nl/uuid:3e2cb6d7-3ba2-4b45-af71-2fa106b5d189","Optimal Usage of Multiple Energy Carriers in Residential Systems: Unit Scheduling and Power Control","Ramirez-Elizondo, L.M.","Van der Sluis, L. (promotor)","2013","The world’s increasing energy demand and growing environmental concerns have motivated scientists to develop new technologies and methods to make better use of the remaining resources of our planet. The main objective of this dissertation is to develop a scheduling and control tool at the district level for small-scale systems with multiple energy carriers and to apply exergy-related concepts for the optimization of these systems. The tool is based on the energy hub approach and provides insights and techniques that can be used to evaluate new district energy scenarios. 
The topics that are presented include the multi-carrier unit commitment framework, the multi-carrier exergy hub approach, a hierarchical multi-carrier control architecture, a comparison of multi-carrier power applications and the implementation of a multi-carrier energy management system in a real infrastructure.","optimization; multiple energy-carriers; renewables; sustainable energy","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Electrical Sustainable Energy","","","","" "uuid:49a3bfaa-012f-4e3b-ac34-b8d9a760c4fc","http://resolver.tudelft.nl/uuid:49a3bfaa-012f-4e3b-ac34-b8d9a760c4fc","Porting GCC to a Clustered VLIW Processor","Shankar, A.","Turjan, A. (mentor); Molnos, A.M. (mentor)","2013","A clustered architecture is a viable design choice when aiming to increase the performance of a VLIW processor while avoiding the hardware complexity and increased access times associated with a centralized register file. However, this places additional responsibility on the compiler: the production of an efficient cluster assignment. In this thesis, we describe how we ported the GNU Compiler Collection (GCC), a popular free compiler, to a clustered version of the Embedded Vector Processor (EVP), a VLIW vector processor being developed at ST-Ericsson. The aim of this thesis project was to produce a prototype GCC back-end for the clustered EVP, and to benchmark it. In this report we describe our implementation in detail, presenting an approach that tackles the problem of clustering, commenting upon existing algorithms, choosing and improving upon one of them while designing a GCC RTL optimization pass for cluster assignment. We visually inspected our prototype for functional correctness, and benchmarked it against the original EVP design and the corresponding production compiler. 
Our measurements show a 27% speed-up in compute-intensive components of the EVP's W-CDMA workload.","GCC; cluster; assignment; VLIW; processor; compiler; optimization","en","master thesis","","","","","","","","2016-02-01","Electrical Engineering, Mathematics and Computer Science","Electrical Engineering","","Computer Engineering","","" "uuid:02468c77-5c64-4df8-9a24-1ed7ad9d1408","http://resolver.tudelft.nl/uuid:02468c77-5c64-4df8-9a24-1ed7ad9d1408","Optimization of Space Trajectories Including Multiple Gravity Assists and Deep Space Maneuvers","Musegaas, P.","Noomen, R. (mentor)","2013","The optimization of high-thrust interplanetary trajectories continues to draw attention. Especially when both Multiple Gravity Assists (MGAs) and Deep Space Maneuvers (DSMs) are included, the optimization is typically very difficult. The search space may be characterized by a large number of minima and is furthermore very sensitive to small deviations in the decision vector. Various options are available to model these high-thrust trajectories. The trajectory may be modeled using a simple MGA trajectory model as well as using models including DSMs. Both a position and a velocity formulation variant may be adopted, and unpowered or powered swing-bys may be used. These trajectory models were implemented to study the effect of both DSMs and powered swing-bys. The option to perform DSMs in particular proved to be vital for obtaining good trajectories. Powered swing-bys may also improve the efficiency of the trajectory. The velocity formulation variant proved to be much easier to optimize than the position formulation model. By analyzing the sensitivity and dependency of the various parameters in both models, a proposal for an even better trajectory model is made. Many options are also available for the optimization of these trajectories. Metaheuristics in particular have proven to be very successful in optimizing these trajectories. 
Various studies have shown the importance of proper tuning of the basic versions of these metaheuristics, which is, however, often overlooked. This study applied a very rigorous tuning scheme to find the optimal settings for DE, GA and PSO. The results clearly reveal the superiority of DE over the other methods. The tuned variants of DE outperformed other settings by one or more orders of magnitude, revealing the importance of this tuning scheme. The tuned variants of DE helped to improve a large number of instances in the Global Trajectory Optimization Problem (GTOP) database of ESA. The efficiency of these DE variants was also shown to be competitive with, and sometimes better than, that of the best algorithms encountered in the literature.","Deep Space Maneuver; optimization; GTOP; high-thrust; interplanetary; trajectory","en","master thesis","","","","","","","","2013-02-26","Aerospace Engineering","Astrodynamics and Space Missions","","","","" "uuid:f928487e-cf6e-4658-a415-7d7290ea83f2","http://resolver.tudelft.nl/uuid:f928487e-cf6e-4658-a415-7d7290ea83f2","Strategie voor meerjarig wegonderhoud op autosnelwegen","Backx, J.J.A.M.","Sanders, F.M. (mentor); Verlaan, J.G. (mentor); Zuurbier, F.S. (mentor)","2012","In this research a decision model is designed for the optimal planning of maintenance of road constructions on a particular section of a highway over a longer period. Optimal refers to minimizing maintenance costs while maintaining the required product quality and performance conditions. 
The optimization problem consists of assigning maintenance actions (A) to segments (S) over a planning horizon (T) equal to the contract period for multi-year road maintenance.","maintenance; optimization; planning","nl","master thesis","","","","","","","","2012-10-27","Civil Engineering and Geosciences","Transport & Planning","","Transport and Planning","","" "uuid:c032128b-6759-4434-88ab-67b9eeec4e0b","http://resolver.tudelft.nl/uuid:c032128b-6759-4434-88ab-67b9eeec4e0b","Dynamic in situ calibration of an instrumented treadmill for systems identification and parameter estimation","Amirtha, T.R.","Sloot, L. (mentor); De Groot, J. (mentor); De Vlugt, E. (mentor); Van Der Helm, F.C.T. (mentor)","2012","Existing re-calibration methods for instrumented treadmills have mainly been applied while the instrumented treadmill was in static operation, i.e. with the belts not running. The effect of re-calibrating during experimental operation, i.e. while the belts are running, on ground reaction force (GRF) and center of pressure (CoP) accuracy has not yet been studied, due to the difficulty of obtaining a range of test points across the treading area during experimental operation. Therefore, the effect of the dynamics of the treadmill’s moving parts on the re-calibration process is not known. In addition, the GRF and CoP accuracy requirements are not known for systems identification and parameter estimation (SIPE) experiments on instrumented treadmills. Here, a technique is described to comprehensively recalibrate a split-belt, instrumented treadmill, while it operates under experimental conditions, for SIPE of the lower extremity dynamics during gait. Re-calibration matrices are created with datasets that were generated under static and experimental treadmill operation and are assessed on validation datasets. No relationship was determined between the treadmill’s dynamics and the GRF and CoP errors. 
The dynamic re-calibration resulted in lower root-mean-square GRF and CoP errors than the static re-calibration did, and was faster to compute. The dynamic re-calibration matrix was additionally validated by performing SIPE of a load on the treadmill, which resulted in a relative error of 2%.","force platform; force plate; calibration; center of pressure; ground reaction force; gait; treadmill; accuracy; motion analysis; optimization; system identification; parameter estimation","en","master thesis","","","","","","","","2012-10-29","Mechanical, Maritime and Materials Engineering","BioMechanical Engineering","","Biomedical Engineering","","" "uuid:fcc290f8-cf60-44a4-be68-189f29a2fb82","http://resolver.tudelft.nl/uuid:fcc290f8-cf60-44a4-be68-189f29a2fb82","Estimates of extremes in the best of all possible worlds","Van Nooyen, R.R.P.; Kolechkina, A.G.","","2012","In applied hydrology the question of the probability of exceeding a certain value occurs regularly. Often it is in a context where extrapolation from a relatively short time series is needed. It is well known that in its simplest form extreme value theory applies to independent identically distributed random variables. It is also well known that more advanced theory allows for some degrees of correlation and that techniques for coping with trends are available. However, the problem of extrapolation remains. To isolate the effect of extrapolation we generate synthetic time series of length 20, 50 and 100 from known distributions to derive empirical distributions for the 1:100 and 1:1000 exceedance.","extremes; estimators; optimization; statistical distributions","en","conference paper","STAHY","","","","","","","","Civil Engineering and Geosciences","Water Management","","","","" "uuid:3b1c6432-cfbf-4fec-894b-9f6b870015f5","http://resolver.tudelft.nl/uuid:3b1c6432-cfbf-4fec-894b-9f6b870015f5","Wing Shape Multidisciplinary Design Optimization","Mariens, J.","Elham, A. 
(mentor)","2012","Multidisciplinary design optimizations have shown great benefits for aerospace applications in the past, especially in the last decades with the advent of high-speed computing. Still, computational time is a limiting factor, so the desire for models with a high level of fidelity cannot always be fulfilled. As a consequence, fidelity is often sacrificed in order to keep the computing time of the optimization within limits. There is always a compromise required to select proper tools for an optimization problem. In this final thesis work, the differences between existing weight modeling techniques are investigated. Secondly, the results of using different weight modeling techniques in the multidisciplinary design optimization of aircraft wings are compared. The aircraft maximum take-off weight was selected as the objective function. The wing configurations of a generic turboprop and turbofan passenger aircraft were considered for these optimizations. This should aid future studies of wing shapes in early design stages to select a proper weight prediction technique for a given case. A quasi-three-dimensional aerodynamic solver was developed to calculate the wing aerodynamic characteristics. Various statistical prediction methods (low level of fidelity) and a quasi-analytical method (medium level of fidelity) are used to estimate the structural wing weight. Furthermore, the optimal wing shape was found using a local optimization algorithm and is compared to the results found using a novel optimization algorithm to find the global optimum. The quasi-three-dimensional aerodynamic solver was validated using experimental data and other available aerodynamic tools. Compared to the results generated by other tools, the developed solver has a wider range of validity. Most important of all, it is up to 10 times faster and the results show good agreement with other data. Several test cases were used to prove the robustness and effectiveness of the global optimization algorithm. 
A comparison of the different weight estimation methods indicated that the lower-fidelity methods are insensitive to some wing parameters. The results of the optimizations showed that the optimum wing shape is affected by the weight modeling technique used. The use of different weight prediction methods strongly affects the computational times and the convergence history. The global optimization algorithm was able to find the global solution for the wing shape optimization. However, the search for the global optimum comes at a cost: the computational time is significantly larger.","wing; shape; optimization; quasi-3D; multidisciplinary; MDO; locsmooth; wing weight prediction; EMWET","en","master thesis","","","","","","","","2012-08-31","Aerospace Engineering","Flight Performance and Propulsion","","Aircraft Design","","" "uuid:93af1749-0b97-416a-ba27-907ae4921a7f","http://resolver.tudelft.nl/uuid:93af1749-0b97-416a-ba27-907ae4921a7f","Using particle packing technology for sustainable concrete mixture design","Fennis, S.A.A.M.; Walraven, J.C.","","2012","The annual production of Portland cement, estimated at 3.4 billion tons in 2011, is responsible for about 7% of the total worldwide CO2 emissions. To reduce this environmental impact it is important to use innovative technologies for the design of concrete structures and mixtures. In this paper, it is shown how particle packing technology can be used to reduce the amount of cement in concrete by concrete mixture optimization, resulting in more sustainable concrete. First, three different methods to determine the particle distribution of a mixture are presented: optimization curves, particle packing models and discrete element modelling. The advantage of using analytical particle packing models is presented based on relations between packing density, water demand and strength. 
Experiments on ecological concrete demonstrate how effectively particle packing technology can be used to reduce the cement content in concrete. Three concrete mixtures with low cement content were developed and the compressive strength, tensile strength, modulus of elasticity, shrinkage, creep and electrical resistance were determined. By using particle packing technology in concrete mixture optimization, it is possible to design concrete in which the cement content is reduced by more than 50% and the CO2 emission of concrete is reduced by 25%.","aggregate; cement spacing; concrete; flowability; particle packing; optimization","en","journal article","Heron","","","","","","","","Civil Engineering and Geosciences","Structural Engineering","","","","" "uuid:9c81ea64-37a4-4781-9247-2d42cb439e19","http://resolver.tudelft.nl/uuid:9c81ea64-37a4-4781-9247-2d42cb439e19","A Multidisciplinary Optimization of Composite Space Enclosures","Koerselman, J.R.","Vos, R. (mentor); Brander, T. (mentor)","2012","A design methodology for composite space enclosures was generated. As a result, a panel of an electronics housing structure, part of a general satellite traversing both GEO and LEO, was designed and optimized. A mass saving of 18% was achieved over a conventional aluminum panel, while ensuring structural integrity under acceleration loads, avoiding vibrational resonance with other satellite components, allowing electrical conductance, providing sufficient radiation protection from the harsh space environment and at the same time ensuring manufacturability. The optimized structure was composed of layers of carbon fiber composite and tungsten foils. For radiation purposes the layers were placed asymmetrically around the geometric midplane, resulting in shape distortions due to residual thermal stresses from the curing process. These shape distortions were kept to a minimum. 
The validity of the theoretical models was assessed by means of testing for shape distortions, radiation attenuation, bonding strength and electrical resistivity. The bonding of the tungsten with the prepreg material was found to be problematic, but an improvement in lap shear strength was found with respect to methods proposed in the literature. A chemical etching surface treatment with a reduced etching time of one minute was proposed for the tungsten foils.","multidisciplinary; optimization; composite; space; enclosure; tungsten; surface treatment; SIDER; induced shape distortions","en","master thesis","","","","","","","","","Aerospace Engineering","Aerospace Design, Integration & Operations","","Design, Integration and Operations of Aircraft and Rotorcraft","","" "uuid:3dacc24d-cf41-4c13-8e1e-10f11a1b6f23","http://resolver.tudelft.nl/uuid:3dacc24d-cf41-4c13-8e1e-10f11a1b6f23","Sequential robust optimization of a V-bending process using numerical simulations","Wiebenga, J.H.; Van den Boorgaard, A.H.; Klaseboer, G.","","2012","The coupling of finite element simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a generally applicable strategy for modeling and efficiently solving robust optimization problems based on time consuming simulations. Noise variables and their effect on the responses are taken into account explicitly. The robust optimization strategy consists of four main stages: modeling, sensitivity analysis, robust optimization and sequential robust optimization. Use is made of a metamodel-based optimization approach to couple the computationally expensive finite element simulations with the robust optimization procedure. 
The initial metamodel approximation will only serve to find a first estimate of the robust optimum. Sequential optimization steps are subsequently applied to efficiently increase the accuracy of the response prediction in regions of interest containing the optimal robust design. The applicability of the proposed robust optimization strategy is demonstrated by the sequential robust optimization of an analytical test function and an industrial V-bending process. For the industrial application, several production trial runs have been performed to investigate and validate the robustness of the production process. For both applications, it is shown that the robust optimization strategy accounts for the effect of different sources of uncertainty on the process responses in a very efficient manner. Moreover, application of the methodology to the industrial V-bending process results in valuable process insights and an improved robust process design.","metal forming processes; finite element method; optimization; uncertainty; robustness; sequential optimization","en","journal article","Springer-Verlag","","","","","","","","Mechanical, Maritime and Materials Engineering","Materials Innovation Institute","","","","" "uuid:aa419ba5-3d31-4d73-adf3-c79870deccc7","http://resolver.tudelft.nl/uuid:aa419ba5-3d31-4d73-adf3-c79870deccc7","Optimal Adaptive Policymaking under Deep Uncertainty? Yes we can!","Hamarat, C.; Kwakkel, J.H.; Pruyt, E.","","2012","Uncertainty manifests itself in almost every aspect of decision making. Adaptive and flexible policy design becomes crucial under uncertainty. An adaptive policy is designed to be flexible and can be adapted over time to changing circumstances and unforeseeable surprises. A crucial part of an adaptive policy is the monitoring system and associated pre-specified actions to be taken in response to how the future unfolds. 
However, the adaptive policymaking literature remains silent on how to design this monitoring system and how to specify appropriate values that will trigger the pre-specified responses. These trigger values have to be chosen such that the resulting adaptive plan is robust and flexible to surprises in the future. Actions should be triggered neither too early nor too late. One possible family of techniques for specifying triggers is optimization. Trigger values would then be the values that maximize the extent of goal achievement across a large ensemble of scenarios. This ensemble of scenarios is generated using Exploratory Modeling and Analysis. In this paper, we show how optimization can be useful for the specification of trigger values. A Genetic Algorithm is used because of its flexibility and efficiency in complex and irregular solution spaces. The proposed approach is illustrated for the transition of the energy system towards more sustainable functioning, which requires effective dynamic adaptive policy design. The main aim of this paper is to show the contribution of optimization to adaptive policy design.","adaptive policymaking; exploratory modeling and analysis; optimization","en","conference paper","","","","","","","","","Technology, Policy and Management","Multi Actor Systems","","","","" "uuid:a53f5bbd-2640-41cb-982d-b05a6fff9166","http://resolver.tudelft.nl/uuid:a53f5bbd-2640-41cb-982d-b05a6fff9166","Manifold mapping optimization with or without true gradients","Delinchant, B.; Lahaye, D.; Wurtz, F.; Coulomb, J.L.","","2012","This paper deals with the Space Mapping optimization algorithms in general and with the Manifold Mapping technique in particular. The idea of such algorithms is to optimize a model with a minimum number of objective function evaluations, by using a less accurate but faster model. 
In this optimization procedure, the fine and coarse models interact at each iteration, adjusting themselves in order to converge to the real optimum. The Manifold Mapping technique mathematically guarantees this convergence but requires gradients of both the fine and the coarse model. Approximated gradients can be used in some cases but are subject to divergence. True gradients can be obtained for many numerical models using adjoint techniques, symbolic or automatic differentiation. In this context, we have tested several Manifold Mapping variants and compared their convergence in the case of a real magnetic device optimization.","space mapping; manifold mapping; optimization; surrogate model; gradients; symbolic derivation; automatic differentiation","en","report","Delft University of Technology, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft Institute of Applied Mathematics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","","" "uuid:9a018e13-f29e-4597-8870-6f8ab2fa9787","http://resolver.tudelft.nl/uuid:9a018e13-f29e-4597-8870-6f8ab2fa9787","Multi-Objective Optimization for Urban Drainage Rehabilitation","Barreto Cordero, W.J.","Price, R.K. (promotor); Solomatine, D.P. (promotor)","2012","Flooding in urbanized areas has become a very important issue around the world. The level of service (or performance) of urban drainage systems (UDS) degrades over time for a number of reasons. In order to maintain an acceptable performance of UDS, early rehabilitation plans must be developed and implemented. In developing countries the situation is serious: little is invested and there are smaller funds for rehabilitation each year. The allocation of such funds must be “optimal” in providing value for money. However, this task is not easy to achieve due to the multicriteria nature of the rehabilitation process, taking into account technical, environmental and social interests. 
Most of the time these are conflicting, which makes it a highly demanding task. The present book introduces a framework for multicriteria decision making for the rehabilitation of urban drainage systems, and focuses on several aspects such as the improvement of the performance of the multicriteria optimization through the inclusion of new features in the algorithms and the proper selection of performance criteria. The use of Genetic Algorithms, parallelization and applications in countries like Brazil, Colombia and Venezuela are treated in this book.","multi-objective; urban drainage; optimization; parallel computing; genetic algorithms","en","doctoral thesis","CRC Press/Balkema","","","","","","","","Civil Engineering and Geosciences","Water Management","","","","" "uuid:b4aee571-0489-42ff-ab55-d74e980f724a","http://resolver.tudelft.nl/uuid:b4aee571-0489-42ff-ab55-d74e980f724a","Shape Parameterization in Aircraft Design: A Novel Method, Based on B-Splines","Straathof, M.H.","Van Tooren, M.J.L. (promotor)","2012","This thesis introduces a new parameterization technique based on the Class-Shape-Transformation (CST) method. The new technique consists of an extension to the CST method in the form of a refinement function based on B-splines. This Class-Shape-Refinement-Transformation (CSRT) method has the same advantages as the original CST method, while also allowing for local deformations in a shape. A number of test cases were performed using two different design frameworks of low and high fidelity. The low fidelity framework was based on a commercial panel method code and coupled to various optimization algorithms. 
The high fidelity framework used an in-house Euler code and employed adjoint optimization.","shape; parameterization; aircraft; design; B-splines; Class-Shape-Refinement-Transformation; adjoint; euler; optimization","en","doctoral thesis","","","","","","","","2012-02-03","Aerospace Engineering","FPP","","","","" "uuid:65db30d9-206c-4661-abd2-c645482a8e2d","http://resolver.tudelft.nl/uuid:65db30d9-206c-4661-abd2-c645482a8e2d","Binaural Model-Based Speech Intelligibility Enhancement and Assessment in Hearing Aids","Schlesinger, A.","Gisolf, D. (promotor); Boone, M.M. (promotor)","2012","The enhancement of speech intelligibility in noise is still the main subject in hearing aid research. Building on the advanced results obtained with the hearing glasses, in the present research speech intelligibility is further improved by the application of binaural post-filters. The functionalities of these filters are related to the principles of auditory scene analysis. A statistical analysis of binaural cues in noise at the output of different hearing aids, the utilization of a Bayesian classifier in the source separation process and an evolutionary optimization against binaural models of speech intelligibility provide a comprehensive understanding of the utilization of binaural post-filters in adverse environments. 
As listening ease and a fair degree of speech quality are mandatory in speech enhancement, tradeoffs between speech intelligibility and quality were studied in terms of the preservation of natural binaural cues and the suppression of musical noise.","CASA; STI; SII; binaural; genetic algorithm; optimization; Bayesian classification","en","doctoral thesis","TU Delft","","","","","","","2011-12-23","Applied Sciences","Imaging Science and Technology","","","","" "uuid:856057f2-24d4-4078-aa58-e0941b5b21c1","http://resolver.tudelft.nl/uuid:856057f2-24d4-4078-aa58-e0941b5b21c1","Numerical Optimization of Hydraulic Fracture Stage Placement in a Gas Shale Reservoir","Holt, S.","Jansen, J.D. (mentor); Leeuwenburgh, O. (mentor); Van Bergen, F. (mentor)","2011","The upstream oil and gas industry focuses increasingly on unconventional gas resources to maintain the level of its hydrocarbon reserves. To unlock the full potential of gas shale reservoirs, horizontal wells are drilled and active stimulation of the reservoirs, in the form of multi-stage hydraulic fracturing, is performed. This new technique has radically changed the energy future of the United States and is on the forefront of changing it in Europe as well. The hydraulic fracturing treatment is a costly, resource-intensive and potentially environmentally dangerous procedure. The objective of this thesis is to create a realistic and versatile gas shale reservoir model and to optimize the placement and number of hydraulic fracture stages along a horizontal well bore, thereby maximizing the production of gas while minimizing the amount of money that is spent to do so. On the basis of the computationally efficient ensemble based optimization of vertical well placement, an idea coined and investigated by Leeuwenburgh et al. (2010), it is postulated that numerical optimization can aid in finding the optimal placement of hydraulic fracture stages along a horizontal well bore in an equally computationally efficient manner. 
Three gradient-based optimization algorithms that work with continuous variables (Ensemble based Optimization: EnOpt (Chen, 2008), Simultaneous Perturbation Stochastic Approximation: SPSA (Spall, 1998) and finite difference gradient estimation) are used to approximate the gradient. Because hydraulic fracture stage locations in a reservoir simulator are commonly treated as discrete variables (well grid block indices), standard implementations of gradient-based optimization are not applicable to optimal hydraulic fracture stage placement. We propose three distinct placement parameterization methods to overcome the inherent continuous-to-discrete variable conversion issues. Following theoretical arguments about the strengths and weaknesses of the proposed optimization routines, both single-well and multiple-well scenario experiments are performed. Good results are obtained from the various experiments, which favor an optimization with the EnOpt algorithm in combination with the fracture stage interval placement method.","optimization; hydraulic fracturing; multi-stage; hydraulic fracture; ensemble based optimization; horizontal well; well placement; stimulation; gas shale reservoir; shale gas","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","Petroleum Engineering","","" "uuid:dfaae28f-c2dd-4bdc-82d6-a1c1aa98fa26","http://resolver.tudelft.nl/uuid:dfaae28f-c2dd-4bdc-82d6-a1c1aa98fa26","Predicting Storm Surges: Chaos, Computational Intelligence, Data Assimilation, Ensembles","Siek, M.B.L.A.","Solomatine, D.P. (promotor)","2011","Accurate predictions of storm surge are of importance in many coastal areas. This book focuses on data-driven modelling using methods of nonlinear dynamics and chaos theory for predicting storm surges. 
A number of new enhancements are presented: phase space dimensionality reduction, handling of incomplete time series, phase error correction, finding true neighbours, optimization of the chaotic model, data assimilation and multi-model ensembles. These were tested on case studies in the North Sea and Caribbean Sea. Chaotic models appear to be accurate and reliable short- and mid-term predictors of storm surges, aimed at supporting decision-makers in flood prediction and ship navigation.","ocean wave prediction; nonlinear dynamics and chaos theory; neural networks; optimization; dimensionality reduction; phase error correction; incomplete time series; multi-model ensemble prediction; data-driven modelling; computational intelligence; hydroinformatics","en","doctoral thesis","CRC Press/Balkema","","","","","","","","Civil Engineering and Geosciences","Water Management","","","","" "uuid:06252984-bcfd-49a5-9416-69b948bcc0ff","http://resolver.tudelft.nl/uuid:06252984-bcfd-49a5-9416-69b948bcc0ff","An optimization model for a Train-Free-Period planning for ProRail based on the maintenance needs of the Dutch railway infrastructure","Jenema, A.R.","Aardal, K.I. 
(mentor)","2011","The thesis reports on the Dutch railway infrastructure manager ProRail, on the literature study, on the Top 10 of maintenance activities that determine the maintenance schedule, on the development of the optimization model that finds such a maintenance schedule, and finally on the results and conclusions.","ProRail; optimization; maintenance; railway infrastructure","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Applied mathematics","","","","" "uuid:e8f7fdb9-d209-45be-9e03-13da46e386bc","http://resolver.tudelft.nl/uuid:e8f7fdb9-d209-45be-9e03-13da46e386bc","Event-based progression detection strategies using scanning laser polarimetry images of the human retina","Vermeer, K.A.; Lo, B.; Zhou, Q.; Vos, F.M.; Vossepoel, A.M.; Lemij, H.G.","","2011","Monitoring glaucoma patients and ensuring optimal treatment requires accurate and precise detection of progression. Many glaucomatous progression detection strategies may be formulated for Scanning Laser Polarimetry (SLP) data of the local nerve fiber thickness. In this paper, several strategies, all based on repeated GDx VCC SLP measurements, are tested to identify the optimal one for clinical use. The parameters of the methods were adapted to yield a set specificity of 97.5% on real image series. For a fixed sensitivity of 90%, the minimally detectable loss was subsequently determined for both localized and diffuse loss. Due to the large size of the required data set, a previously described simulation method was used for assessing the minimally detectable loss. The optimal strategy was identified and was based on two baseline visits and two follow-up visits, requiring two-out-of-four positive tests. 
Its associated minimally detectable loss was 5–12 μm, depending on the reproducibility of the measurements.","progression detection; simulation; glaucoma; polarimetry; optimization; image processing","en","journal article","Elsevier","","","","","","","","Applied Sciences","IST/Imaging Science and Technology","","","","" "uuid:de8519a2-0f49-471f-9b0f-e10f7df0132b","http://resolver.tudelft.nl/uuid:de8519a2-0f49-471f-9b0f-e10f7df0132b","Project Risk Management Practices: How can the current Project Risk Management practices surrounding medium construction projects be optimized?","Souffront, L.F.W.M.","Van Beers, C. (mentor); Filippov, S. (mentor); Veeneman, W. (mentor)","2011","Multiple instruments and procedures are used within the different project management areas in order to execute projects in an efficient and controllable way. Project Risk Management (PRM) has over the past years become a crucial part of project management practice and is seen by many practitioners as a key factor in moving towards more successful projects. More and more organizations are adopting this practice in an effort to achieve better strategic alignment, increase project success, and optimize the utilization of their resources. The research aims at generating new insights in the field of project risk management. This was done by investigating the main tenets of risk management theory and by comparing them to empirical data gathered through a case study. 
Several short-, medium-, and long-term recommendations were made based on a Risk Maturity Model.","medium project; project risk management; construction sector; optimization; case study","en","master thesis","","","","","","","2011-08-19","Technology, Policy and Management","Technology, Strategy, & Entrepreneurship","","Management of Technology","","" "uuid:9da72fb0-25d7-46ff-a58f-8c490524f297","http://resolver.tudelft.nl/uuid:9da72fb0-25d7-46ff-a58f-8c490524f297","Statistical analysis of newspaper headlines with optimization","Jacobs, A.G.M.M.","Vallentin, F. (mentor)","2011","","sparse; optimization; newspapers","en","bachelor thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Applied Mathematics","","","","" "uuid:bdda7a33-1073-4a38-aabb-124367b3a3e1","http://resolver.tudelft.nl/uuid:bdda7a33-1073-4a38-aabb-124367b3a3e1","Robust ensemble based multi-objective production optimization: Application to smart wells.","Fonseca, R.M.","Jansen, J.D. (mentor); Leeuwenburgh, O. (mentor)","2011","Recent improvements in dynamic reservoir modeling have led to an increase in the application of model-based optimization of hydrocarbon bearing reservoirs. Numerous studies and articles have indicated the possibility of improving reservoir management using these dynamic models, coupled with methods to reduce uncertainties in the static models, to optimize reservoir performance. These studies have focused on maximizing the life-cycle performance of the project. Thus life cycle optimization is essentially a single-objective optimization problem. In reality, short-term targets usually drive operational decisions. The impact of short-term targets should be included in the optimization to achieve a more realistic solution. The process of optimizing these short-term targets constrained to life cycle targets is a form of multi-objective optimization. 
Several methods have been suggested to achieve multi-objective reservoir flooding optimization (Van Essen et al. 2011). These methods have been implemented with the adjoint formulation. This thesis proposes the use of an ensemble-based optimization technique (EnOpt) for multi-objective optimization. The optimization of smart wells or production schedules (inflow control valve (ICV) settings) is the objective of this work. We also propose variations to the existing multi-objective algorithms suggested by Van Essen et al. (2011). We propose the use of the BFGS algorithm to improve the computational efficiency. Undiscounted Net Present Value (NPV) and highly discounted NPV are the long-term and short-term objective functions used in this thesis. We also propose an extension of the optimization functionality to better cope with model uncertainties. This robust ensemble-based multi-objective production optimization framework has been applied and tested on a synthetic reservoir model. In our test cases, the ensemble-based multi-objective optimization methods achieved a 14.2% increase in the secondary objective at the cost of only a minor decrease of 0.2–0.5% in the primary objective.","optimization","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Geotechnology","","Petroleum Engineering","","" "uuid:1de4f520-efc7-42d7-8266-b388451f4d14","http://resolver.tudelft.nl/uuid:1de4f520-efc7-42d7-8266-b388451f4d14","Stochastic Open Pit Design with a Network Flow Algorithm: Application at Escondida Norte, Chile","Van Eldert, J.","De Ruiter, J.J. (mentor); Dimitrakopoulos, R.G. (mentor)","2011","In the optimization of open pit mine design, the Lerchs-Grossmann algorithm is the industry standard, although network flow algorithms are also well suited, efficient, and well known. 
The stochastic version of the conventional (deterministic) network flow algorithm is based on the use of multiple simulated realizations of the ore deposit, thus accounting for geological uncertainty. In comparison, the conventional pit optimization methods use only one estimated or average-type model of the deposit and assume it represents the exact deposit in the ground. The use of multiple scenarios results in the ability to generate risk profiles in terms of both grade and material types for pit designs and production schedules. This thesis focuses on the application of the stochastic maximum flow algorithm for multiple ore processing destinations at the Escondida Norte copper mine, Chile. The case study shows the optimal pushback layout minimising geological risk during the life-of-mine. The limitation of this method is that it uses only a part of the local joint uncertainty of the block grades and material types. However, it can be extended to account for simulated commodity price forecasts as well as discounting.","optimization; open pit; mining; stochastic; maximum flow; mine design","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Department of Geotechnology","","Section Resources Engineering","","" "uuid:25446e4d-2626-49a0-8b0e-d9889df343b5","http://resolver.tudelft.nl/uuid:25446e4d-2626-49a0-8b0e-d9889df343b5","Generation costs estimation in the Spanish Mainland Power System from 2011 to 2020","Crisostomo Ramirez, J.D.","Ramos, A. (mentor)","2011","The electricity sector in Spain has been evolving steadily at an ascending rate since the liberalization in the late 90’s. Demand was expected to keep growing but it suddenly dropped in 2009, creating an imbalance in the system in terms of demand and available capacity. In addition, the increasing share of renewable energy has imposed additional pressure on the hydro and thermal technologies, leaving less residual demand for such technologies. 
The current and expected scenarios in the Spanish mainland power system seem to be harder on the ordinary regime technologies in the coming years. A Royal Decree has just been issued to support the domestic coal mines, imposing quotas for coal units using such coal. This work has the purpose of gathering all the regulatory and economic constraints and applying them to estimate the generation costs for the following ten years. The approach to such an extensive task is to apply a regulated cost structure based on fixed and variable costs, already proven in a previous work, as a reference model to contrast the system costs in the mainland power system in Spain. The generation dispatch is done using a traditional unit commitment approach based on least cost dispatch, taking into consideration the different constraints to reflect the most plausible behavior of market players. The results are consistent with the costs associated with the different technologies. Nuclear units are base load during the whole year and CCGT is the technology that balances the system because of demand-generation variations. The most stable technology in terms of cost and production is nuclear, while the technology with the lowest costs is hydro. Coal and CCGT technologies appear to be the most expensive and become the marginal technologies. Regarding the evolution of the generation mix, thermal units are decommissioned because of aging and the new Industrial Emissions Directive issued by the EU. In addition, an assumption was made about what would in reality happen when the existing thermal units are no longer dispatched and the owners decide on closure. Also included were the new hydro power plants either under construction or planned to be commissioned, and the additional MW needed as CCGT units in order to keep security of supply in the system. 
The latter was done mainly to keep the Coverage Index at the minimum level required by the system operator.","regulated cost structure; optimization","en","master thesis","","","","","","","Campus only","2011-08-10","Technology, Policy and Management","Modelling","","MSc. Engineering and Policy Analysis - EMIN","","" "uuid:be0f5746-ff05-42a3-805a-f4a72fef4cc6","http://resolver.tudelft.nl/uuid:be0f5746-ff05-42a3-805a-f4a72fef4cc6","Applying the shuffled frog-leaping algorithm to improve scheduling of construction projects with activity splitting allowed","Tavakolan, M.T.; Ashuri, B.; Chiara, N.","","2011","In a situation where contractors compete to finish a given project with the least duration and cost, the ability to improve project quality properties seems essential for project managers. Evolutionary Algorithms (EAs) have been applied as suitable algorithms for multi-objective Time-Cost trade-off Optimization (TCO) and Time-Cost-Resource Optimization (TCRO) in the past few decades; however, improving on EAs, the Shuffled Frog Leaping Algorithm (SFLA) has been introduced as an algorithm capable of achieving a better solution with faster convergence. Furthermore, allowing splitting in the execution of activities can bring models closer to approximating real projects. One example has been used to demonstrate the impact of SFLA and splitting on the results of the model and to compare them with previous algorithms. 
The current research shows that SFLA improves the final results and that splitting allows the model to find suitable solutions.","optimization; multi-objective SFLA; splitting; leveling; construction management","en","conference paper","","","","","","","","","","","","","","" "uuid:d586ee6e-4815-4561-87d9-6ae00bdb739e","http://resolver.tudelft.nl/uuid:d586ee6e-4815-4561-87d9-6ae00bdb739e","Multidisciplinary Design Optimization in the Conceptual Design Phase: Creating a Conceptual Design of the Blended Wing-Body with the BLISS Optimization Strategy","Hendrich, T.J.M.","Schroijen, M.J.T. (mentor); Bijl, H. (mentor); Visser, H.G. (mentor); La Rocca, G. (mentor)","2011","Traditionally, the aircraft design process is divided into three phases: conceptual, preliminary and detailed design. In each subsequent phase, the fidelity of the analysis tools increases and more and more details of the design geometry are frozen. In each phase a number of design variants is generated and fully analyzed with the tools available, after which trade studies between important design variables lead to the choice of the best variant. In the past, this approach has shown good results for 'Kansas city' type aircraft, which can be decomposed into different airframe parts with distinct functions, such as wings, tail, engines and fuselage. Each part needs to fulfill its own set of requirements and can be designed and optimized relatively independently from the others. For the new generation of large transport aircraft, such as the Blended Wing Body (BWB), the traditional design approach is less suited. The Blended Wing-Body - studied by Boeing and many others as a future long-haul transport aircraft concept - is characterized by an integrated airframe, in which the aforementioned parts can no longer be clearly distinguished. The Blended Wing-Body features many and strong interactions between the various design disciplines and airframe subparts. 
Under the traditional design doctrine, these interactions greatly increase the required design time. Over the past years, Multidisciplinary Design Optimization (MDO) has been considered as an alternative. Nowadays, in industry the MDO approach is mainly used in the detail design phase and for isolated, well-defined design cases. The goal of this project is to create an MDO framework which can aid the designer in optimizing entire aircraft designs in the conceptual phase. This framework is shaped to the Bi-level Integrated System Synthesis (BLISS) strategy. This strategy splits the optimization into two levels: a disciplinary level and a system level. Before optimization, BLISS performs a sensitivity analysis to obtain linearized global sensitivities of the design objective and constraints to each of the design variables. Validation is done using three cases: two sample problems from literature with known solutions, and the optimization of a simplified Boeing 747 wing for maximum aerodynamic efficiency using an aerodynamic and structural model. All three cases were optimized successfully. Finally, as a proof-of-concept for MDO, the framework is required to find a conceptual design of the Blended Wing-Body with minimum structural weight and minimum drag across a given mission. Meanwhile, structural, aerodynamic and performance constraints had to be satisfied. The problem features 5 disciplines, 93 constraints, 110 states and in total 92 design variables. Again, BLISS could converge to a solution, requiring 4 hours per cycle. By tuning the design variables, BLISS managed to converge to a final design in 22 cycles. The final design satisfies all constraints, except for the large local Mach number on the outboard wing. Similar problems were identified in several other Blended Wing-Body studies. 
The results support BLISS as a viable candidate method for introducing MDO into conceptual design practice.","MDO; multidisciplinary; BLISS; Blended Wing Body; design; optimization","en","master thesis","","","","","","","","","Aerospace Engineering","Aerospace Design, Integration & Operations","","","","" "uuid:8d7290d3-a903-4cfe-8c12-0387b94a192e","http://resolver.tudelft.nl/uuid:8d7290d3-a903-4cfe-8c12-0387b94a192e","Information Theory for Risk-based Water System Operation","Weijs, S.V.","Van de Giesen, N.C. (promotor)","2011","Operational management of water resources needs predictions of the future behavior of water systems, to anticipate shortage or excess of water in a timely manner. Because the natural systems that are part of the hydrological cycle are complex, the predictions inevitably are subject to considerable uncertainty. Still, definitive decisions about e.g. hydropower reservoir releases or polder pump flows have to be made looking ahead into the uncertain future. This demands a risk-based approach, in which, ideally, all possible future events should be considered, along with their probabilities, which represent the information and uncertainty available at the time of decision. The thesis deals with water, but the flows studied are mostly those of information. Like the flow of water, information flows obey certain fundamental laws. These are the laws of Information Theory, which also provide guidelines for developing models, handling data, and designing statistical procedures to make predictions and decisions. The information-theoretical perspective used in the thesis leads to the conclusion that predictions should necessarily be probabilistic and should be evaluated using a relative entropy measure, of which an intuitive decomposition into three components is presented. 
Other chapters in the thesis deal with the use of model predictive control and stochastic dynamic programming for operational water management, the time-dynamics of information, the generation of weighted ensemble forecasts that balance uncertainty and information, and a perspective on data compression as philosophy of science. Recommendations for practice and further research indicate that entropy has a bright future, not only as an ever-increasing thermodynamic measure, but also as an information-theoretical measure of uncertainty that is useful in any field where predictions and decisions have to be made in a context of complex and largely unobservable systems.","information theory; operational water management; risk; probabilistic forecasts; optimization; entropy; control; water; hydrology; water resources management","en","doctoral thesis","VSSD","","","","","","","2011-03-29","Civil Engineering and Geosciences","Watermanagement","","","","" "uuid:f97a2a79-a3bf-4bcc-b0a6-819e70ccd62b","http://resolver.tudelft.nl/uuid:f97a2a79-a3bf-4bcc-b0a6-819e70ccd62b","A new suit for the IJsselmeer: Possibilities for facing the future needs of the lake by means of an optimized dynamic target water level","Talsma, J.","Van de Giesen, N.C. (mentor)","2011","Introduction and problem definition The IJsselmeer is located in the center of the Netherlands. Because of its relevance for the Dutch economy and society, it is often referred to as the Wet Heart of the country. Looking into the future, the IJsselmeer faces climate threats. Wetter winters will bring more water into the system, while sea level rise will reduce the gravity discharge to the Waddenzee. This will generate safety issues. On the other hand, summers will be drier, putting the satisfaction of water demand in danger. 
Research approach and research question The goal of the research is to define for the IJsselmeer a dynamic target water level, variable through the whole year, by means of an optimization approach. The optimization uses a single objective function considering dike safety and water demand. Such an approach has been chosen because it follows a different path than the ones mainly used so far to tackle the issue. When management measures alone are not enough to define a climate-proof IJsselmeer, extra measures are taken into consideration: a pumping station at the Afsluitdijk and early storage in March. The main research question asks for an evaluation of the optimization methodology used to define efficient alternatives for the IJsselmeer. The sub-question requires the assessment of the flexibility of the IJsselmeer towards a climate-proof system, and the definition of extra measures, when needed. Methodology The definition of the optimum measures is achieved in several steps. Firstly, the objective of the problem owner is defined. The Dienst IJsselmeergebied is the only problem owner. Its interests are safety and the satisfaction of water demand. Then indicators are derived from the objectives and merged into the objective function. Classes of measures are selected, and a model of the system is designed for their evaluation. Finally, the optimization problems are defined in order to design the optimum alternatives. Results A different planning of the target water level alone is not able to satisfy the needs of safety and water demand in the long term. As it is now, the IJsselmeer is flexible in the short term, but not enough to accommodate the impacts over longer horizons: extra measures are needed in order to define a climate-proof system in 2050 and 2100. A pumping station at the Afsluitdijk is an effective measure to guarantee safety for all the scenarios. 
Early storage in March is effective for the medium horizon (2050) but needs high target water levels throughout the summer for the long term (2100). This might generate safety issues. Even though applied to a simplified case, the optimization methodology manages to define a realistic picture of the flexibility of the IJsselmeer, and retrieves efficient options for possible future strategies. For these reasons, the present research can be considered a successful implementation of an optimization approach for the IJsselmeer. Conclusions and recommendations For the short term it is recommended to use the flexibility of the system, implementing the changes in summer target water levels which would allow fuller satisfaction of water demand. For the medium/long term, options for early storage need to be investigated together with the summer target water levels they require. This would probably require reinforcement of the dikes. Options for safety can then be defined for the new, reinforced system, considering combinations of a pumping station and raising of the dikes. A more extensive and detailed optimization tool should be realized for the IJsselmeer and applied to the definition of the measures above. In particular it is recommended to use a multi-objective analysis and to include costs in the definition of the indicators.","optimization; dynamic target water level","en","master thesis","","","","","","","","2011-05-04","Civil Engineering and Geosciences","Watermanagement","","Water Resources","","" "uuid:58f4d3c3-0a38-4640-aded-51d7bca2396e","http://resolver.tudelft.nl/uuid:58f4d3c3-0a38-4640-aded-51d7bca2396e","Analysis of near-optimal evacuation instructions","Huibregtse, O.L.; Bliemer, M.C.J.; Hoogendoorn, S.P.","","2010","In this paper, approximations of optimal evacuation instructions are analyzed. The instructions, consisting of a departure time, a destination, and a route, are for the evacuation by car of the population of a region threatened by a hazard. 
An optimization method presented in earlier research is applied to three different hazard scenarios, resulting in an instruction set for each scenario. These instruction sets differ because of the network degeneration caused by the different hazard scenarios. Analysis of the network occupancy during the evacuations resulting from the instruction sets shows that at least 87%, 90%, and 87% of the capacity is used in the respective scenarios for the period in which the effect of the network degeneration is relatively small. Although the results are logical, no clear patterns are perceptible in the instructions leading to this network occupancy. This endorses the viewpoint of the earlier paper, namely, that it is useful to apply an optimization method to create evacuation instructions instead of applying instructions set up by straightforward rules (like evacuating to the nearest destination). Furthermore, it shows the efficiency of this specific optimization method.","evacuation; instructions; optimization","en","journal article","Elsevier","","","","","","","","Civil Engineering and Geosciences","Transport and Planning","","","","" "uuid:1c5639e7-f7ee-4b9b-b15a-074039906860","http://resolver.tudelft.nl/uuid:1c5639e7-f7ee-4b9b-b15a-074039906860","The Simulation-based Multi-objective Evolutionary OptimizatioN (SIMEON) Framework","Halim, R.A.","Verbraeck, A. (mentor); Seck, M.D. (mentor); Cunningham, S. (mentor); Van Houten, S.P. (mentor)","2010","A powerful combination of simulation and optimization has been successfully applied to solve real-world decision making problems (Fu et al., 2000; Fu, Glover, & April, 2005). Unfortunately, there are scientific and application problems with this method. Firstly, there is no transparent and formal structure to define the integration between simulation and optimization. Secondly, there are challenges in ensuring a proper balance between the various desired features of the simulation-based optimization method (i.e. 
generality, efficiency, high-dimensionality and transparency) (Fu, 2002). This research provides two contributions to the problems above by providing: 1) the design of the framework that addresses the knowledge gap above; 2) the implementation of the framework that fulfills the aforementioned features in Java. The proposed framework is developed based on Zeigler’s modeling and simulation framework and the phases of an optimization study in operations research. The test and evaluation show that the desired features are successfully satisfied.","framework; simulation; multi-objective; evolutionary; optimization","en","master thesis","","","","","","","","","Technology, Policy and Management","Systems Engineering, Policy Analysis, and Management (SEPAM)","","Systems Engineering","","" "uuid:2dbcdb3d-0606-4707-b7ba-6e7ceafd549b","http://resolver.tudelft.nl/uuid:2dbcdb3d-0606-4707-b7ba-6e7ceafd549b","Aircraft Fuselage Design Study: Parametric Modeling, Structural Analysis, Material Evaluation and Optimization for Aircraft Fuselage","Şen, I.","Alderliesten, R.C. (mentor); Benedictus, R. (mentor); Rans, C.D. (mentor); Neelis, B.M. (mentor)","2010","The strong search for lightweight materials has become a trend in the aerospace industry. Aircraft manufacturers are responding to this trend and new aerospace materials are introduced to build lighter aircraft. However, material manufacturers, like Tata Steel, are unfamiliar with the determination of running loads and the behavior of materials in fuselage structures. Therefore, an evaluation tool is needed for determining the running loads and evaluating the performance of new materials. This will give material manufacturers better insight into what properties and performance are specifically needed for materials in aircraft structures. 
The goal of this project is to develop an analytic design, analysis and evaluation tool for both metal and composite fuselage configurations in Visual Basic Application in order to gain insight into the structural performance of these material classes and to estimate the weight and required structural dimensions for both aluminum and composite fuselages. The fuselage geometry is set up parametrically and modeled as a simplified tube with a variable cross-section without cut-outs and wing box, and it is divided into bays and skin panels. By modeling the aerodynamic, gravity and ground reaction forces and the internal pressure, a free body diagram and force/moment distribution are created for several flight and ground load cases, like 1G flight, lateral gust or landing load cases. The critical load cases are used for analysis. The running loads, like bending stress, longitudinal stress, circumferential stress and shear stress, are calculated for the entire aircraft fuselage. A clear load pattern is created in order to evaluate the materials. The materials are evaluated for strength, stability and several other failure modes, like fatigue and crack growth. The skin panels are optimized for these evaluation methodologies and after doing so a minimum fuselage weight is obtained for conventional aircraft configurations. The Airbus A320 is taken as the reference aircraft and the running loads and optimization results of the model are validated with this aircraft. The model proved to be valid and is therefore considered suitable to be used as an analysis and evaluation tool. 
The final stage of the project involved an initial assessment of aluminum and composite as structural materials.","aircraft design; fuselage design; parametric modeling; structural analysis; optimization; aluminum; Ilhan Sen","en","master thesis","","","","","","","","","Aerospace Engineering","Mechanics, Aerospace Structures & Materials","","","","" "uuid:ccc6e7f3-3b21-4f05-a0ca-df8cad6d0ca0","http://resolver.tudelft.nl/uuid:ccc6e7f3-3b21-4f05-a0ca-df8cad6d0ca0","Optimization of sandwich composites fuselages under flight loads","Yan, C.; Bergsma, O.; Koussios, S.; Zu, L.; Beukers, A.","","2010","The sandwich composites fuselages appear to be a promising choice for future aircraft because of their structural efficiency and functional integration advantages. However, the design of sandwich composites is more complex than that of other structures because of the many variables involved. In this paper, the fuselage is designed as a sandwich composites cylinder, and its structural optimization using the finite element method (FEM) is outlined to obtain the minimum weight. The constraints include structural stability and the composites failure criteria. In order to get a verification baseline for the FEM analysis, the stability of sandwich structures is studied and the optimal design is performed based on the analytical formulae. Then, the predicted buckling loads and the optimization results obtained from a FEM model are compared with those from the analytical formulas, and a good agreement is achieved. A detailed parametric optimal design for the sandwich composites cylinder is conducted. The optimization method used here includes two steps: the minimization of the layer thickness followed by tailoring of the fiber orientation. The factors comprise layer number, fiber orientation, core thickness, frame dimension and spacing. 
Results show that the two-step optimization is an effective method for sandwich composites and the foam sandwich cylinder with a core thickness of 5 mm and a frame pitch of 0.5 m exhibits the minimum weight.","sandwich; composites; stability; optimization; ANOVA","en","journal article","Springer","","","","","","","","Aerospace Engineering","Aerospace Materials and Manufacturing","","","","" "uuid:c2a93de0-21e4-490b-a18c-09f319c2da17","http://resolver.tudelft.nl/uuid:c2a93de0-21e4-490b-a18c-09f319c2da17","Rigorous simulations of emitting and non-emitting nano-optical structures","Janssen, O.T.A.","Urbach, H.P. (promotor)","2010","In the next decade, several applications of nanotechnology will change our lives. LED lighting is about to replace the common light bulb. The main advantages are its energy efficiency and long lifetime. LEDs could be much more efficient if part of the emitted light that is currently trapped in the device could be radiated out of it. Other devices such as photovoltaic solar cells and biosensors can also be made more efficient and cheaper. LEDs, solar cells and biosensors have in common that they consist of small structures of the order of the wavelength of the light. With such small structures light can be manipulated in a special way. In this thesis, we describe a method to calculate the interaction of light with these small structures. It is shown that an efficient LED which radiates light can be treated as a solar cell that absorbs as much of the incoming light as possible. On this so-called reciprocity principle, which was discovered by Henrik Antoon Lorentz, a very efficient computational optimization method can be based. With this method, existing designs of, for example, LEDs can be iteratively made more efficient. 
This thesis shows optimized designs of LEDs, solar cells and biosensors.","FDTD; LED; plasmonics; optimization; reciprocity; biosensors","en","doctoral thesis","Optics Research Group","","","","","","","2010-11-09","Applied Sciences","Imaging Science & Technology","","","","" "uuid:810eb93f-c55d-4b28-a8ba-3b831987c5ff","http://resolver.tudelft.nl/uuid:810eb93f-c55d-4b28-a8ba-3b831987c5ff","Suburban 2.0: Differentiated houses for the masses","Kramer, N.D.F.","Biloria, N. (mentor); Bier, H.H. (mentor); Sobota, M. (mentor)","2010","Population growth and immigration increase the demand for mass housing developments all over the world. These developments are widely criticized for being mono-functional, mono-typological, and mono-cultural. This project is a design method for a new kind of mass housing. All design rules are reformulated as algorithms that interact with each other. Important input for the design rules is the future dweller. This leads to a bottom-up, dynamic process, providing each dweller with a well-fitted house, in a differentiated environment that provides public space and services. The geometry is optimized to use material as efficiently as possible, which would be possible in the near future with full-scale 3-D printers that are currently being developed.","urban; architecture; social; user-specific; mass-customization; complexity; self-organization; hyperbody; complex geometry; optimization; additive manufacturing","en","master thesis","","","","","","","","2010-11-11","Architecture","Architecture","","Hyperbody","","" "uuid:f34c2606-dbae-4182-873b-8c1a99714297","http://resolver.tudelft.nl/uuid:f34c2606-dbae-4182-873b-8c1a99714297","Interval Analysis: Contributions to static and dynamic optimization","De Weerdt, E.","Mulder, J.A. (promotor)","2010","The field of global optimization has been an active one for many years. By far the most widely applied methods are gradient-based and evolutionary algorithms. 
The most prominent drawback of these types of methods is that one cannot guarantee that the global solution is found within finite time. Moreover, if the global solution is found (by chance), the methods cannot provide guaranteed feedback to the user stating that the provided solution is the global one. Therefore, no natural stopping conditions are available for most of the existing optimization algorithms. There are, however, other tools available, which do provide the guarantee that the global solution is found and that have natural stopping conditions. Interval analysis in combination with interval arithmetic is such a tool. Interval arithmetic was initially developed to cope with rounding errors in digital computers. Using interval arithmetic, one can perform reliable computing such that catastrophic numeric errors can be prevented (the explosion of the Ariane 5 rocket on June 4, 1996 was caused by a simple numeric overflow). It was soon found that interval arithmetic could be used to form guaranteed bounds on any type of function or numeric algorithm for any domain. These bounds provide the crucial information needed to perform global optimization. Interval analysis is the group name of all methods that use the information obtained from guaranteed bounds to solve global optimization problems. Developed in the 1960s, interval analysis gained popularity during the 1990s when digital computers became increasingly powerful. Nowadays, interval analysis has been widely applied in the field of static optimization, i.e. optimization that does not involve differential algebraic equations, and verified integration. However, interval analysis has not been applied often in the field of dynamic optimization. The goal of the research is to investigate whether interval analysis, in combination with interval arithmetic, can be used to solve non-linear, constrained, dynamic optimization problems. 
Moreover, the possibility of extending existing theory in the field of static optimization is investigated. The focus of the research lies on trajectory optimization (a specific case of dynamic optimization). The most important condition of the designed solvers is that the dynamic constraints, formed by the equations of motion, must be satisfied for all time instances. To reach the research objectives, the theory and application of both interval arithmetic and interval analysis have been thoroughly investigated. The work is divided into two parts. The first part is on static optimization, which includes the discussion on interval arithmetic and describes the basics regarding interval analysis. The existing theory of inclusion functions, formed via interval arithmetic, has been evaluated and extended upon. The development of the Polynomial Inclusion Function, a new type of inclusion function, shows that significant improvements are possible in this field. During the review of interval analysis, its main virtues and limitations were demonstrated. The most important advantages are the guarantee that all optimal solutions are found to any degree of accuracy and that the user knows when the solution set has been found. The main limitation is the curse of dimensionality: the computational load grows, for most problems, exponentially with a linear increase in problem dimension. The author believes that this curse is mainly caused by two aspects of the current implementation of interval analysis. The first aspect is the widening of the inclusion function due to the dependency effects. The dependency effects can be partially prevented by efficient implementation of function evaluations and through application of advanced inclusion functions. However, a generic efficient method for preventing dependency effects is still not available. The other aspect causing the curse of dimensionality is the current inefficient handling of available information. 
The optimization algorithms within interval analysis are commonly based on branch and bound algorithms. Through a process of elimination, one is left with a list of domains in which the optimal solution set must lie. Current methods for eliminating (part of) the domain, such as the Newton step, do not use the gathered/available information efficiently. This is mainly due to the definition of the domain and the storage of the information, i.e. keeping track of infeasible regions. It is the author’s opinion that this is the reason that the application of interval analysis is limited to solving lower dimensional problems. Despite the curse of dimensionality, interval analysis based solvers can solve complicated, non-linear, constrained problems. This has been shown in multiple chapters in the first part. Complicated problems, such as neural network output optimization and the integer ambiguity resolution problem in the field of Global Navigation Satellite Systems, are solved rigorously by interval analysis based solvers. The applications show that equality and inequality constraints are efficiently handled using interval analysis. Moreover, they show that interval analysis can be used to solve real-life problems and demonstrate that interval analysis is a strong global optimization tool. The second part of the research is on dynamic optimization, thereby focusing on trajectory optimization. The trajectory optimization problem is infinite dimensional with begin and end-point constraints, dynamic constraints (the equations of motion), and possibly additional equality and inequality constraints. The problem is infinite dimensional since the states and controls need to be specified for each time instance. In the field of trajectory optimization one can identify two classes of methods: indirect methods and direct methods. Disregarding the optimization problems for which an analytic solution is present, both classes require a transformation to make the problem solvable. 
Three transformation methods have been considered: control parameterization, state parameterization, and control and state parameterization. With control parameterization, the control is defined for each time step using a polynomial and the states are computed using explicit integration. For state parameterization, the states are defined and the controls are deduced via the equations of motion (implicit integration). The last method applies parameterization of both the states and controls with respect to time. Trajectories are sought that satisfy the dynamic constraints at given time instances. The nature of the transformation methods implies that the first two methods can be used to find trajectories that satisfy the dynamic constraints at all time instances, while the latter cannot be used for this purpose. Therefore, only the first two methods have been thoroughly investigated. The last method was only briefly reviewed. The main conclusion regarding the control parameterization approach is that it suffers greatly from the required explicit integration. Although verified integration is possible and sharp bounds on the trajectories can be provided, the problem is to prove the existence of a solution within a given domain of the search space. Without the ability to update the estimate of the minimal cost function value early in the optimization process, the computational load becomes very high. Despite the drawback of control parameterization, it has been demonstrated that this approach can be used to find the global solution, although, currently, only very low dimensional problems can be solved. Higher dimensional problems can be solved using the state parameterization approach. By using simplex splines, the begin- and end-point constraints can be implicitly satisfied, which significantly reduces the problem complexity. The limitation is that the approach is only suitable for fully controllable systems. 
For systems that are not fully controllable one needs to apply explicit integration for all dependent states. This would increase the computational load significantly and would eliminate most of the benefits of the state parameterization approach. An interval analysis based solver has been applied to solve the problem of satellite trajectory planning for formation flying. Although still suffering from the curse of dimensionality, the results demonstrate that interval analysis can be used to solve the problem rigorously. Moreover, it has been shown that the performance of the solver is superior to gradient-based solvers when constraints are imposed. The main conclusion of the research is that it is possible to apply interval analysis to dynamic optimization. The current status of the solvers (in this thesis and in literature) allows one to solve only ‘lower’ dimensional problems. Radical changes in the approach of handling information and keeping track of infeasible regions must be made to make interval analysis applicable to higher dimensional problems. Despite the limitations of interval analysis, the presented results clearly demonstrate the virtues of interval analysis based solvers in the field of global optimization. Several new exciting research opportunities have been identified, such as nonlinear stability analysis using interval analysis, the combination of interval analysis and evolutionary algorithms, and a new way of forming inclusion functions to boost the efficiency of interval analysis based solvers. 
Overall, the potential of interval analysis is very large and the author believes that interval analysis will become one of the most important tools in the field of global optimization in the near future.","interval analysis; optimization; dynamic","en","doctoral thesis","","","","","","","","2010-09-14","Aerospace Engineering","Control and Simulation Division","","","","" "uuid:fdc2dbda-b419-450f-a305-64825a43a0c8","http://resolver.tudelft.nl/uuid:fdc2dbda-b419-450f-a305-64825a43a0c8","Global Optimization using Interval Analysis: Interval Optimization for Aerospace Applications","Van Kampen, E.","Mulder, J.A. (promotor)","2010","Optimization is an important element in aerospace related research. It is encountered for example in trajectory optimization problems, such as: satellite formation flying, spacecraft re-entry optimization and airport approach and departure optimization; in control optimization, for example in adaptive control algorithms; and in system identification problems, such as online aircraft model identification or human perception modeling. The main goal of this thesis is to investigate how Interval Analysis (IA) can be used as a tool for aerospace related optimization problems; to examine its theoretical and practical limitations, and to explore the ways in which optimization algorithms can benefit from interval analysis. A subset of goals is to improve the solutions for a number of aerospace related optimization problems. The scientific contribution of this thesis consists of the design and implementation of interval optimization algorithms for four important aerospace problems. The first contribution concerns finding the trim points for a nonlinear aircraft model. Trim points, defined as the combination of control settings for which all linear and rotational accelerations on the aircraft are zero, are important for flight control system design, since they provide information about the flight envelope and stability properties of the aircraft. 
Unlike other trim algorithms, the interval based method can guarantee that all trim points are found. In the second application, an interval optimization algorithm is developed for fitting pilot input/output data from an experiment in the SIMONA Research Simulator to a multi-modal human perception model. Perception models improve the understanding of how humans perceive motion and are an essential tool in the design of flight simulators. Results show that the minimum of the cost function found by the interval method is lower than the one previously found, resulting in an improved human perception model. This second application particularly demonstrates the capabilities of IA optimization as a parameter identification tool. The third contribution is an interval based algorithm for solving the integer ambiguity problem related to Global Navigation Satellite Systems (GNSS). Phase measurements of the carrier wave of a GNSS signal are used to estimate the length and orientation of baselines between two or more antennas. This estimation procedure contains an optimization problem in which the integer number of carrier wavelengths between antennas has to be determined. The new interval method provides guarantees that correct solutions are found when the measurement noise is encapsulated by an interval number. The final contribution is an interval optimization algorithm that minimizes fuel consumption during rendezvous and docking procedures of satellites in circular orbits. To avoid integration of interval functions, an analytical solution to the system of differential equations that describes the relative motion of the satellites is used to generate trajectories resulting from a set of thruster pulses of varying amplitudes. Introduction of obstacles, in the form of forbidden areas in the path between the two satellites, makes the problem nonlinear, such that gradient-based optimization algorithms can fail to obtain the globally optimal solution. 
The interval algorithm always converges to the trajectory that avoids all obstacles and results in minimum fuel consumption. It can be concluded that IA is an excellent tool for solving nonlinear optimization problems, providing guarantees on obtaining the global minimum of the cost function.","optimization; interval analysis","en","doctoral thesis","","","","","","","","2010-09-24","Aerospace Engineering","Control and Simulation","","","","" "uuid:bccecdeb-c382-445e-bf2f-62b4fcff7a78","http://resolver.tudelft.nl/uuid:bccecdeb-c382-445e-bf2f-62b4fcff7a78","Non-Invasive Electromagnetic Ablation of Female Breast Tumors","Brink, W.M.","Kooij, B.J. (mentor); Lager, I.E. (mentor)","2010","Breast cancer is the most common malignant tumor among women today. Available techniques for treating breast cancer often introduce strong side effects. The non-invasive electromagnetic ablation of breast tumors has a lot of potential, because it can provide a quick treatment modality without introducing harmful side effects. In this project we assess the feasibility of non-invasive electromagnetic ablation of female breast tumors. The two main challenges in this project are: 1. The computation of electromagnetic fields inside the female breast. 2. The focussing of power such that the power dissipated in the tumor is maximized while the power dissipated in healthy tissue is minimized. In our investigation we simulate a two-dimensional configuration with a circular array of line-sources operating at a single frequency within the range of 1 to 10 GHz. 
The electromagnetic fields are computed using a discretized EFIE method, after which we evaluate three algorithms that focus the dissipated power in order to gain insight into the potential of this treatment modality.","electromagnetics; breast; cancer; treatment; therapy; hyperthermia; thermal; ablation; antenna; array; optimization; scattering","en","master thesis","","","","","","","","2010-09-28","Electrical Engineering, Mathematics and Computer Science","Telecommunications","","Microwave Technology and Systems for Radar","","" "uuid:93d863ff-6363-4f9f-8ab9-5be23b9d96e9","http://resolver.tudelft.nl/uuid:93d863ff-6363-4f9f-8ab9-5be23b9d96e9","Optimization of the Al-Shaheen Field Performance using Smart Well Technology","Gelderblom, D.O.","Jansen, J.D. (mentor); Do, S.H. (mentor); Kapteijn, P.K.A. (mentor)","2010","This MSc thesis reports the results of optimizing the Al-Shaheen field performance using smart well technology. The field is currently being developed by Maersk Oil and Gas (MOG) offshore Qatar, using large-scale water injection on very long horizontal wells. The studied reservoir consists of a laterally uniform, tight matrix. However, undesired water short-circuiting between injectors and producers due to localized heterogeneity leads to reduced sweep efficiency and increased water production, thereby reducing the economic life of approximately 10% of the wells. Smart well technology combines monitoring and control capabilities with multi-segment completions in order to optimize flooding mechanisms. In this study, two different optimization strategies were simulated on a sector model containing different levels of heterogeneity. The first method comprised a reactive, measurement-based approach, where injection segments were shut in when increased water production was observed in production segments. The second method comprised a proactive, model-based approach where the optimal shut-in timing of injection segments was obtained from gradient information. 
The evaluated flooding mechanisms include water injection and Water-Alternating-Gas (WAG) injection. Results show that optimization with smart well technology can significantly improve recovery and reduce water and gas circulation under varying conditions of reservoir heterogeneity. The measurement-based optimization confirms that the technology can improve reservoir engineering by its increased downhole monitoring capabilities. Results from measurement-based optimization approach the optimum found by model-based optimization.","optimization; optimisation; Al Shaheen; Al-Shaheen; smart; gradient","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Section Petroleum Engineering","","","","" "uuid:d9b524b0-d2e1-4bde-883a-cc6313a1d8c0","http://resolver.tudelft.nl/uuid:d9b524b0-d2e1-4bde-883a-cc6313a1d8c0","Automated Implant-Processor Design","Dave, D.","Gaydadjiev, G. (mentor); Strydis, C. (mentor)","2010","As we move towards an aging population, it is likely that an increasing number of people will require an increasing diversity of implants, but at a lower cost to society. Also, as computer technology progresses, smaller, more powerful, and less battery-intensive implants can be designed. However, present implant design methodology is highly inefficient at meeting these goals as it suffers from non-reuse of existing knowledge by relying heavily on custom designs and ASICs. The SiMS project was started with the goal of creating a pre-designed, pre-tested, and pre-certified toolbox of components for biomedical implants that can be assembled in a modular fashion for various application scenarios. One of the most important components in such a toolbox is the processor. Designing such a processor is a non-trivial task and previous work has concentrated on studying the effect of changing the processor input-parameters (such as caches), one parameter at a time. 
The present work represents a shift in this methodology, as we now allow co-variation in all possible input parameters in order to find optimal configurations in terms of the output objectives - power, performance, and area. Towards this end, we implement ImpEDE -- ""Implantable-processor Evolutionary Design-space Explorer"" -- a framework that performs multi-objective optimization of processor parameters, and hence gives as output a Pareto-optimal set of processors. The framework consists of a cache simulator and a cycle-accurate processor simulator running benchmarks and workloads designed for medical implants, in order to simulate the optimization objectives. A popular, highly configurable, multi-objective genetic algorithm, NSGA-II, performs the actual optimization. Supporting scripts add modularity by acting as the interface between the genetic algorithm and the simulators, enabling easy replacement with new simulators. The whole framework is parallelized such that extra computation cycles of the idle laboratory CPUs can be utilized, thereby giving a considerable speedup without requiring any special hardware. We perform experiments on the non-dominated solution fronts evolved by the framework on a subset of benchmarks, in order to optimize parameters of the genetic algorithm, with an aim towards speeding up convergence. We also examine the effects of changing the workload size run by the benchmarks. A Pareto-optimal solution front consisting of optimal processor configurations across all benchmarks is found. This front is used as a reference in order to characterize the benchmarks in the ImpBench suite. 
Finally, the objective space of the reference front is compared to existing implant designs, and a set of ""generic processors"" is chosen such that all the existing implant applications studied can be covered.","implant; pareto; genetic algorithm; design-space exploration; optimization; power; area; energy; processor; simulation","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Computer Engineering","","Computer Engineering","","" "uuid:f272117c-e1b5-4ae6-96cb-aa86fe62a015","http://resolver.tudelft.nl/uuid:f272117c-e1b5-4ae6-96cb-aa86fe62a015","Overview of Methods for Multi-Level and/or Multi-Disciplinary Optimization","De Wit, A.J.; Van Keulen, A.","","2010","Multi-level optimization and multi-disciplinary optimization are areas of research that are concerned with developing efficient analysis and optimization techniques for complex systems that are made up of coupled elements (components). Within the field of multi-level optimization and multi-disciplinary optimization a large number of techniques have been developed for efficient analysis and optimization of complex systems. This paper presents a unified overview of mainstream approaches found in the literature. Four general steps are distinguished in both multi-level optimization and multi-disciplinary optimization: physical coupling, optimization problem coupling, coordination and solution sequence. Via these four steps approaches are classified and possibilities for combining aspects of different methods are given. 
Finally, advantages and disadvantages of approaches applied to engineering problems are discussed and directions for further research are given.","multi-level; multi-disciplinary; optimization; decomposition; coordination; overview","en","conference paper","American Institute of Aeronautics and Astronautics (AIAA)","","","","","","","","Mechanical, Maritime and Materials Engineering","Precision and Microsystems Engineering","","","","" "uuid:0e0bc750-bd3e-4b8a-8c09-019f22326fef","http://resolver.tudelft.nl/uuid:0e0bc750-bd3e-4b8a-8c09-019f22326fef","Increasing the energy efficiency of glass façades.","Van Kilsdonk, J.M.A.","Van Timmeren, A. (mentor); Veer, F.A. (mentor); Klein, T. (mentor)","2010","Design of a sun shading system which is optimized to generate as much energy as possible. This has been done by calculating the optimal size and positioning of the slats. The user also played a central role in the design process. Not only the design of the slats is innovative, but also the way they are connected to the glass facade. This is done by Fischer plugs, which make the façade easy to (dis)mount and give it a high-tech, lightweight look.","PV cell; glass; facade; sustainable energy; Fischer system; innovative; integral design; optimization","en","master thesis","","","","","","","","","Architecture","Building Technology","","Research & Design","","" "uuid:319dffb8-3bbc-49de-a6c5-68d8972f3888","http://resolver.tudelft.nl/uuid:319dffb8-3bbc-49de-a6c5-68d8972f3888","A generic method to optimize instructions for the control of evacuations","Huibregtse, O.L.; Hoogendoorn, S.P.; Pel, A.J.; Bliemer, M.C.J.","","2010","A method is described to develop a set of optimal instructions to evacuate by car the population of a region threatened by a hazard. By giving these instructions to the evacuees, traffic conditions and therefore the evacuation efficiency can be optimized. 
The instructions, containing a departure time, a destination, and a route, are created using an optimization method based on ant colony optimization. The method iteratively searches for an approximation of the optimal evacuation instructions. The advantage of this optimization method over other optimization methods is the simultaneous optimization of the departure time, destination, and route instructions, instead of the optimization of only one or two of these variables, for a dynamic instead of a static evacuation problem. In a case study, the functioning of the method is illustrated. The relatively high fitness in the case study of the set of instructions following from the optimization method, compared with the fitness of a set of instructions set up by straightforward rules (like evacuating to the nearest destination), also shows the usefulness of applying an optimization method to create a set of evacuation instructions.","evacuation; instructions; control; optimization; ant colony optimization","en","conference paper","IFAC","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:1137ebe3-3dcb-43ca-84f7-89bbbbc2d635","http://resolver.tudelft.nl/uuid:1137ebe3-3dcb-43ca-84f7-89bbbbc2d635","Efficient particle-based estimation of marginal costs in a first-order macroscopic traffic flow model","Zuurbier, F.S.; Hegyi, A.; Hoogendoorn, S.P.","","2010","Marginal costs in traffic networks are the extra costs incurred by the system as a result of extra traffic. Marginal costs are frequently required, e.g. when considering system optimal traffic assignment or tolling problems. When explicitly considering spillback in a traffic flow model, one can use a numerical derivative or resort to heuristics to calculate the marginal costs. Numerical derivatives are computationally demanding, restricting their use to simple networks. 
Heuristic approaches in most cases approximate the marginal costs by only considering the extra costs on the links which are traveled by the extra traffic, excluding the possible external costs incurred on other links due to spillback. This paper proposes a novel way to estimate the true marginal costs of traffic in a dynamic discrete LWR model which correctly deals with congestion onset, spillback and dissolution. The proposed methodology tracks virtual changes in density through the network by means of particles which travel along with the characteristics of traffic. By using density-based cost functions, the virtual changes in density can be directly related to the marginal costs. The computational efficiency of the methodology stems from the fact that only local conditions are considered when propagating the virtual change in density. The paper discusses the methodology and necessary model extensions, provides a numerical validation experiment illustrating the exact detail of the solution by comparison to a numerical derivative and discusses some generalizations.","optimization; dynamic traffic assignment; system optimal; LWR; marginal costs; particle","en","conference paper","IFAC","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:d8f58668-ba49-441d-bbf0-aa8c7114da4a","http://resolver.tudelft.nl/uuid:d8f58668-ba49-441d-bbf0-aa8c7114da4a","A Unified Approach towards Decomposition and Coordination for Multi-level Optimization","De Wit, A.J.","Van Keulen, A. (promotor)","2009","Complex systems, such as those encountered in aerospace engineering, can typically be considered as a hierarchy of individual coupled elements. This hierarchy is reflected in the analysis techniques that are used to analyze the physical characteristics of the system. Consequently, a hierarchy of coupled models is to be used, accounting for different physical scales, components and/or disciplines. 
Numerical optimization of complex systems with embedded hierarchy is accomplished via multi-level optimization methods. Multi-level optimization methods utilize the hierarchical nature of complex systems to distribute the optimization process into smaller, less complex coupled optimization problems located at the individual elements of the hierarchy. The present thesis presents a generalized approach towards decomposition and coordination for the numerical optimization of complex systems with embedded hierarchy. The developed methods are applied to numerically maximizing the range of a supersonic business jet via multi-level optimization, considering coupling between multiple engineering disciplines.","multi-level; multi-disciplinary; optimization; decomposition; coordination","en","doctoral thesis","","","","","","","","2009-11-30","Mechanical, Maritime and Materials Engineering","Precision and Microsystems Engineering","","","","" "uuid:25c85feb-7ef1-4752-9810-e70f49e88802","http://resolver.tudelft.nl/uuid:25c85feb-7ef1-4752-9810-e70f49e88802","On maximum field components in the focal point of a lens","Urbach, H.P.; Pereira, S.F.; Broer, D.J.","","2009","We determine field distributions in the pupil of a high NA lens that give, for a given power incident on the lens, the maximum electric field amplitude in focus in a specific direction. We consider in particular the cases of maximum longitudinal and maximum transverse components. The distribution of the maximum longitudinal component in the focal plane is narrower than that of the focused Airy spot and hence can give higher resolution in imaging.","High NA; beam shaping; optimization; longitudinal polarization","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:dc5b1158-be54-42d6-a4d3-b0a19462f507","http://resolver.tudelft.nl/uuid:dc5b1158-be54-42d6-a4d3-b0a19462f507","Robustness of networks","Wang, H.","Van Mieghem, P. 
(promotor)","2009","Our society depends more strongly than ever on large networks such as transportation networks, the Internet and power grids. Engineers are confronted with fundamental questions such as “how to evaluate the robustness of networks for a given service?”, “how to design a robust network?”, because networks always affect the functioning of a service. Robustness is an important issue for many complex networks, on which various dynamic processes or services take place. In this work, we define robustness as follows: a network is more robust if the service on the network performs better, where performance of the service is assessed when the network is either (a) in a conventional state or (b) under perturbations, e.g. failures, virus spreading, etc. In this thesis, we survey a particular line of network robustness research within our general framework: robustness quantification, optimization and the interplay between service and network. Significant progress has been made in understanding the relationship between the structural properties of networks and the performance of the dynamics or services taking place on these networks. We assume that network robustness can be quantified by a topological measure of the network. A brief overview of the topological measures is presented. Each measure may represent the robustness of a network with respect to a certain performance aspect of a service. We focus on the measure known as algebraic connectivity. Evidence collected from literature shows that the algebraic connectivity characterizes network robustness with respect to synchronization of dynamic processes at nodes, random walks on graphs and the connectivity of a network. Moreover, we illustrate that, for a given diameter, graphs with large algebraic connectivity tend to be dense in the core and sparse at the border. Such structures distribute traffic homogeneously and are thus robust in terms of traffic engineering. 
How do we design a robust network with respect to the metric algebraic connectivity? First, the complete graph has the maximal algebraic connectivity, while its high link density makes it impractical to use due to the cost of constructing links. Constraints on other network features are usually set up to incorporate realistic requirements. For example, a constraint on the diameter may guarantee certain end-to-end quality-of-service levels, such as the delay. We propose a class of clique chain structures which optimize the algebraic connectivity and many other robustness features among all graphs with diameter D and size N. The optimal graph within the class can be determined either analytically or numerically. Second, complete replacement of an existing infrastructure is expensive. Thus, we design strategies for robustness optimization using minor topological modifications. These strategies are evaluated in various classes of graphs. The robustness quantification, or equivalently, the association of the performance of a service with a topological measure, may be implicit. In this case, we explore the interplay between topology and service in determining the overall performance. Many services on communications and transportation networks are based on shortest path routing. The weight of a link, such as delay or bandwidth, is generally a metric optimized via shortest path routing. Thus, link weight tuning, a mechanism to control traffic, is also considered part of the service. The interplay between service (shortest path routing and link weight tuning) and topology is investigated for the following performance aspects: (a) the structure of the transport overlay network, which is the union of shortest paths between all node pairs, and (b) the traffic distribution in the overlay network. 
Important new findings are (i) the universal phase transition in overlay structures as we tune the link weight structure over different classes of networks and (ii) the power law traffic distribution in the overlay networks when link weights vary strongly in various classes of networks. Furthermore, we consider the service that measures a network topology as the union of shortest paths among a set of testboxes (nodes). The measured topology is a subgraph of the overlay network, which is again a subgraph of the actual network. The performance in terms of the sampling bias of measuring a network topology is investigated. Our work contributes substantially to a better understanding of the effect of the service (testbox selection) and the actual network structure on the performance with respect to sampling bias. Our investigations on the interplay between service and network reveal again the association between the performance of a service and certain topological feature, and thus, contribute to the quantification of network robustness. 
The multidisciplinary nature of this research lies not only in the presence of robustness issues in many complex networks, but also in the fact that advances in other disciplines such as graph theory, combinatorics, linear algebra and statistical physics are widely applied throughout the thesis to study optimization problems and the performance of large networks.","robustness; network topology; service; optimization","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Telecommunications","","","","" "uuid:c58b5999-da12-4a62-876f-95d7784edf91","http://resolver.tudelft.nl/uuid:c58b5999-da12-4a62-876f-95d7784edf91","Model-Based Control and Optimization of Large Scale Physical Systems - Challenges in Reservoir Engineering","Van den Hof, P.M.J.; Jansen, J.D.; Van Essen, G.M.; Bosgra, O.H.","","2009","Due to the urgent need to increase the efficiency of oil recovery from subsurface reservoirs, new technology is being developed that allows more detailed sensing and actuation of multiphase flow properties in oil reservoirs. One of the examples is the controlled injection of water through injection wells with the purpose of displacing the oil in an appropriate direction. This technology enables the application of model-based optimization and control techniques to optimize production over the entire production period of a reservoir, which can be around 25 years. Large-scale reservoir flow models are used for optimizing production settings, but suffer from high levels of uncertainty and limited validation options. One of the challenges is the development of reduced-complexity models that deliver accurate long-term predictions, and at the same time are not more complex than can be warranted by the amount of data that is available. 
In this paper an overview will be given of the problems and opportunities for model-based control and optimization in this field aiming at the development of a closed-loop reservoir management system.","petroleum; reservoir; optimization","en","conference paper","IEEE","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","","" "uuid:cb3de0cf-a506-4490-b988-f4d1bf00ae55","http://resolver.tudelft.nl/uuid:cb3de0cf-a506-4490-b988-f4d1bf00ae55","Model-based predictive control applied to multi-carrier energy systems","Arnold, M.; Negenborn, R.R.; Andersson, G.; De Schutter, B.","","2009","The optimal operation of an integrated electricity and natural gas infrastructure is investigated. The couplings between the electricity system and the gas system are modeled by so-called energy hubs, which represent the interface between the loads on the one hand and the transmission infrastructures on the other. To increase reliability and efficiency, storage devices are present in the multi-carrier energy system. In order to optimally incorporate these storage devices in the operation of the infrastructure, the capacity constraints and dynamics of these have to be taken into account explicitly. Therefore, we propose a model predictive control approach for controlling the system. This controller takes into account the present constraints and dynamics, and in addition adapts to expected changes of loads and/or energy prices. 
Simulations in which the proposed scheme is applied to a three-hub benchmark system are presented.","optimal power flow; electric power systems; model predictive control; natural gas systems; optimization","en","conference paper","IEEE","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","","" "uuid:ff8e44db-72e2-49fa-bd7f-bde923758e68","http://resolver.tudelft.nl/uuid:ff8e44db-72e2-49fa-bd7f-bde923758e68","An efficient method for reducing the sound speed induced errors in multibeam echosounder bathymetric measurements","Snellen, M.; Siemes, K.; Simons, D.G.","","2009","Nowadays extensive use is made of multibeam echosounders (MBES) for mapping the bathymetry of sea- and river-floors. The MBES is capable of covering large areas in limited time by emitting an acoustic pulse along a wide swathe perpendicular to the sailing direction. The angle and the corresponding two-way travel-time of the received signals are determined through beamsteering at reception. Water depths along the swathe can be derived from this angle and travel-time combination. In general, two sets of sound speed measurements are taken when conducting MBES measurements. The first set is used for the beamsteering and consists of the sound speeds at the MBES transducer. The second set is used for determining the propagation of the sound through the water column, needed for correctly converting the measured travel times to a depth. In general, this set of sound speed measurements consists of the complete sound speed profiles (SSPs). The quality of the sound speed measurements at the transducer position is sometimes degraded, resulting in beam steering angles that differ from those aimed for. Also, sometimes the SSPs used for converting the beam travel times to depths deviate from the true prevailing SSPs due to the, in general, limited amount of SSP measurements taken during a survey. 
Both above-mentioned effects result in an erroneous bathymetry. Here, we present a method for eliminating these errors, without the need for additional sound speed information.","multibeam echosounder; sound speed profile; optimization","en","conference paper","","","","","","","","","Aerospace Engineering","Remote Sensing","","","","" "uuid:fbc64a39-931e-4b40-8803-486466f20703","http://resolver.tudelft.nl/uuid:fbc64a39-931e-4b40-8803-486466f20703","The potential of inverting geo-technical and geo-acoustic sediment parameters from single-beam echo sounder returns","Simons, D.G.; Snellen, M.; Siemes, K.","","2009","Seafloor characterization is important in many fields including hydrography, marine geology, coastal engineering and habitat mapping. The advantage of non-invasive acoustic methods for sediment characterization over conventional bottom grabbing is the nearly continuous versus sparse sensing and the enormous reduction in survey time and costs. Among the various acoustic systems for seafloor characterization, the single-beam echo sounder is of particular interest due to its simplicity and versatility. Seafloor characterization algorithms can be roughly divided into two categories: model-based and empirical, where the latter simply relies on the observation that certain echo features, such as amplitude, duration and skewness of the echo, are correlated with sediment type. Here we apply the model-based approach where we compare the measured echo signal with theoretically modeled echo envelopes in the time domain. For modeling the received echo sounder signals, use is made of a physical backscatter model that fully accounts for water-sediment interface roughness and sediment volume scattering. We use differential evolution, a fast variant of a genetic algorithm, as the global optimization method to invert the model input parameters: mean grain size, spectral strength of the interface roughness and volume scattering cross section. 
In the model, grain size determines geo-acoustic parameters such as sediment sound speed, density and attenuation. The analysis is applied to simulated data.","single-beam echosounder; seafloor classification; optimization","en","conference paper","","","","","","","","","Aerospace Engineering","Remote Sensing","","","","" "uuid:6c6197bd-5757-428a-9d3d-e94af148ce90","http://resolver.tudelft.nl/uuid:6c6197bd-5757-428a-9d3d-e94af148ce90","A systematic analysis of the optical merit function landscape: Towards improved optimization methods in optical design","Van Turnhout, M.","Urbach, H.P. (promotor); Bociort, F. (promotor)","2009","A major problem in optical system design is that the optical merit function landscape is usually very complicated, especially for complex design problems where many minima are present. Finding good new local minima is then a difficult task. We show, however, that a certain degree of order is present in the optical design space, which is best observed when we consider not only local minima, but saddle points as well. With a special method, which we call Saddle-Point Construction (SPC), saddle points can be constructed in a simple way. Via saddle points, new local minima can be obtained very rapidly. When using a local optimization method, the final design after optimization depends highly on the starting configuration. We can group the initial configurations that lead to a given local minimum after local optimization into a graphical region, whose shape depends on the optimization method used. However, saddle points are critical points in the merit function landscape that always remain on the boundaries, independent of the optimization method used. When the local optimization process is not chaotic, the geometric decomposition of the space of initial configurations into discrete regions has boundaries given by simple curves. 
But when the optimization is chaotic, the curves separating the different regions are very complicated objects termed fractals. In such cases, starting configurations that are very close to each other lead to different local minima after optimization. A better understanding of these instabilities can be obtained by using low damping values in a damped least-squares method.","optical system design; saddle point; optimization; fractal; chaos","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","","" "uuid:4f491cc5-cdc7-49b4-8b80-700dae2cf57c","http://resolver.tudelft.nl/uuid:4f491cc5-cdc7-49b4-8b80-700dae2cf57c","Validity improvement of evolutionary topology optimization: Procedure with element replaceable method","Zhu, J.; Zhang, W.; Bassir, D.H.","","2009","The aim of this paper is to enhance the validity of existing evolutionary topology optimization procedures. As this hard-killing scheme related to the element sensitivity values may lead to incorrect predictions of inefficient elements to be removed, and the value of the objective function may deteriorate sharply during the iterations, a check position (CP) control is proposed to prevent the erroneous topology design generated by the rejection criteria of evolutionary methods. For this purpose, we introduce a sort of orthotropic cellular microstructure (OCM) element with moderate pseudodensity that acts as a compromising element between solid element and void OCM element. In this way, all inefficient elements removed previously are automatically replaced with the moderate OCM elements depending upon the deterioration of the objective function. Erroneously removed elements are then identified in the updated finite element model through a direct sensitivity computation of the moderate OCM elements and are finally recovered by the bi-directional element replacement. 
In addition, detailed structures with checkerboard patterns are eliminated by controlling the local structural bandwidth with the so-called threshold method. Typical optimization examples of structural compliance and natural frequency that were difficult to tackle are solved by the proposed design procedure. Satisfactory numerical results are obtained.","optimization; evolutionary method; erroneous design; check position control; moderate microstructure","en","journal article","EDP sciences","","","","","","","","Aerospace Engineering","Aerospace Structures","","","","" "uuid:ff66e490-db59-4e3c-b6e2-926da4f074df","http://resolver.tudelft.nl/uuid:ff66e490-db59-4e3c-b6e2-926da4f074df","Algebraic Connectivity Optimization via Link Addition","Wang, H.; Van Mieghem, P.","","2008","","algebraic connectivity; synchronization; optimization; link addition","en","conference paper","ICST","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","","" "uuid:a8ec762b-8e2a-422f-9978-a6e85673df40","http://resolver.tudelft.nl/uuid:a8ec762b-8e2a-422f-9978-a6e85673df40","Understanding catchment behaviour through model concept improvement","Fenicia, F.","Savenije, H.H.G. (promotor)","2008","This thesis describes an approach to model development based on the concept of iterative model improvement, which is a process where by trial and error different hypotheses of catchment behaviour are progressively tested, and the understanding of the system proceeds through a combined process of modelling and experimenting. We show a number of case studies where we demonstrate the need to combine the power of physical laws and established scientific theories with qualitative understanding of natural phenomena, which requires creativity and intuition. We emphasize the importance of the 'Art' of modelling, which is often a neglected aspect of scientific research. 
We address topical research issues such as reducing model structural uncertainty through progressive understanding of catchment behaviour, incorporating process knowledge in the different stages of model development, linking modelling and experimentation, and understanding the contribution of data to process understanding.","hydrological modelling; calibration; optimization; uncertainty; model structure","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:7cd0b27c-f95b-47c3-969b-36c4b7affa0d","http://resolver.tudelft.nl/uuid:7cd0b27c-f95b-47c3-969b-36c4b7affa0d","Saddle-point construction in the design of lithographic objectives, part 2: Application","Marinescu, O.; Bociort, F.","","2008","","saddle point; lithography; optimization; optical system design; EUV; DUV","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:f16b0c66-bef3-46f9-a84c-174c0e0bc449","http://resolver.tudelft.nl/uuid:f16b0c66-bef3-46f9-a84c-174c0e0bc449","Saddle-point construction in the design of lithographic objectives, part 1: Method","Marinescu, O.; Bociort, F.","","2008","","saddle point; lithography; optimization; optical system design; EUV; DUV","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:324e0e8a-527e-43bb-87c0-8e131654acc9","http://resolver.tudelft.nl/uuid:324e0e8a-527e-43bb-87c0-8e131654acc9","Performance Enhancement of Abrasive Waterjet Cutting","","Karpuschewski, B. (promotor)","2008","Abrasive Waterjet (AWJ) Machining is a recent non-traditional machining process. This technology is widely used in industry for cutting difficult-to-machine-materials, milling slots, polishing hard materials etc. AWJ machining has many advantages, e.g. it can cut net-shape parts, no heat is generated during the cutting process, it is particularly environmentally friendly as it is clean and it does not create dust. 
Although AWJ machining has many advantages, a big disadvantage of this technology is its relatively high cutting cost. Consequently, the reduction of the machining cost and the increase of the profit rate are big challenges in AWJ technology. To reduce the total cutting cost as well as to increase the profit rate, this research focuses on performance enhancement of AWJ cutting with two possible solutions: optimization of the cutting process and abrasive recycling. The first solution to enhance the AWJ cutting performance is the optimization of the AWJ cutting process. As a precondition, it is necessary to have a cutting process model for optimization. In order to use that model for this purpose, several important requirements are given. The most important requirement for such a model is that it can describe the ""optimum relation"" between the optimum abrasive mass flow rate and the maximum depth of cut. To develop a cutting process model which can be used for AWJ optimization, many available models have been analyzed. Since the most important requirement for a process model (see above) can be obtained from Hoogstrate's model, an extension of this model is carried out. The extended model consists of three sub-models: a pure waterjet model, an abrasive waterjet model and an abrasive-work material interaction model. The extended cutting process model is more accurate than the original one and is capable of optimizing AWJ systems. The influence of many process parameters, the work materials, and the abrasive type and size has been taken into account. Up to now, there has not been a model for the prediction of AWJ nozzle wear. Therefore, modeling of the nozzle wear rate has been carried out and a model for the wear rate of nozzles made from composite carbide has been proposed. Based on the extended cutting process model, two types of optimization applications have been carried out, related to technical and economic problems. 
From the results of these problems, regression models for determining the optimum nozzle exchange diameter and the optimum abrasive mass flow rate for various objectives have been proposed. The other solution to enhance the cutting performance is abrasive recycling. In this study, GMA garnet, the most popular abrasive for blast cleaning and waterjet cutting, has been chosen for the investigation. The recycling of GMA abrasives has been investigated on both the technical and the economic side. On the technical side, the reusability and the cutting performance of the recycled and recharged abrasives have been analysed. The influence of the recycled and recharged abrasives on the cutting quality was studied. On the economic side, first the cost of recycled and recharged abrasives was predicted. Then, economic comparisons for selecting abrasives have been carried out. In addition, the economics of cutting with recycled and recharged abrasives have been studied. Several suggestions for an abrasive recycling process which promises a more effective use of the grains have been proposed. By optimization of the cutting process and by abrasive recycling, the cutting performance can be increased, the total cutting cost can be reduced, and the profit rate can be enlarged considerably. Consequently, the performance of AWJ cutting can be enhanced significantly.","abrasive waterjet; waterjet; optimization; abrasive recycling; modeling","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:20b5a4b5-6419-4593-a668-48074982bcb3","http://resolver.tudelft.nl/uuid:20b5a4b5-6419-4593-a668-48074982bcb3","Model-based lifecycle optimization of well locations and production settings in petroleum reservoirs","Zandvliet, M.J.","Bosgra, O.H. (promotor); Jansen, J.D. 
(promotor)","2008","In the coming years there is a need to increase production from petroleum reservoirs, and there is an enormous potential to do so by increasing the recovery factor. This is possible by making better use of recent technological developments, such as horizontal wells, downhole valves and sensors. However, actually making better use of these improved capabilities is difficult because of many open problems in reservoir management and production operations processes. Consequently, there is significant scope to increase the recovery factor of oil and gas fields by tailoring tools from the systems and control community to efficiently perform dynamic optimization of wells (e.g. number, locations) and their production settings (e.g. bottom-hole pressures, flow rates, valve settings) based on uncertain reservoir models, in the sense that they lead to good decisions while requiring limited time from the user. This thesis aims at developing these tools, and the main contributions are as follows. Many production setting optimization problems can be written as optimal control problems that are linear in the control. If the only constraints are upper and lower bounds on the control, these problems can be expected to have pure bang-bang optimal solutions. The adjoint method to derive gradients of a cost function with respect to production settings can be combined with robust optimization to efficiently compute settings that are robust against uncertainty in reservoir models. The gradients used in production setting optimization can be used to efficiently compute directions in which to iteratively improve upon an initial well configuration by surrounding the to-be-placed wells by pseudo wells (i.e. wells that operate at a negligible rate). The controllability and observability properties of a single-phase flow reservoir model are analyzed. 
It is shown that pressures near wells in which we can control the flow rate or bottom-hole pressure are controllable, whereas pressures near wells in which we can measure the flow rate or bottom-hole pressure are observable. Finally, a new method of regularization in history matching is presented, based on this controllability and observability analysis.","petroleum; reservoir engineering; systems and control; optimization","en","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","","","","","" "uuid:932855fc-0dd3-4a1a-a2bb-685aaa0c54c1","http://resolver.tudelft.nl/uuid:932855fc-0dd3-4a1a-a2bb-685aaa0c54c1","Stadskantoor - Station - Spoorzone Delft - Grid Relaxation - Rain Analysis","Haasnoot, M.","Claessens, F. (mentor); Borgart, A. (mentor); Stouffs, R.M.F. (mentor); Wilms Floet, W.W.L.M. (mentor); Mihl, H. (mentor)","2008","With appendix: A0 poster","relaxation; gridshell; double curved; optimization; stadskantoor","en","master thesis","TU Delft, Architecture, Architecture","","","","","","","","Architecture","","","","","" "uuid:4f4b7fb1-4a77-46bb-9c14-ff5e4bb6477c","http://resolver.tudelft.nl/uuid:4f4b7fb1-4a77-46bb-9c14-ff5e4bb6477c","Optimization of extreme ultraviolet mirror systems comprising high-order aspheric surfaces","Marinescu, O.; Bociort, F.","","2008","","mirror systems; aspheres; extreme ultraviolet lithography; optimization; relaxation","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:5feb9aa6-d1bc-482b-8570-7e892bdf3bc5","http://resolver.tudelft.nl/uuid:5feb9aa6-d1bc-482b-8570-7e892bdf3bc5","Optimization based image registration in the presence of moving objects","Karimi Nejadasl, F.; Gorte, B.G.H.; Hoogendoorn, S.P.; Snellen, M.","","2008","","registration; optimization; Differential Evolution; Nelder-Mead; 3D Euclidean","en","conference paper","","","","","","","","","Aerospace Engineering","Remote Sensing","","","","" 
"uuid:d50848b4-cd08-4482-a824-7d51700be44e","http://resolver.tudelft.nl/uuid:d50848b4-cd08-4482-a824-7d51700be44e","Integrated modeling of ozonation for optimization of drinking water treatment","van der Helm, A.W.C.","van Dijk, J.C. (promotor)","2007","Automation of drinking water treatment plants is becoming more sophisticated, more on-line monitoring systems are becoming available, and integration of modeling environments with control systems is becoming easier. This creates possibilities for model-based optimization. In the operation of drinking water treatment plants, the processes are usually optimized individually on the basis of ""rules of thumb"" and operator knowledge and experience. However, changes in operational conditions of individual processes can affect subsequent processes, and an optimal operation, which can include a number of water quality parameters, costs and environmental impact, is different for every operator. Improvement of the operation of a drinking water treatment plant is possible by using an integrated model of the entire water treatment plant as an instrument for operational support and for process control. For this purpose, it is important that explicit objectives are defined for the operation. From the research it is concluded that the objective for integrated optimization of the operation of drinking water treatment should be the improvement of water quality and not a priori reduction of environmental impact or costs. In the research an integrated model for ozonation, including ozone decay, bromate formation, assimilable organic carbon (AOC) formation, E. coli disinfection, CT and decrease in UV absorbance at 254 nm (UVA254), is developed. With the model, different control strategies for ozonation are assessed. The research also describes a newly developed design for ozone installations, the dissolved ozone plug flow reactor (DOPFR), and the effect of character and removal of natural organic matter (NOM) prior to ozonation. 
The research was carried out as part of the project Promicit, a cooperation of Waternet, Delft University of Technology, DHV B.V. and ABB B.V. and was subsidized by SenterNovem, agency of the Dutch Ministry of Economic Affairs. Part of the experiments was performed in cooperation with Kiwa Water Research.","modeling; modelling; integrated; ozonation; optimization; drinking water; drinking water treatment; bromate; natural organic matter; nom; disinfection; assimilable organic carbon; aoc; life cycle assessment; lca; bottled water","en","doctoral thesis","Water Management Academic Press","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:a544e3ea-a75d-4bf0-a7da-3a38bf72e666","http://resolver.tudelft.nl/uuid:a544e3ea-a75d-4bf0-a7da-3a38bf72e666","The speed optimization of a printing press","Wadman, W.S.","Hooghiemstra, G. (mentor); Lopuhaa, H.P. (mentor)","2007","PCM Uitgevers is one of the largest publishers in the Netherlands. It produces de Volkskrant, NRC Handelsblad, Algemeen Dagblad, Trouw and several smaller newspapers. For printing the newspapers, PCM has a facility in Amsterdam. Every morning a large number of newspapers is printed in Amsterdam, after which they are picked up by trucks and must be distributed throughout the Netherlands on time. The predictability of the operating speed of the press is very important for this, but unfortunately not very reliable: the press occasionally breaks down and stands still for an uncertain period of time. As a result, newspapers may be ready too early or too late relative to the arriving and departing trucks. These are undesirable situations. Margins have been specified within which failure of the press system is 'acceptable'. We will define and justify a measure that indicates how much the press 'fails' during the production process. 
We will then investigate whether there is an optimal press speed between two failures for which the expected degree of unacceptable failure of the system is smallest.","optimization; mathematics; stochastic; probability","en","bachelor thesis","TU Delft, Electrical Engineering, Mathematics and Computer Science, Applied Mathematics","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","","" "uuid:28b2169c-2dc0-4258-b572-8c2320cf81d1","http://resolver.tudelft.nl/uuid:28b2169c-2dc0-4258-b572-8c2320cf81d1","Practical guide to saddle-point construction in lens design","Bociort, F.; Van Turnhout, M.; Marinescu, O.","","2007","Saddle-point construction (SPC) is a new method to insert lenses into an existing design. With SPC, by inserting and extracting lenses, new system shapes can be obtained very rapidly, and we believe that, if added to the optical designer’s arsenal, this new tool can significantly increase design productivity in certain situations. Despite the fact that the theory behind SPC contains mathematical concepts that are still unfamiliar to many optical designers, the practical implementation of the method is actually very easy and the method can be fully integrated with all other traditional design tools. In this work we will illustrate the use of SPC with simple examples that capture the essence of the method. 
The method can be used essentially in the same way even for very complex systems with a large number of variables, in situations where other methods for obtaining new system shapes do not work so well.","optical system design; optimization; saddle points","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:c05ad7d6-5504-4fa4-a14f-496e9bb20928","http://resolver.tudelft.nl/uuid:c05ad7d6-5504-4fa4-a14f-496e9bb20928","Predictability and unpredictability in optical system optimization","Van Turnhout, M.; Bociort, F.","","2007","Local optimization algorithms, when they are optimized only for speed, can behave unpredictably in certain situations: starting points very close to each other lead after optimization to different minima. In these cases, the sets of points which, when chosen as starting points for local optimization, lead to the same minimum (the so-called basins of attraction) have a fractal-like shape. Before it finally converges to a local minimum, optimization started in a fractal region first displays chaotic transients. The sensitivity to changes in the initial conditions that leads to fractal basin borders is caused by the discontinuous evolution path (i.e. the jumps) of local optimization algorithms such as the damped-least-squares method with insufficient damping. At the cost of some speed, the fractal character of the regions can be made to vanish, and the downward paths become more predictable. 
The borders of the basins depend on the implementation details of the local optimization algorithm, but the saddle points in the merit function landscape always remain on these borders.","optimization; optical system design; saddle points; fractals; basins of attraction","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:b9319341-ae54-4b38-a25c-3972f3ac9062","http://resolver.tudelft.nl/uuid:b9319341-ae54-4b38-a25c-3972f3ac9062","Trajectory optimization for a mission to Neptune and Triton","Melman, J.C.P.","Ambrosius, B.A.C. (mentor); Noomen, R. (mentor); Ortega, G. (mentor); Biesbroek, R. (mentor)","2007","","interplanetary; optimization; gravity assist; swing-by; low thrust","en","master thesis","TU Delft, Aerospace Engineering, Astrodynamics and Satellite Systems","","","","","","","","Aerospace Engineering","","","","","" "uuid:703cd3c2-8cf4-48f7-babc-8b33cdd38949","http://resolver.tudelft.nl/uuid:703cd3c2-8cf4-48f7-babc-8b33cdd38949","Optimization technique for ED&PE","Kumar, P.; Bauer, P.","","2007","","optimization; BLDC drive","en","conference paper","Tulip","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","","" "uuid:8eff9ef1-b509-4f3d-b1f7-7d1357c53ff8","http://resolver.tudelft.nl/uuid:8eff9ef1-b509-4f3d-b1f7-7d1357c53ff8","Structured controller synthesis for mechanical servo-systems: Algorithms, relaxations and optimality certificates","Hol, C.W.J.","Scherer, C.W. (promotor); Bosgra, O.H. (promotor)","2006","In many application areas of mechanical servo-systems, the high demands on performance often imply a tightly tuned feedback controller that takes dynamical interaction into account. Model-based H-optimal controller synthesis is a well-suited technique for this purpose. 
However, the state-of-the-art synthesis approach yields controllers with high McMillan degree that cannot be implemented in real-time at high sampling rates because of the limited computational capacity. This motivates constraining the McMillan degree of the controller. The aim of this thesis is to provide numerical tools for H-optimal degree constrained (or otherwise structured) controller synthesis. For this problem we have developed relaxations that are based on Sum-Of-Squares polynomials. Their optimal values are lower bounds on the globally optimal structured controller synthesis problem and can be computed by solving LMI problems. It is guaranteed that the bounds converge to the best achievable performance as we improve our relaxations. To make this technique feasible for plants with high McMillan degree, we proposed a computationally less demanding scheme based on partial dualization. The Sum-Of-Squares relaxations have also been applied to robust polynomial Semi-Definite Programs (SDPs). Also for this case a sequence of relaxations has been developed, whose optimal values converge from below to the optimal value of the robust SDP. Furthermore, for the structured controller synthesis problem an Interior Point algorithm has been developed. It is shown how this algorithm can be made more efficient by exploiting the control-theoretic characteristics of the problem. Conditions have been derived to verify local optimality of the optimized controller. 
Finally, it has been illustrated by real-time experiments that the algorithms described in this thesis can be used to synthesize high-performing fixed-order controllers for a new prototype of a wafer stage.","controller synthesis; static output feedback; optimization; sum-of-squares; matrix inequalities; bmi; lmi; interior point","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","","" "uuid:11464f49-b10b-48ed-9075-9e281514618a","http://resolver.tudelft.nl/uuid:11464f49-b10b-48ed-9075-9e281514618a","Analytical and Numerical Developments in Optimal Shape Design for Aerospace: An overview","Pironneau, O.","","2006","","optimization; optimal shape design; gradient methods; finite element methods","en","conference paper","","","","","","","","","","","","","","" "uuid:63a75aa9-c71e-4439-9d0b-864fe8c2915d","http://resolver.tudelft.nl/uuid:63a75aa9-c71e-4439-9d0b-864fe8c2915d","A continuous adjoint formulation with emphasis to aerodynamic-turbomachinery optimization","Papadimitriou, D.I.; Giannakoglou, K.C.","","2006","This paper summarizes progress, recently made in the Lab. of Thermal Turbomachines of NTUA, on the formulation and use of the continuous adjoint methods in aerodynamic shape optimization problems. The basic features of state of the art adjoint methods and tools which are capable of handling arbitrary objective functions, cast in the form of either boundary or field integrals, are presented. Starting point of the presentation is the formulation of the continuous adjoint method for arbitrary integral objective functionals in problems governed by arbitrary, linear or nonlinear, first or second order state pde's; the scope of this section is to demonstrate that the proposed formulation is general without being restricted to aerodynamics. 
It is noticeable that, regardless of the type of functional (field or boundary integral), the expressions of its gradient with respect to the design variables include boundary integrals only. Thus, the derived adjoints can be used with either structured or unstructured grids and there is no need for repetitive remeshing or computation of field integrals, which increase the CPU cost and degrade the computational accuracy. Then, the presentation focuses on aerodynamic shape optimization problems governed by the compressible fluid flow equations, numerically solved through a time-marching formulation and an upwind discretization scheme for the convection terms. Two design problems, namely the inverse design of a 2D cascade at inviscid flow conditions (used as a test bed for the assessment of three descent algorithms based on the same gradient information) and the design optimization of a 3D peripheral compressor cascade for minimum viscous losses, are presented. For the latter, the flow is turbulent and the field integral of entropy generation, recently proposed by the same authors, is used as objective function.","continuous adjoint; inverse design; optimization; losses minimization; turbomachines","en","conference paper","","","","","","","","","","","","","","" "uuid:cdc345d1-a0b5-4b70-98fb-bc2235c818a6","http://resolver.tudelft.nl/uuid:cdc345d1-a0b5-4b70-98fb-bc2235c818a6","Application of sonic boom optimization to supersonic aircraft design","Daumas, L.; Dinh, Q.V.; Kleinveld, S.; Rogé, G.","","2006","Preliminary results on shape optimization of a wing-body configuration aiming at reducing sonic boom overpressure will be discussed. The optimization process uses a CAD modeler and an Euler CFD code with adjoint. 
Thickness, scale, twist and camber at section level were used to obtain gains in ground pressure signature.","adjoint; CAD modeller; optimization; sonic boom; supersonic aircraft design","en","conference paper","","","","","","","","","","","","","","" "uuid:8b3c60a5-4e17-4680-b7c6-252fb4ae87ca","http://resolver.tudelft.nl/uuid:8b3c60a5-4e17-4680-b7c6-252fb4ae87ca","VIVACE: Multidisciplinary Decision Support","Homsi, P.","","2006","","collaboration; multidisciplinary; optimization; decision; knowledge; data management; virtual enterprise; aeronautic; aircraft; engine","en","conference paper","","","","","","","","","","","","","","" "uuid:197e6db7-921d-4786-958d-b0c06079f1fc","http://resolver.tudelft.nl/uuid:197e6db7-921d-4786-958d-b0c06079f1fc","Realistic high-lift design of transport aircraft by applying numerical optimization","Wild, J.; Brezillon, J.; Mertins, R.; Quagliarella, D.; Germain, E.; Amoignon, O.; Moens, F.","","2006","The design activity within the EUROLIFT II project is targeted towards an improvement of the take-off performance of a generic transport aircraft configuration by a re-design of the trailing edge flap. The involved partners applied different optimization strategies as well as different types of flow solvers in order to cover a wide range of possible approaches for aerodynamic design optimization. The optimization results obtained by the different partners have been cross-checked in order to eliminate solver dependencies and to identify the best obtained design. 
The final selected design has been applied to the wind tunnel model and the test in the European Transonic Wind Tunnel (ETW) at high Reynolds number confirms the predicted improvements.","optimization; high-lift; application; CFD; wind tunnel testing","en","conference paper","","","","","","","","","","","","","","" "uuid:8abc533d-b860-46c1-8868-5eabdb33e415","http://resolver.tudelft.nl/uuid:8abc533d-b860-46c1-8868-5eabdb33e415","Partitioned strategies for optimization in FSI","Bletzinger, K.U.; Gallinger, T.; Kupzok, A.; Wüchner, R.","","2006","In this paper the possibility of the optimization of coupled problems in partitioned approaches is discussed. As a special focus, surface coupled problems of fluid-structure interaction are considered. Well established methods of optimization are analyzed for usage in the context of coupled problems and in particular for a solution through partitioned approaches. The main benefits expected from choosing a partitioned solution strategy as basis for the optimization are: a high flexibility in the usage of different solvers and therefore different approaches for the single-field problems as well as the possibility to apply well tested and sophisticated methods for the modeling of complex problems.","optimization; coupled problems; fluid-structure interaction; partitioned approach","en","conference paper","","","","","","","","","","","","","","" "uuid:fc982426-38af-4ba7-bc57-c3e44f14c4c6","http://resolver.tudelft.nl/uuid:fc982426-38af-4ba7-bc57-c3e44f14c4c6","Aerodynamic optimization of an airfoil using gradient based method","Mirzaei, M.; Roshanian, J.; Nasrin Hosseini, S.","","2006","A gradient based method is presented for optimization of an airfoil configuration. The flow is governed by two dimensional, compressible Euler equations. A finite volume code based on unstructured grid is developed to solve the equations. The procedure is carried out for optimizing an airfoil with initial configuration of NACA 0012. 
The advantage of this technique over other gradient-based methods is its speed of convergence.","CFD; optimization; gradient; objective function; design variables","en","conference paper","","","","","","","","","","","","","","" "uuid:ea7af067-bd46-48c8-a147-fe4cddc936ec","http://resolver.tudelft.nl/uuid:ea7af067-bd46-48c8-a147-fe4cddc936ec","Looking for order in the optical design landscape","Bociort, F.; Van Turnhout, M.","","2006","In present-day optical system design, it is tacitly assumed that local minima are points in the merit function landscape without relationships between them. We will show however that there is a certain degree of order in the design landscape and that this order is best observed when we change the dimensionality of the optimization problem and when we consider not only local minima, but saddle points as well. We have developed earlier a computational method for detecting saddle points numerically, and a method, then applicable only in a special case, for constructing saddle points by adding lenses to systems that are local minima. The saddle point construction method will be generalized here and we will show how, by performing a succession of one-dimensional calculations, many local minima of a given global search can be systematically obtained from the set of local minima corresponding to systems with fewer lenses. As a simple example, the results of the Cooke triplet global search will be analyzed. 
In this case, the vast majority of the saddle points found by our saddle point detection software can in fact be obtained in a much simpler way by saddle point construction, starting from doublet local minima.","saddle point; optimization; optical system design; lithography","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","","" "uuid:cdd281b2-0bc7-4f57-a9fb-3ddbe49c1082","http://resolver.tudelft.nl/uuid:cdd281b2-0bc7-4f57-a9fb-3ddbe49c1082","Designing lithographic objectives by constructing saddle points","Marinescu, O.; Bociort, F.","","2006","Optical designers often insert or split lenses in existing designs. Here, we present, with examples from Deep and Extreme UV lithography, an alternative method that consists of constructing saddle points and obtaining new local minima from them. The method is remarkably simple and can therefore be easily integrated with the traditional design techniques. It has significantly improved the productivity of the design process in all cases in which it has been applied so far.","saddle point; lithography; optical system design; optimization; DUV; EUV","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:b842a4d0-0708-4c37-b3e7-e86f91c72dd4","http://resolver.tudelft.nl/uuid:b842a4d0-0708-4c37-b3e7-e86f91c72dd4","Challenges for process system engineering in infrastructure operation and control","Lukszo, Z.; Weijnen, M.P.C.; Negenborn, R.R.; De Schutter, B.; Ilic, M.","","2006","The need for improving the operation and control of infrastructure systems has created a demand for optimization methods applicable in the area of complex sociotechnical systems operated by a multitude of actors in a setting of decentralized decision making. 
This paper briefly presents the main classes of optimization models applied in PSE system operation, explores their applicability in infrastructure system operation and stresses the importance of multi-level optimization and multi-agent model predictive control. If you want to cite this report, please use the following reference instead: Z. Lukszo, M.P.C. Weijnen, R.R. Negenborn, B. De Schutter, and M. Ilic, “Challenges for process system engineering in infrastructure operation and control,” in 16th European Symposium on Computer Aided Process Engineering and 9th International Symposium on Process Systems Engineering (Garmisch-Partenkirchen, Germany, July 2006) (W. Marquardt and C. Pantelides, eds.), vol. 21 of Computer-Aided Chemical Engineering, Amsterdam, The Netherlands: Elsevier, ISBN 978-0-444-52969-5, pp. 95–100, 2006.","infrastructures; optimization; multi-agent systems; model predictive control","en","report","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","","","" "uuid:37f7ee07-9bb8-4b13-be8f-dc4d27417b0f","http://resolver.tudelft.nl/uuid:37f7ee07-9bb8-4b13-be8f-dc4d27417b0f","Model reduction for dynamic real-time optimization of chemical processes","Van den Berg, J.","Bosgra, O.H. (promotor)","2005","The value of models in process industries becomes apparent in practice and literature, where numerous successful applications are reported. Process models are being used for optimal plant design, simulation studies, and for off-line and online process optimization. For online optimization applications the computational load is a limiting factor. The focus of this thesis is on nonlinear model approximation techniques aiming at reduction of the computational load of a dynamic real-time optimization problem. Two types of model approximation methods were selected from literature and assessed within a dynamic optimization case study: model reduction by projection and physics-based model reduction. 
Model order reduction by projection is partially successful. Even with a strongly reduced number of transformed differential equations it is possible to compute acceptable approximate solutions. Projection does not provide predictable results in terms of simulation error and stability and does not reduce the computational load of simulation. On the other hand, physics-based model reduction appeared to be very successful in reducing the computational load of the sequential dynamic optimization problem.","chemical processes; model reduction; optimization","en","doctoral thesis","","","","","","","","","Design, Engineering and Production","","","","","" "uuid:a29ca0b4-c17d-4a14-99c0-9672b805021e","http://resolver.tudelft.nl/uuid:a29ca0b4-c17d-4a14-99c0-9672b805021e","Uncertainty-based Design Optimization of Structures with Bounded-But-Unknown Uncertainties","Gurav, S.P.","van Keulen, A. (promotor)","2005","","uncertainty; optimization; response surface; parallel computing; MEMS","en","doctoral thesis","Delft University Press","","","","","","","","Mechanical Maritime and Materials Engineering","","","","","" "uuid:7bf2a037-c8eb-44be-96ef-411529c4be0b","http://resolver.tudelft.nl/uuid:7bf2a037-c8eb-44be-96ef-411529c4be0b","Topology Optimization using a Topology Description Function Approach","de Ruiter, M.J.","van Keulen, F. (promotor)","2005","During the last two decades, computational structural optimization methods have emerged, as computational power increased tremendously. Designers now have topological optimization routines at their disposal. These routines are able to generate the entire geometry of structures, provided only with information on loads, supports, and space to work in. The most common way to do this is to partition the available space into elements, and to determine the material content of each of the elements separately. This thesis presents a different approach, namely the Topology Description Function (TDF) approach. 
The TDF is a function parametrized by design variables. The function determines a geometry using a level-set approach. A finite element representation of the geometry then is used to determine how well the geometry performs with respect to objective and constraints. This information is given to an optimization program, which has the purpose of finding an optimal combination of values for the design variables. This approach decouples the geometry description of the design from the evaluation, allowing the designer to tune the detailedness of the geometry and the computational grid separately as desired. In this thesis, the concept of a TDF is explained in detail. Using a genetic algorithm for the optimization turns out to be too computationally expensive; however, it shows the validity of the TDF as a geometry description method. A method based on an intuitive updating scheme shows that the TDF approach can be used to do topology optimization.","level set method; topology; optimization; tdf; topology description function; genetic algorithm; optimality criteria method; structural optimization","en","doctoral thesis","","","","","","","","","Mechanical Maritime and Materials Engineering","","","","","" "uuid:33282f5f-e093-4a9a-88e8-819ccfb40114","http://resolver.tudelft.nl/uuid:33282f5f-e093-4a9a-88e8-819ccfb40114","Model-based optimization of the operation procedure of emulsification","Stork, M.","Bosgra, O.H. (promotor)","2005","Emulsions are widely encountered in the food and cosmetic industry. The first food we consume is an emulsion, namely breast milk. Other common emulsions are mayonnaise, dressings, skin creams and lotions. Equipment often used for the production of oil-in-water emulsions in the food industry consists of a stirred vessel in combination with a colloid mill and a circulation pipe. Within this set-up there are two main variations: i) Configuration I, where the colloid mill acts as a shearing device and at the same time as a pump. 
This configuration is used in the majority of the production facilities, and ii) Configuration II, where the shearing and pumping action are not coupled. The operation procedure for obtaining a certain predefined emulsion quality is often established based on experience (best practice). This is most probably time-consuming (e.g. large experimental efforts for newly developed products) and it is also unclear if the process is operated at its optimum (e.g. in minimum time). Another drawback is that there is no feedback during the production process. Hence, it is not possible to deal with disturbances acting on the process. A possible consequence is that, at the end of the production process, the product quality specifications are not met and the product has to be classified as off-spec. In order to be able to enlarge the efficiency of the production processes and to shorten the time to market of new products - and therewith create an advantage over competition - it is necessary to overcome these limitations of the current operation procedure. In the work reported, a first step is taken in this direction. A model describing the droplet size distribution (DSD) and the emulsion viscosity as a function of time was developed and several off-line optimization studies were performed. The model comprises several fit parameters and experiments were performed in order to estimate the values of these parameters. A number of additional experiments were performed to compare the simulated results with the measurements (model validation). The results of the parameter estimation and the model validation show that the simulated results are qualitatively in good agreement with the measurement data. Given the overall performance of the model it is expected that the model quality is sufficient to render practically relevant optimization results. 
Although the optimization studies have been performed for a model emulsion and small scale equipment and are not yet experimentally validated, the results of this work strongly suggest that it is indeed possible to minimize the production times and to shorten the product development times for new products. This overall conclusion is based on the following observations: 1) The optimization results show that it is beneficial to produce emulsions with Configuration II: - Configuration II allows the production of emulsions with a bi-modal DSD. No operation procedure was found for the production of such an emulsion in Configuration I. - The production of emulsions in Configuration II is always at least as fast as in Configuration I. 2) The approach followed makes it possible to calculate: * Whether an emulsion with a certain, predefined, DSD and emulsion viscosity can be produced. * How the process should be controlled in order to produce such an emulsion. * How the process should be controlled to produce this emulsion in minimal time. 3) The optimization results show that it is possible to produce emulsions with: * A bi-modal DSD. * Less oil, while maintaining a similar DSD and value of the emulsion viscosity evaluated at a shear rate of 10 1/s, by adapting only the operation procedure. Hence, the addition of extra stabilizers is not considered. This offers possibilities for the production of a broader range of emulsion products and could direct product development in a new direction. 
Based on this, it is worthwhile and therefore recommended to expand this research work in the direction of industrial emulsions.","modeling; emulsions; emulsification; optimization; milp; parameter estimation; fryma-delmix; colloid mill; population balance equations; droplet size distribution; mayonnaise","en","doctoral thesis","","","","","","","","","Design, Engineering and Production","","","","","" "uuid:e15f936a-9439-4247-b0f9-051619b34cd4","http://resolver.tudelft.nl/uuid:e15f936a-9439-4247-b0f9-051619b34cd4","Finding new local minima by switching merit functions in optical system optimization","Serebriakov, A.; Bocoirt, F.; Braat, J.","","2005","","optical design; geometrical optics; optimization; merit function; aberrations","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:43fb3a2f-0c02-406a-ad7d-374ec5f71d63","http://resolver.tudelft.nl/uuid:43fb3a2f-0c02-406a-ad7d-374ec5f71d63","Optimization and analysis of deep-UV imaging systems","Serebriakov, A.G.","Braat, J.J.M. (promotor)","2005","This thesis has been devoted to two main subjects: the compensation of birefringence induced by spatial dispersion (BISD) in Deep-UV lithographic objectives and the optimization of optical systems in general.","optimization; lithography; optics","en","doctoral thesis","","","","","","","","","Applied Sciences","","","","","" "uuid:05dfafdc-cd7c-4b17-a92f-8420e5bb78a0","http://resolver.tudelft.nl/uuid:05dfafdc-cd7c-4b17-a92f-8420e5bb78a0","Generating saddle points in the merit function landscape of optical systems","Bociort, F.; Van Turnhout, M.","","2005","Finding multiple local minima in the merit function landscape of optical system optimization is a difficult task, especially for complex designs that have a large number of variables. We discuss here a method that enables a rapid generation of new local minima for optical systems of arbitrary complexity. 
We have recently shown that saddle points known in mathematics as Morse index 1 saddle points can be useful for global optical system optimization. In this work we show that by inserting a thin meniscus lens (or two mirror surfaces) into an optical design with N surfaces that is a local minimum, we obtain a system with N+2 surfaces that is a Morse index 1 saddle point. A simple method to compute the required meniscus curvatures will be discussed. Then, letting the optimization roll down on both sides of the saddle leads to two different local minima. Often, one of them has interesting special properties.","saddle point; optimization; optical system design; lithography","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","","" "uuid:ab738b03-b906-4dc7-9e9c-6ac16446af10","http://resolver.tudelft.nl/uuid:ab738b03-b906-4dc7-9e9c-6ac16446af10","Saddle points in the merit function landscape of lithographic objectives","Marinescu, O.; Bociort, F.","","2005","The multidimensional merit function space of complex optical systems contains a large number of local minima that are connected via links that contain saddle points. In this work, we illustrate a method to construct such saddle points with examples of deep UV objectives and extreme UV mirror systems for lithography. The central idea of our method is that, at certain positions in a system with N surfaces that is a local minimum, a thin meniscus lens or two mirror surfaces can be introduced to construct a system with N+2 surfaces that is a saddle point. When the optimization goes down on the two sides of the saddle point, two minima are obtained. We show that often one of these two minima can be reached from several other saddle points constructed in the same way. 
The practical advantage of saddle-point construction is that we can produce new designs from the existing ones in a simple, efficient and systematic manner.","saddle point; lithography; optimization; optical system design; EUV","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","","" "uuid:1e3ce36d-f1f6-4fbd-9349-42ba2352d668","http://resolver.tudelft.nl/uuid:1e3ce36d-f1f6-4fbd-9349-42ba2352d668","The network structure of the merit function space of EUV mirror systems","Marinescu, O.; Bociort, F.","","2005","The merit function space of mirror systems for EUV lithography is studied. Local minima situated in a multidimensional merit function space are connected via links that contain saddle points and form a network. In this work we present the first networks for EUV lithographic objectives and discuss how these networks change when control parameters, such as aperture and field are varied and constraints are used to limit the variation domain of the variables. A good solution in a network obtained with a limited number of variables has been locally optimized with all variables to meet practical requirements.","network; saddle point; optical system design; EUV lithography; optimization","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","","" "uuid:1b4c982d-8d39-40fc-9184-282f4116a585","http://resolver.tudelft.nl/uuid:1b4c982d-8d39-40fc-9184-282f4116a585","Regularization of Water Flooding Optimization","Malekzadeh, R.","Jansen, J.D. (mentor)","2005","The use of smart well technology to optimize water flooding introduces a large number of control parameters both in space (well segments) and time. The problem of finding the optimal control parameters to maximize net present value as an objective function can be solved with the aid of a gradient-based optimization method. 
Using too many parameters may lead to a large number of local maxima in the objective function, so the gradient-based optimization method may result in suboptimal solutions. In this thesis, proper orthogonal decomposition is applied to regularize gradient-based control parameter optimization by projecting the original high dimensional control space onto a low dimensional subspace and thus reducing the number of control parameters. Since in a low dimensional subspace there are fewer local maxima, the solution is more likely to reach a local maximum that is in the close vicinity of the global solution. To evaluate the efficiency of our proposed method, ordinary multiscale parameterization as developed by Lien et al. (2005) is also applied to the optimization of the control parameters. A multiscale approach starts from optimization of a very coarse representative parameter. Then the number of parameters is gradually increased until convergence is reached. Numerical examples indicate that a regularization approach with the aid of proper orthogonal decomposition may speed up the convergence rate, and may also increase the convergence to the global solution within a shorter optimization time compared to optimization without a regularization technique.
The method effectively reduces the control effort by grouping multiple well settings in space and time and treating them as one control parameter.","smart well; simulation; optimization; regularization","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Department of Geotechnology","","Section for Petroleum Engineering","","" "uuid:a4d313dc-81f6-4f5f-a83a-404f539aa838","http://resolver.tudelft.nl/uuid:a4d313dc-81f6-4f5f-a83a-404f539aa838","Optimization of multilayer reflectors for extreme ultraviolet lithography","Bal, M.F.; Singh, M.; Braat, J.J.M.","","2004","","multilayer; optimization; extreme ultraviolet lithography; graded multilayers; imaging","en","journal article","SPIE","","","","","","","","Applied Sciences","Optics Research Group","","","","" "uuid:c253f0fa-a879-422b-8027-b3de1f91775a","http://resolver.tudelft.nl/uuid:c253f0fa-a879-422b-8027-b3de1f91775a","Avoiding unstable regions in the design space of EUV mirror systems comprising high-order aspheric surfaces","Marinescu, O.; Bociort, F.; Braat, J.","","2004","When Extreme Ultraviolet mirror systems having several high-order aspheric surfaces are optimized, the configurations often enter into highly unstable regions of the parameter space. Small changes of system parameters then lead to large changes in ray paths, and therefore optimization algorithms crash because certain assumptions upon which they are based become invalid. We describe a technique that keeps the configuration away from the unstable regions. The central component of our technique is a finite-aberration quantity, the so-called quasi-invariant, which was originally introduced by H. A. Buchdahl. The quasi-invariant is computed for several rays in the system, and its average change per surface is determined for all surfaces. Small values of these average changes indicate stability.
The stabilization technique consists of two steps: First, we obtain a stable initial configuration for subsequent optimization by choosing the system parameters such that the quasi-invariant change per surface is minimal. Then, if the average changes per surface of the quasi-invariant remain small during optimization, the configuration is kept in the safe region of the parameter space. This technique is applicable to arbitrary rotationally symmetric optical systems. Examples from the design of aspheric mirror systems for EUV lithography will be given.","mirror systems; aspheres; EUV lithography; optimization; relaxation","en","conference paper","SPIE","","","","","","","","Applied Sciences","Optics Research Groep","","","","" "uuid:b73b1b5b-e1d8-4151-a920-6cd5d44af136","http://resolver.tudelft.nl/uuid:b73b1b5b-e1d8-4151-a920-6cd5d44af136","Dynamic Optimization in Business-wide Process Control","Tousain, R.L.","Bosgra, O.H. (promotor); Backx, A.C.P.M. (promotor)","2002","The chemical marketplace is a global one with strong competition between manufacturers. To continuously meet the customer demands regarding product quality and delivery conditions without the need to maintain very large storage levels, chemical manufacturers need to strive for production on demand. In this thesis we research how market-oriented production can be realized for the particular class of multi-grade continuous processes. For this class of processes production on demand is particularly challenging due to the complex trade-off between performing costly and time-consuming changeovers and maintaining high storage levels. The first requirement for market-oriented production is that production management cooperates with purchasing and sales management. We propose the use of a scheduler as a decision support system in a cooperative organization constituted by these players.
In such a scheduler, decision making is represented using decision variables, and their effect on the company-wide objective, which is chosen to be the added value of the company, is modeled. The scheduler then selects a decision strategy that is optimal with respect to the objective and presents this strategy to the decision makers, who use it as the basis for their actual decisions. The company-market interaction is modeled using a transaction-based modeling framework. Therein, not the actual market behavior is modeled, but the expected effect of the company's interaction with the market. Two types of transactions can be modeled in this framework: orders, which result from contracts with suppliers and customers, and opportunities, which express the expected sales and purchases. Two different approaches to the modeling of production decisions are taken, the choice of which depends largely on the implementation of the process control hierarchy that is assumed. In the first approach, production management and control is performed by a single-level controller and the control decisions are the minute-to-minute manipulation of the valves. This approach is academically interesting, though practically intractable due to the combination of long horizons and fast sampling times. In the second approach the process control hierarchy consists of a scheduling layer, at which it is determined what products will be produced when, and a process control layer, which determines how this production is realized. This approach is taken in the rest of the thesis.","chemical processes; optimization; supply chain","en","doctoral thesis","Delft University Press","","","","","","","","Design, Engineering and Production","","","","","" "uuid:ae1f3aba-ce14-4de3-9e9c-8b75072c7f48","http://resolver.tudelft.nl/uuid:ae1f3aba-ce14-4de3-9e9c-8b75072c7f48","Trailing for a better alternative - Logistic optimisation of dredging projects","Nieman, A.","Holierhoek, C.K. (mentor); Ridder, H.A.J.
(mentor); Horstmeier, T.H.W. (mentor); D' Angremond, K.G. (mentor)","2001","Determining the way to make a reclamation project with many excavation areas or borrow areas, and several pieces of dredging equipment at minimum costs takes too much time to do by hand. A linear programming application is made to support the allocation of excavation areas and pieces of equipment to reclamation areas. Such an application was already available at HAM, but it could only be used in limited cases. The newly developed application can be used not just for allocation optimisation of soil in reclamation projects, but also for allocation optimisation in dredging projects, as well as a mix of the two. This application is a linear model of the cost items in a project. Preconditions are added to the model for limits to available sand, limits to project duration, limits imposed by working methods and options for working in joint ventures. The optimisation application can be used for obtaining the cheapest working method while making a tender, or while executing a project. The model is implemented in an executable and a reliable solver is included to calculate optimal solutions. A simple shell is made in Microsoft Excel that provides an interface familiar to the user. The program is tested for stability and speed. The program is also tested on a few projects to establish its practical value. By introducing an execution step, the new optimisation program can be used for complete projects while planning preconditions can be included. Once a project has been cast in the model, it can be used for rapid calculation of different scenarios. In most cases, working methods obtained by optimisation proved to be cheaper than working methods obtained by traditional methods. Some additions can be made in the future. Options to generate input with Monte Carlo simulation can be included in the shell or in the executable.
Time can be saved and mistakes can be avoided if the large amount of data is stored in a database system. A tool can be developed which represents the solution in some graphical form for easier interpretation and comparison. Chapter 2 deals with the problem and goal definition of this project. The optimisation model cannot be used in all types of projects. Chapter 3 describes in what cases and how to use the optimisation model. Chapter 4 explains why optimisation is chosen for achieving the objective of this thesis. The aspects of dredging processes that have an influence on the costs are described in section 5. Section 6 describes how a new model is made with the old model, tjur7^2, as a starting point. Section 7 explains why it is advisable to resort to commercial software for solving the model. The experience obtained from three projects is described in section 8. Chapter 9 summarises the conclusions drawn from developing and testing the optimisation tool. The actions that have to be taken to finish the development and evolve the program ""Optimise"" into a tool with automated analysis are written in section 10.","reclamation; dredging; optimization; programming","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","","" "uuid:e7367a12-2b86-4e56-931c-0e3bbcb93211","http://resolver.tudelft.nl/uuid:e7367a12-2b86-4e56-931c-0e3bbcb93211","Water Demand Management. Approaches, Experiences and Application to Egypt","Mohamed, A.S.","Van Beek, E. (promotor); Savenije, H.G.
(promotor)","2001","","Egypt; demand management; conservation; reuse; new lands; framework for analysis; strategies; criteria; optimization; financial incentives; water resources management","en","doctoral thesis","Delft University Press","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:35fbea8e-ecc3-4209-b09f-dddcd1d5e3e2","http://resolver.tudelft.nl/uuid:35fbea8e-ecc3-4209-b09f-dddcd1d5e3e2","Minimaliseren restlading: ""Volvox Terranova""","Bruinsma, J.","D'Angremond, K. (mentor); Tutuarima, W.H. (mentor); Van Kesteren, W. (mentor); Peerlkamp, K. (mentor); Van Oord ACZ (contributor)","2001","De sleephopperzuiger ""Volvox Terranova"" is eigendom van Van Oord ACZ en heeft een laadcapaciteit van 20.000 m3. Na het walpersen van de lading blijft een restlading over van ca. ---m3. Het doel van dit onderzoek is een voorstel tot aanpassing van de ""Volvox Terranova"" te presenteren om de restlading te minimaliseren, zonder de leegzuigproductie negatief te beinvloeden. Om dit te bereiken is een analyse uitgevoerd van het ontwerp van het schip, de leegzuigvolgorde, de processen in het beun, de verschijningsvorm van de restlading en het huidige jetsysteem. Tevens is in een schaalmodel de effectiviteit van een nieuwe leegzuigdeur beproefd. In een beunsectie is na het leegzuigen van het beun ca. --- a --- m3 zand (restlading) aanwezig. De hoeveelheid restlading in een beunsectie wordt met name beinvloed door de positie van de leegzuigdeur en het functioneren van het jetsysteem. Om de hoeveelheid restlading te verminderen zal in de opschoonslag, -de laatste fase van het leegzuigen-, geconcentreerd en zo laag mogelijk per beunsectie afgezogen moeten worden. Op dit moment wordt in het schip in twee beunsecties tegelijk op een hoogte van --- meter boven het laagste gedeelte van het beun afgezogen. Door het aanbrengen van een nieuwe leegzuigdeur wordt het afzuigpunt verlaagd en zal het afzuigdebiet tot een sectie worden beperkt en daardoor verdubbelen. 
Het jetsysteem zal aangepast dienen te worden aan de situatie met een nieuwe leegzuigdeur. In een proevenserie is de effectiviteit van een nieuwe leegzuigdeur getest. De hoeveelheid restlading was bij de proeven met de nieuwe leegzuigdeur ongeveer ---% van de restlading die gemeten werd bij de proeven met de bestaande leegzuigdeuren. Tijdens het leegzuigen worden door het jetwater uit het jetsysteem de korrelspanningen in het zandpakket gereduceerd, zodat het zand ten gevolge van de zwaartekracht gemakkelijker in de richting van de leegzuigdeur afschuift. Het jetdebiet dat nodig is om de korrelspanningen te verminderen is met name afhankelijk van de korreldiameter van het zand. Voordat de jets voldoende water spuiten zal de jet met behulp van een hogere jetdruk moeten opstarten. In de laatste fase van het leegzuigproces zal de evenwichtshelling van het zand bereikt worden en zal het zand moeten worden weggespoeld door de jets. De erosieve werking van een jet is afhankelijk van de uitstroomsnelheid en de invloedsafstand van deze jet. De uitstroomsnelheid wordt met name bepaald door het verschil in jetdruk over een jet. De invloedsafstand van de jet en het debiet uit de jet nemen toe als de diameter van de nozzle wordt vergroot. In de proeven is geconstateerd dat bij het vergroten van het jetvermogen de afzuigconcentratie wordt verhoogd. Tevens is vastgesteld dat het vergroten van de nozzle diameter de effectiviteit van een jet vergroot en de restlading vermindert.
Op grond van de uitkomsten van het theoretisch onderzoek en de resultaten van de proeven wordt geadviseerd om in de ""Volvox Terranova"" per beunsectie een nieuwe leegzuigdeur aan te brengen en de nozzle diameters in het beun te vergroten.","dredging; trailing hopper dredge; optimization","en","master thesis","TU Delft, Civil Engineering and Geosciences, Hydraulic Engineering","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:9a4537a9-9354-40b5-a59e-266ac18bb697","http://resolver.tudelft.nl/uuid:9a4537a9-9354-40b5-a59e-266ac18bb697","Ontwerpmodel van een schutsluis","Rietdijk, J.","Bakker, K.J. (mentor); Horstmeier, T.H.W. (mentor); De Vries, J.T. (mentor); Vrijling, J.K. (mentor)","2000","Het ontwerpmodel heeft als doelstelling het aan de hand van functionele en operationele eisen bepalen van het 'optimale' ontwerp van een schutsluis. Voor de definitie van het 'optimale' ontwerp is gebruik gemaakt van de ontwerpfilosofie van de Bouwdienst. De Bouwdienst ontwerpt op basis van het voldoen aan de gestelde functionele eisen en op basis van het economisch optimum, vanuit een positie met maatschappelijke verantwoordelijkheid. Onder het economisch optimum wordt verstaan: Een object is economisch optimaal ontworpen indien het aan de gestelde eisen voldoet en indien de som van de stichtingskosten en de verdisconteerde verwachte kosten minimaal zijn. De verwachte kosten behelzen o.a. inspectie- en reparatiekosten die noodzakelijk zijn om gedurende de totale geplande levensduur aan de gestelde eisen te blijven voldoen en kosten van sloop na afloop van de gebruiksfase. Het 'optimale' ontwerp van de sluis is het ontwerp dat voldoet aan de gestelde eisen en waarvan tevens de totale verdisconteerde kosten over de levensduur minimaal zijn. Het ontwerpmodel is gecreëerd aan de hand van de volgende stappen: 1. Bepalen van de gewenste uitkomst 2. Opstellen van de uitgangspunten en aannames voor het model 3. Bepalen van de benodigde invoergegevens 4.
Het opstellen van de relaties in het model 5. Het optimaliseren van het schutsluisontwerp","ship lock; optimization; inland navigation","nl","master thesis","","","","","","","","","Civil Engineering and Geosciences","Section Hydraulic Engineering","","","","" "uuid:d25d26d2-32cc-40eb-a73c-06b31a360cb2","http://resolver.tudelft.nl/uuid:d25d26d2-32cc-40eb-a73c-06b31a360cb2","Optimalisatie van baggerwerkzaamheden op de Midden-Waal","Van Berkel, T.","Havinga, H. (mentor); Van der Schrieck, G.L.M. (mentor); De Vriend, H.J. (mentor)","1999","De Waal is de meest bevaren rivier van Europa, jaarlijks wordt er zo'n 150 miljoen ton vracht over getransporteerd. Dit zal de komende 10 jaar met 40% toenemen. Rijkswaterstaat heeft om de bevaarbaarheid en veiligheid te kunnen blijven waarborgen het Waalproject opgezet. Hoofddoelstelling van dit project is het vergroten van de vaargeul bij OLR (Overeengekomen Lage Rijnafvoer). Voor de Waal is OLR gelijk aan 777 m3/s en zijn de afmetingen van de vaargeul 2,50 m diep en 150 m breed. Voor de nieuwe eisen geldt een diepte van 2,80 m en een breedte van 170 m. Om aan deze eisen te voldoen is voor de Midden-Waal (tussen Nijmegen en Tiel) besloten om alle knelpunten weg te baggeren. Hierbij wordt het gebaggerde zand in diepere delen van de rivier teruggestort. Uit vooronderzoek is gebleken dat deze baggerwerkzaamheden kunnen worden geoptimaliseerd. Dit door gebruik te maken van de natuurlijke morfologische reactie van de rivier op ingrepen in het dwarsprofiel. Met name het nivelleren van bochten, waarbij zand vanuit de ondiepe binnenbocht in de diepe buitenbocht wordt gestort, en opeenvolgende zandvangen zijn veelbelovend. Het doel van dit onderzoek is een analyse van de mogelijkheden om de baggerhoeveelheden te minimaliseren via dergelijke ingrepen in het dwarsprofiel. De morfologische reacties op ingrepen zijn met Sobek-Sedredge berekend.
Dit rekenpakket simuleert in een quasi twee-dimensionale omgeving de sedimentbeweging en morfologie in rivieren. Doordat niet eerder met dit pakket was gewerkt is veel tijd besteed aan het opzetten van het model. De optimalisatie van het nivelleren is gedaan door te variëren met de mate van nivellering, de lengte waarover de ingreep plaatsvindt en het tijdsinterval tussen onderhoudsbaggerwerkzaamheden. De resultaten van deze berekeningen zijn vergeleken met de door Rijkswaterstaat geplande werkzaamheden. Bij deze werkzaamheden wordt elk knelpunt weggebaggerd en wordt het vrijgekomen zand in diepe gedeeltes van de rivier teruggestort. Uit deze vergelijking bleek dat de jaarlijkse onderhoudsbaggerwerkzaamheden bij bochtnivellering groter zijn dan bij de huidige aanpak van knelpunten. Eveneens bleek door het snelle uitdempen van de morfologische reactie de breedte- en dieptewinst te gering te zijn om veel knelpunten op te lossen. Bij de berekeningen met zandvangen kon door beperkingen van het rekenmodel alleen worden gekeken naar de gevolgen van een enkele zandvang in plaats van opeenvolgende zandvangen. Hierbij bleek dat de morfologische reactie geïnduceerd door een zandvang snel uitdempt en dat onderhoudsbaggerwerkzaamheden groter zijn dan de geplande werkzaamheden. De hoofdconclusie van dit onderzoek is dat het onderzochte gebruik van de morfologische reactie op ingrepen in het dwarsprofiel om de baggervolumina te minimaliseren geen voordelen oplevert.","dredging; optimization; river maintenance","nl","master thesis","","","","","","","","","Civil Engineering and Geosciences","Section Hydraulic Engineering","","","","" "uuid:0bc0134e-c5e8-4062-956d-979d049352a8","http://resolver.tudelft.nl/uuid:0bc0134e-c5e8-4062-956d-979d049352a8","Dynamic Water-System Control - Design and Operation of Regional Water-Resources Systems","Lobbrecht, A.H.","Segeren, W.A. (promotor); Lootsma, F.A.
(promotor)","1997","","water management; water resources; control system; real-time control; dynamic control; optimization; successive linear programming; interests; strategy; design","en","doctoral thesis","","","","","","","","","Civil Engineering and Geosciences","","","","","" "uuid:6b34b76a-72e7-4922-9a6a-b2f389b53877","http://resolver.tudelft.nl/uuid:6b34b76a-72e7-4922-9a6a-b2f389b53877","Verkenning genetische algorithmen, een hulpmiddel bij de inrichting van een Rijntak","Goossens, J.G.C.M.; Boogaard, H.F.P. van den","","1996","","Waal; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","","" "uuid:d1f186a5-6601-4bfb-a72f-9e007977d6e9","http://resolver.tudelft.nl/uuid:d1f186a5-6601-4bfb-a72f-9e007977d6e9","Interior point techniques in optimization: Complementarity, sensitivity and algorithms","Jansen, B.","Lootsma, F.A. (promotor); Boender, C.G.E. (promotor)","1996","","optimization; sensitivity analysis; interior point algorithms","en","doctoral thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","","" "uuid:e80f3094-dbf5-4df2-b9e5-73e0937e26ec","http://resolver.tudelft.nl/uuid:e80f3094-dbf5-4df2-b9e5-73e0937e26ec","Fuzzy predictive control based on human reasoning","Babuska, R.; Sousa, J.; Verbruggen, H.B.","","1995","","predictive control; fuzzy decision making; optimization; learning","en","conference paper","Delft University of Technology","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","","" "uuid:717630e4-194c-4d2a-b4d1-d7f3929b5608","http://resolver.tudelft.nl/uuid:717630e4-194c-4d2a-b4d1-d7f3929b5608","User's manual for the computer program CUFUS: Quick design procedure for a CUt-out in a FUSelage version 1.0","Heerschap, M.E.","","1995","","Structural design procedures; cut-outs; pressurized fuselages; finite elements; optimization; sensitivity analysis; NASTRAN; PATRAN","en","report","Delft University of 
Technology","","","","","","","","Aerospace Engineering","","","","","" "uuid:afd31d18-2efe-4149-afbe-a8f946c7c2c7","http://resolver.tudelft.nl/uuid:afd31d18-2efe-4149-afbe-a8f946c7c2c7","Optimization of design of IMS racing yachts","van Oossanen, P.","","1995","","optimization; yachts","","other","","","","","","","","indefinite","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Ship Design, Production and Operation","","","" "uuid:a65dcff7-5005-4a96-9b25-0789d7ea095a","http://resolver.tudelft.nl/uuid:a65dcff7-5005-4a96-9b25-0789d7ea095a","Lokatiekeuze monsternamestation in de Nieuwe Waterweg: Optimalisatiestudie meetlokatie(s) en methodiek","Bleeker, F.J.; Bons, C.A.","","1993","","waterkwaliteitsmeting; water quality measurement; Nieuwe Waterweg; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","","" "uuid:f381200a-8c95-47b7-911e-963241f5d4fc","http://resolver.tudelft.nl/uuid:f381200a-8c95-47b7-911e-963241f5d4fc","Computer aided optimum design of rubble-mound breakwater cross-sections: Manual of the RUMBA computer package, release 1","De Haan, W.","","1989","The computation of the optimum rubble-mound breakwater crosssection is executed on a micro-computer. The RUMBA computer package consists of two main parts: the optimization process is executed by a Turbo Pascal programme, the second part consists of editing functions written in AutoLISP. AutoLISP is the programming language within AutoCAD. The quarry production, divided into a number of categories, and long-term distributions of deep water wave heights and water levels, form the basis of the computation. Concrete armor units have been excluded from the computation. Deep water wave heights are converted to wave heights at site. A set of alternative cross-sections is computed based on both functional performance criteria, and Van der Meer's stability formulae for statically stable structures. 
Construction costs and maintenance costs are determined for each alternative. The optimum is derived by minimizing the sum of the construction costs and maintenance costs. Moreover, the programme provides means to economize the use of the quarry. At this stage the computer programme is useful for feasibility studies of harbour protection or coastal protection in regions where use can be made of a quarry in the neighbourhood of the project site and the use of concrete armor units is excluded in advance. Briefly, a method is described to extend the computer programme to the use of concrete armor units.","breakwater; armour units; optimization","en","report","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","","" "uuid:3a4a1ebc-f64a-4fba-8d46-b62dd47ca290","http://resolver.tudelft.nl/uuid:3a4a1ebc-f64a-4fba-8d46-b62dd47ca290","Illustrative examples of optimization techniques for quantitative and qualitative water management: Report on investigation","Verhaeghe, R.J.; Tholen, N.","","1983","","waterbeheer; water resources management; waterkwaliteit; water quality; optimalisering; optimization","en","report","Deltares (WL)","","","","","","","","","","","","","" "uuid:4d4806a8-3c2d-4e3e-abe2-f0a40476ef72","http://resolver.tudelft.nl/uuid:4d4806a8-3c2d-4e3e-abe2-f0a40476ef72","Optimalisatie op basis van lineair programmeren (LP) en dynamisch programmeren (DP): Mogelijkheden en beperkingen","Abraham, G.; Beek, E.
van","","1982","","beslissingsondersteunende systemen (BOS); decision support systems (DSS); waterbeheer; water resources management; programmering; programming; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","","" "uuid:09369434-a255-45f4-a816-baa09f830394","http://resolver.tudelft.nl/uuid:09369434-a255-45f4-a816-baa09f830394","Optimalisatietechnieken in kwantitatief waterbeheer: Ontwerp van beheerstrategieën in PAWN","Samson, J.; Dijkman, J.P.M.","","1981","","beslissingsondersteunende systemen (BOS); decision support systems (DSS); waterbeheer; water resources management; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","","" "uuid:d42a86c7-b46c-471f-ad18-2e74cc461b74","http://resolver.tudelft.nl/uuid:d42a86c7-b46c-471f-ad18-2e74cc461b74","Optimalisatietechnieken in kwantitatief en kwalitatief waterbeheer","Verhaeghe, R.J.","","1978","","waterbeheer; water resources management; waterkwaliteit; water quality; grondwaterbeheer; groundwater management; watervoorziening; water supply; optimalisering; optimization","nl","report","Deltares (WL)","","","","","","","","","","","","","" "uuid:3bfeced0-7f7b-4cda-82a3-be291e9d8ffe","http://resolver.tudelft.nl/uuid:3bfeced0-7f7b-4cda-82a3-be291e9d8ffe","Conception de réseau iBGP","Buob, M.O.; Uhlig, S.; Meulle, M.","","","BGP is used today by all Autonomous Systems (AS) in the Internet. Inside each AS, iBGP sessions distribute the external routes among the routers. In large ASs, relying on a fullmesh of iBGP sessions between routers is not scalable, so route-reflection is commonly used. The scalability of route-reflection compared to an iBGP full-mesh comes at the cost of opacity in the choice of best routes by the routers inside the AS. This opacity induces problems like suboptimal route choices in terms of IGP cost, deflection and forwarding loops. 
In this work, we propose a solution to design iBGP route-reflection topologies that lead to the same routing as an iBGP full-mesh while having a minimal number of iBGP sessions. Moreover, we compute a robust topology even if a single node or link failure occurs. We apply our methodology to the network of a tier-1 ISP. Twice as many iBGP sessions are required to ensure robustness to a single IGP failure. The number of required iBGP sessions in our robust topology is, however, not much larger than in the current iBGP topology used in the tier-1 ISP network.","BGP; route-reflection; IBGP topology design; optimization","en","conference paper","CFIP","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Network Architectures and Services","","","",""