A new parametric study on optimizing plate girders in S890 steel was conducted as well, to address the usefulness of steels with higher yield strength in plate girders subjected to bending. This study again used the geometry used by Abspoel. The results showed a decrease in maximum web slenderness, but still a significant increase in bending moment capacity compared to the S690 plate girders. It was shown that using an optimized S890 plate girder instead of a hot-rolled section made from the same S890 steel could reduce the use of steel by more than 80%.

After the parametric studies showed increasing capacity, the geometry used to numerically model the plate girders was critically examined using small-scale numerical studies in FEM software. These studies showed that not only the slenderness of the web is a factor in the bending moment capacity of a plate girder, but that the flange geometry also plays a significant role. It was shown that by increasing the length of the tested part of the girder, the failure mode could change from flange yielding to an unstable mode in which the flange rotates around its longitudinal axis, resulting in a much lower bending moment capacity.

An additional investigation into a hybrid steel composition demonstrated the potential of this optimization: by adding lower-grade steel, more ductility was obtained, because these parts yield prior to the compression flange, resulting in a potentially safer design.","Steel Plate girders; optimization; bending moment capacity; plate buckling; Slender plate girders","en","master thesis","","","","","","","","","","","","","","" "uuid:6eb18a63-3cc5-43d5-a38c-a8992b9cddd1","http://resolver.tudelft.nl/uuid:6eb18a63-3cc5-43d5-a38c-a8992b9cddd1","Future City Hydrogen: Reality or Utopia?: A techno-economical feasibility study of an optimal stand-alone Solar-Electrolyzer-Battery-FuelCell system for residential utilization","Tamarzians, Michel (TU Delft Electrical Engineering, Mathematics and Computer Science)","Smets, Arno (mentor); Isabella, Olindo (graduation committee); Rueda Torres, Jose (graduation committee); Delft University of Technology (degree granting institution)","2019","The population worldwide is growing rapidly, which leads to an increase in energy demand. Simultaneously, the established energy resources are being depleted and contribute negatively to the climate. The necessity for a sustainable and inexhaustible energy source, to deal with the increasing energy demand in an ecologically friendly way, will play a key role in the 21st century. One of the most predictable and inexhaustible renewable energy sources is the Sun. Nevertheless, changing weather conditions, like rain and clouds, winter and summer, result in daily and seasonal fluctuations. A reliable stand-alone solar system requires a robust storage method to tackle the daily and seasonal fluctuations that can potentially result in deficit or dumped energy.

Generally, a battery bank is adopted in stand-alone solar systems, but its low energy density makes a battery bank unsuitable as a seasonal storage method. Seasonal storage can instead be implemented through the production and consumption of hydrogen. Hydrogen has a high energy density compared to batteries (142 MJ/kg vs 0.95 MJ/kg), but its low round-trip efficiency prevents using hydrogen as a daily storage method. For a highly reliable and optimally sized stand-alone energy system, a combination of a battery bank and hydrogen is therefore used as the storage method. The combined storage method can be used in times of excess and deficit energy. This results in a so-called stand-alone hybrid PV-Electrolyzer-Battery-FC energy system. In this final thesis project a stand-alone hybrid PV-Electrolyzer-Battery-FC energy system is modelled and optimized to determine the current and future feasibility, both technological and economic, for residential utilization. A simulation model of the hybrid energy system is designed in TRNSYS. The model is optimized by minimizing the loss of load probability (LLP) and levelized cost of energy (LCOE) for the stand-alone hybrid PV-Electrolyzer-Battery-FC energy system at residential level in TRNOPT. Several cases are optimized based on the electrical, heat and mobility demand. The optimization method used is a combination of the particle swarm optimization (PSO) and Hooke-Jeeves optimization algorithms, implemented by GenOpt.
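The Hooke-Jeeves half of this hybrid scheme is easy to sketch. Below is a minimal pattern-search implementation, with a toy quadratic objective standing in for the expensive TRNSYS evaluation of LLP and LCOE; all names and values are illustrative, not taken from the thesis.

```python
# Minimal Hooke-Jeeves pattern search, of the kind GenOpt uses for local
# refinement. The quadratic objective is a stand-in for the expensive
# simulation-based cost evaluation; its minimum is at (3.0, -1.0).

def objective(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def explore(f, base, step):
    """Probe each coordinate +/- step, keeping improvements."""
    x = list(base)
    for i in range(len(x)):
        for delta in (step, -step):
            trial = list(x)
            trial[i] += delta
            if f(trial) < f(x):
                x = trial
                break
    return x

def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6):
    base = list(x0)
    while step > tol:
        new = explore(f, base, step)
        if f(new) < f(base):
            # Pattern move: jump further along the improving direction.
            pattern = [2 * n - b for n, b in zip(new, base)]
            candidate = explore(f, pattern, step)
            base = candidate if f(candidate) < f(new) else new
        else:
            step *= shrink  # No improvement: refine the mesh.
    return base

x_opt = hooke_jeeves(objective, [0.0, 0.0])
```

In the hybrid scheme, PSO first explores the design space globally and Hooke-Jeeves then polishes the best candidate, which suits noisy, derivative-free simulation objectives.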

It is established that the proposed stand-alone hybrid PV-Electrolyzer-Battery-FC system is technically feasible for fulfilling the annual electrical demand of a typical Dutch household. The feasible system size consists of 19 PV modules, a battery capacity of 25.5 kWh and a tank volume of 1.24 cubic meters, at an LCOE of 1.04 €/kWh. If the future prices of the main components can be reduced to 0.01 €/Wp for PV, 0.01 €/Wh for the battery and 0.01 €/W for the electrolyzer and fuel cell, the hybrid system can potentially reach an LCOE of 0.28 €/kWh. Reduction of the prices can be realized through large scale production, large scale implementation and technology maturity. In the end, an LCOE of 0.17 €/kWh can be realized by renewable energy systems if these future prices are realized and the following conditions are met: (1) the roof area is fully covered by PV modules and (2) the production, consumption and storage of hydrogen are centralized to spread the infrastructure costs over all consumers. This can induce a so-called hydrogen economy in the future, whereby hydrogen gas can be the sustainable link between the increasing energy demand and the depleting fossil fuels.","Solar-Battery-Hydrogen System; Alkaline Electrolyzer; PEM fuel cell; Autonomous; Hybrid; optimization; Hooke-Jeeves; Particle Swarm Optimization; Residential; Netherlands","en","master thesis","","","","","","","","","","","","Sustainable Energy Technology","","" "uuid:6d0e608e-b4d6-4d7f-8f6e-1ffed2802347","http://resolver.tudelft.nl/uuid:6d0e608e-b4d6-4d7f-8f6e-1ffed2802347","An optimization based approach to autonomous drifting: A scaled implementation feasibility study","Verlaan, Bram (TU Delft Mechanical, Maritime and Materials Engineering; TU Delft Delft Center for Systems and Control)","Keviczky, Tamas (mentor); Delft University of Technology (degree granting institution)","2019","Development of the autonomous vehicle has been a trending topic over the last few years.
The automotive industry is continuously developing Advanced Driver-Assistance Systems (ADAS) that partially take over the driver’s workload. This has resulted in an increase in vehicle safety and a decrease in fatal crashes [1]. Full vehicle autonomy has not yet been reached, as the control systems involved are not yet capable of handling every situation. One of these critical situations is when a vehicle enters the unstable motion of drifting. A vehicle is prone to drifting on low-friction surfaces, and even during these generally unstable maneuvers the autonomous system should be able to remain in control. An autonomous drifting controller should match the ability of rally drivers, who know how to handle and keep control of a vehicle while drifting. The objective of this thesis is to design a control system which is capable of handling a vehicle during a drifting motion while following a desired path. Vehicle dynamics are modeled as a three-state bicycle model to simplify the complex dynamics of the vehicle and the tyre-road interaction. The definition of longitudinal wheel slip is reformulated to a smooth alternative to accommodate gradient-based solving. With the system dynamics defined, the drifting motion is analyzed and equilibrium points are identified, showing differences between low- and high-friction surfaces. Initially, a Model Predictive Control (MPC) strategy is applied with the purpose of steering the vehicle to desired drifting equilibria. Hereafter, the control system is extended with path-following properties, and the addition of a dynamic velocity controller allows a larger range of equilibria to be reached. The simulation setup intends to capture the experimental environment of the Network Embedded Robotics DCSC lab (NERDlab) at the Delft Center for Systems and Control (DCSC) department.
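The smooth slip reformulation can be illustrated as follows. The abstract does not give the exact formula used in the thesis, so the smoothing below is a common choice shown for illustration only: the non-smooth denominator of the standard slip ratio is replaced by a smooth approximation so that gradient-based solvers can differentiate through it.

```python
import math

# The standard longitudinal slip ratio divides by the vehicle speed,
# which is non-smooth (and singular) near v = 0: a problem for
# gradient-based optimization. A common remedy, illustrative here,
# replaces the denominator with sqrt(v^2 + eps^2).

def slip_standard(v, omega, r):
    # kappa = (omega*r - v) / |v|, undefined at v = 0.
    return (omega * r - v) / abs(v)

def slip_smooth(v, omega, r, eps=1e-2):
    # Smooth everywhere; approaches the standard definition for |v| >> eps.
    return (omega * r - v) / math.sqrt(v * v + eps * eps)

# At speed the two definitions agree closely ...
k1 = slip_standard(20.0, 42.0, 0.5)
k2 = slip_smooth(20.0, 42.0, 0.5)
# ... while the smooth version also remains finite at standstill.
k0 = slip_smooth(0.0, 1.0, 0.5)
```

The smoothing parameter `eps` trades accuracy at low speed against conditioning of the gradients.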
Simulating a 1:10 scaled model makes it possible to investigate the challenges that arise when implementing the control strategy on a scaled vehicle. These simulations show that autonomous drift control using the designed MPC strategy is possible, even when accounting for possible uncertainties such as delay, noise, and model mismatch.","optimization; control; autonomous; drifting; vehicle","en","master thesis","","","","","","","","","","","","Mechanical Engineering | Systems and Control","","" "uuid:e08c31c2-1371-465d-a5bc-666433945249","http://resolver.tudelft.nl/uuid:e08c31c2-1371-465d-a5bc-666433945249","Landing Gear Design Integration for the TU Delft Initiator","van Oene, Nick (TU Delft Aerospace Engineering)","Vos, Roelof (mentor); Brügemann, Vincent (graduation committee); Veldhuis, Leo (graduation committee); Delft University of Technology (degree granting institution)","2019","The Delft University of Technology is developing an MDO tool for the conceptual design of transport aircraft. However, the current program is not able to investigate the influence of the undercarriage design on the weight, drag and geometry of transport aircraft. This research proposes a new design method for the undercarriage, for which a new design module is created and integrated into the Initiator architecture. This new method allows the user to investigate the influence of the undercarriage design on the weight, drag and geometry of the transport aircraft concept.

By designing the undercarriage for six existing aircraft, it is shown that the updated Initiator is able to reliably and consistently design an undercarriage for a given transport aircraft concept. Also, two test cases demonstrate that the new method allows the user to evaluate the impact of the undercarriage on the drag, weight and geometry of the concept.","Landing gear; Undercarriage; Concept Design; MDO; Initiator; optimization","en","master thesis","","","","","","","","","","","","Aerospace Engineering","","" "uuid:396e5112-19a0-48d3-b83d-9341a9fad583","http://resolver.tudelft.nl/uuid:396e5112-19a0-48d3-b83d-9341a9fad583","Pumping when the wind blows: Demand response in the Dutch delta","van der Heijden, Ties (TU Delft Civil Engineering and Geosciences)","Abraham, Edo (mentor); van Nooijen, Ronald (mentor); Palensky, Peter (mentor); Lugt, Dorien (mentor); Delft University of Technology (degree granting institution)","2019","This thesis investigates the potential of a large pumping station in IJmuiden, the Netherlands, for participating in Demand Response. Due to climate change, renewable energy is on the rise. The intermittency of energy, together with its unpredictable supply, is a big hurdle for the energy transition. Two methods are promising solutions to this problem: large scale energy storage and demand response. Since large scale energy storage is not yet economically feasible, demand response has an important role to play in the early days of the energy transition.

Using energy when it is generated requires a data stream on production from the generation facilities, which is not (yet) widely available. The market price, however, is an indication of the scarcity of energy, since it is based on the ratio between supply and demand. Besides that, there is a correlation between a low energy price and sustainable energy production, since the marginal costs of sustainable energy production are lower than those of fossil energy production.
This makes using sustainable energy cheaper than fossil energy, and gives Demand Response a business case.

In this thesis, a Model Predictive Controller is created that uses energy market data to minimize energy costs. Multiple energy markets are analyzed with respect to their suitability for the pumping station in IJmuiden to act on them. The day-ahead market is called the APX in the Netherlands, and this is where energy is bought and sold the day before consumption. The intraday market, also called the flexibility market, is where energy can be bought and sold up to 5 minutes before consumption. A strategy combining these two markets is evaluated. This is done by using a predicted day-ahead price, generated by a SARIMA model, to create a plan. This plan is then followed, but deviations from the plan are allowed against the intraday market price.

Due to imperfections of the market (mismatch between supply and demand), imbalances occur. These imbalances result in frequency and voltage deviations of the grid. TenneT, the Dutch TSO (Transmission System Operator), is responsible for minimizing these imbalances. In order to minimize the imbalance, TenneT gives a real-time indication of the imbalance on the grid, and positive contributions are rewarded while negative contributions are punished. This is done through the imbalance price: a price per volume of imbalance caused or solved. The imbalance price is based on the aFRR market, where bids can be made for possible activation. Since the imbalance market is a fast-acting market, it is not suitable for a large pumping station like IJmuiden. However, the aFRR market is analyzed in this thesis as well.

The effects of expected future developments, like sea level rise and energy market changes, are analyzed and simulated as well. A higher sea level would result in more pumping, and less discharging under gravity.
This causes the pump schedule to become less flexible. The results show that it is possible to apply demand response to a pumping station, and the intraday market makes it possible for the MPC to adjust its energy use during the day. The aFRR market analysis shows a lot of potential for the pumping station, possibly making up for all energy costs made on the spot markets.

The conclusion of this thesis is that Rijkswaterstaat can possibly save energy costs on pumping, compared to the fixed energy price, provided by Rijkswaterstaat, that they pay now. Based on a reference scenario where the MPC only minimizes energy use, and a fixed ENDEX energy price, the proposed MPC incurs about 10% less costs in the German market scenario. The Dutch market scenario does not show cost savings. In the Netherlands there is not much correlation between low energy prices and renewable energy yet, since renewable energy is not a big part of the Dutch energy mix. This correlation is expected to become more pronounced when the Dutch energy mix becomes more sustainable, which is expected to result in lower CO2 emissions through the energy use of the pumping station.
However, more research is needed to confirm this.","pumping; demand; response; side; management; smart; grid; sustainable; energy; market; day ahead; intraday; optimization; pyomo; ipopt; NLP; mpc; model; predictive; control; schedule; water; ijmuiden; pumping station; ijsselmeer; markermeer; noordzeekanaal; amsterdam-rijnkanaal; rijkswaterstaat","en","master thesis","","","","","","","","","","","","Civil Engineering | Water Management","","52.470852, 4.601499" "uuid:1a15154f-7d08-4c5c-bdc1-4966f958e498","http://resolver.tudelft.nl/uuid:1a15154f-7d08-4c5c-bdc1-4966f958e498","Automated dig-limit optimization through simulated annealing","Hanemaaijer, Thijs (TU Delft Civil Engineering and Geosciences)","Wambeke, Tom (mentor); van Duijvenbode, Jeroen (mentor); Buxton, Mike (mentor); Soleymani Shishvan, Masoud (mentor); Delft University of Technology (degree granting institution)","2018","","dig-limit; simulated annealing; mine planning; dig-lines; optimization; meta-heuristic; ore-waste classification; dilution; ore loss","en","master thesis","","","","","","","","","","","","Applied Earth Sciences","","" "uuid:31642fd0-f382-4b9a-a78c-5bfdcb48fa31","http://resolver.tudelft.nl/uuid:31642fd0-f382-4b9a-a78c-5bfdcb48fa31","Optimizing closure works: A case study on the Kalpasar closure dam","de Jong, Han (TU Delft Civil Engineering and Geosciences)","Jonkman, Bas (mentor); Mooyaart, Leslie (mentor); Broos, Erik (mentor); van den Bos, Jeroen (graduation committee); Delft University of Technology (degree granting institution)","2018","Constructing a dam across a tidal basin has always been a long-term integral solution to many water-related problems of the surrounding area, such as flooding, river control and fresh water storage. However, the closure works of large basins come with immense challenges. This research treats the closure strategy to close the Gulf of Khambhat in India.
The project is known as ""Kalpasar"", which aims to create a fresh water reservoir in the Gulf of Khambhat by constructing a 35 km dam across the estuary. The Kalpasar project has been on the Indian Government's agenda since 1986. Royal Haskoning was involved in the pre-feasibility study, which was presented in 1998. However, due to an alignment change to a more northern position, earlier proposed closure work designs are now considered out of date.

To keep this research relevant over time and to assist the Kalpasar development project in optimizing a new design for the closure works, this research develops a fundamental parametric optimization tool to quickly perform a first-order evaluation of the costs of possible closure strategies.

The tool as a product along with case results are delivered to the Kalpasar development project for further design optimization.

Closing the tidal basin involves closing a certain wet cross-section along the chosen dam alignment, through which large tidal currents penetrate, caused by tidal differences of up to 11 m. Complexity is caused by increasing tidal flow velocities due to the increasing constriction of the wet cross-section during the closure. The developed optimization tool can evaluate and compare six pre-programmed strategies to close a multi-sectional wet cross-section in time, on the costs of three fundamental design requirements or ""cost factors"": required dam material, bed protection and equipment. Using a multi-sectional storage model to compute the flow velocities in the gap, the channels and tidal flats can be individually modeled, after which they are linked as a system. The model reacts as a system to the changes in flow area caused by closing certain cross-sections (a channel or a tidal flat). The individual cross-sections can be closed strategically by defining their closure method (horizontal, vertical or sudden), execution phase and construction capacity. These are called ""strategic input parameters"". Defined for all sections, they determine the closure sequence of the system in time. Optimization is achieved when the strategic input parameters define a closing sequence which minimizes the combined cost of all cost factors.
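The storage-model idea can be sketched in a deliberately simplified single-section form; the thesis uses a multi-sectional model, and all parameter values below are illustrative, not taken from the Kalpasar case.

```python
import math

# Simplified single-section storage model: the basin level follows from
# a mass balance, and the gap flow from a discharge relation. Constricting
# the gap raises the head difference and thus the gap velocity.
# All numbers are illustrative, not from the Kalpasar case.

def max_gap_velocity(gap_area, basin_area=1.0e9, amplitude=5.5,
                     period=44700.0, mu=1.0, g=9.81, dt=60.0):
    """Largest flow velocity in the closure gap over one tidal cycle."""
    h_basin = 0.0
    u_max = 0.0
    t = 0.0
    while t < period:
        h_sea = amplitude * math.sin(2.0 * math.pi * t / period)
        dh = h_sea - h_basin
        q = mu * gap_area * math.copysign(math.sqrt(2.0 * g * abs(dh)), dh)
        h_basin += q / basin_area * dt          # mass balance in the basin
        u_max = max(u_max, abs(q) / gap_area)   # u = Q / A_gap
        t += dt
    return u_max

# A narrower gap lets the basin lag the tide more, increasing the velocity.
u_wide = max_gap_velocity(gap_area=200000.0)
u_narrow = max_gap_velocity(gap_area=20000.0)
```

Linking several such sections (channels and tidal flats) and closing them in a chosen order is what turns this balance into the strategy-evaluation tool described above.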

Subsequent to the storage model, three computational models are introduced to quantify the required dam material, bed protection and equipment. Based on earlier research, the material model utilizes only quarried rock for gradual closures and sluice caissons for sudden closures. The equipment model utilizes large dump trucks for horizontal closures and ships or a temporary cable-way/bridge system for vertical closures. The construction capacity is linked to material and bed protection models, since both design requirements are time dependent. Increasing construction capacity can therefore decrease these requirements.

Since the subsequent models largely depend on the flow velocity, an attempt to validate and calibrate the storage model was performed using results from previous research and a 2D-H Delft3D model. Deviations with respect to the Delft3D model were significantly large (a factor 2-3), because storage models can only be utilized if the basin size and the remaining gap are small (usability limits). Therefore, calibration was performed by introducing an artificial contraction factor to compensate for the error in the flow velocity. An exponential relation was determined linking the error to the constriction percentage of the gap. With increasing constriction percentage, the error decreased, because the usability limits of the storage model were increasingly satisfied. The artificial contraction factor can be used to optimize the closure of the Gulf of Khambhat. However, for general use, the model should be calibrated to each specific site.
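The calibration step can be sketched as a log-linear least-squares fit of an exponential error model. The calibration points below are invented purely for illustration; the thesis derives such points from the Delft3D comparison.

```python
import math

# Illustrative calibration sketch: fit error(c) = a * exp(-b * c) between
# the storage-model velocity error and the constriction percentage c, and
# use it as an artificial contraction factor. The data points are made up.

def fit_exponential(points):
    """Least-squares fit of error = a*exp(-b*c) via its log-linear form."""
    n = len(points)
    xs = [c for c, _ in points]
    ys = [math.log(e) for _, e in points]
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    a = math.exp(y_mean - slope * x_mean)
    b = -slope
    return a, b

# (constriction %, velocity error factor): the error shrinks as the gap closes.
calibration = [(20.0, 2.9), (40.0, 2.1), (60.0, 1.6), (80.0, 1.2)]
a, b = fit_exponential(calibration)

def contraction_factor(c):
    # Divide the storage-model velocity by this factor to compensate.
    return a * math.exp(-b * c)
```

With such a fit, the correction can be applied continuously during the simulated closure instead of only at the calibration points.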

Case study results show that when multiple cross-sections are used to model the bathymetry instead of a single cross-section, the optimal strategy can change from fully vertical to a combination of horizontal and vertical with a specific capacity. Utilizing the developed model for the Kalpasar case is therefore recommended, because the complex bathymetry creates many possible strategies and cannot be reliably modeled with single cross-sectional models. The strategy that showed the most potential for further optimization is: first closing the tidal flats horizontally by forward dumping of rocks, while closing the channels up to 40% of their depth with dumping ships, after which the remaining gap is closed vertically by a cable-way or bridge system. This strategy is commonly suggested by existing literature, thereby increasing the reliability and validity of the optimization model.

A second case study showed negative effects of increasing construction capacity on the total cost. However, these case results are based on assumed costs and cost functions for equipment, which should be verified by contractors first. Bed protection requirements did decrease significantly by increasing construction capacity, showing potential for development of high capacity closure equipment to avoid these costs. Further future development should focus on vertical closure equipment to decrease both material and bed protection costs.

To conclude the recommendations, more case studies should be performed to quantify influences of parameters already included in the model, such as the permeability of the dam, the presence of a tidal power facility and the use of a sudden caisson closure to relieve the final closure. Secondly, further validation of the storage model is essential to generate more reliable results. Furthermore, research should be performed into cost functions of several existing or new high capacity equipment for vertical closures, relating costs to construction capacity to improve usability of the optimization model.

In reservoir simulation the number of parameters is generally extremely high, so computation of this information is computationally expensive. Therefore, a multiscale framework is employed to improve the computational efficiency of the forward simulation. Multiscale methods are able to solve the model equations at a computationally efficient coarse scale and can easily interpolate this solution to the fine-scale resolution. Next, we use a Lagrangian set-up together with a multiscale framework to re-derive an efficient formulation for the derivative computation. However, as the multiscale method is prone to errors, this derivative computation formulation is recast in an iterative fashion, using a residual-based iterative multiscale method to provide control of these errors. In this thesis we show that this method generates accurate gradients. In contrast to the high accuracy of the method, it comprises a computationally heavy smoothing step. This issue can be resolved by making smart use of the Lagrange multipliers to re-derive an efficient iterative multiscale solution strategy. The multipliers are used to identify the important domains of the region for which smoothing is required and the regions for which the smoothing may be neglected. We show that the newly proposed iterative multiscale goal-oriented method is computationally more efficient, and that the method is promising for efficient derivative computation, but that more work is required to fully demonstrate its benefit.","multiscale; gradient; computation; lagrange; multipliers; optimization; goal-oriented; adjoint; iterative multiscale; Porous Media; Flow; reservoir simulation","en","master thesis","","","","","","","","","","","","Applied Mathematics","","" "uuid:19dd0340-faa2-416f-bc67-bbc9248a7154","http://resolver.tudelft.nl/uuid:19dd0340-faa2-416f-bc67-bbc9248a7154","Goal Oriented Optimization of Tailored Modes for Reduced Order Modelling: An alternative perspective on Large Eddy Simulation","Xavier, Donnatella Germaine (TU Delft Aerospace Engineering; TU Delft Aerodynamics, Wind Energy & Propulsion)","Hulshoff, Steven (mentor); Delft University of Technology (degree granting institution)","2018","This Master's thesis offers a new perspective on Large Eddy Simulation. The capability of a goal-oriented, model-constrained optimization technique to generate stable reduced-order models without any additional stabilization term or subgrid-scale modelling has been demonstrated.
The low dimensional projection modes sought by the optimization program comprise the dissipative scales implicitly, thereby ensuring energy balance and eliminating the need for an SGS model.","optimization; Lagrangian; large eddy simulation; variational multiscale; goal function","en","master thesis","","","","","","","","","","","","Aerospace Engineering | Aerodynamics and Wind Energy","","" "uuid:e129aa53-1cca-469e-bf09-80142d4b879c","http://resolver.tudelft.nl/uuid:e129aa53-1cca-469e-bf09-80142d4b879c","Multi-robot parcel sorting systems: Allocation and path finding","van den Heuvel, Bram (TU Delft Electrical Engineering, Mathematics and Computer Science)","van Iersel, Leo (mentor); Delft University of Technology (degree granting institution)","2018","The logistics industry is being modernized using information technology and robots. This change encompasses a new set of challenges in warehouses. Recently, some companies have started using robot fleets to sort products and parcels. This thesis studies those systems, and researches the combinatorial problems that arise within them. Three main optimization problems are identified: 1. Finding an optimal layout of the sorting system on the warehouse floor; 2. Allocating products or parcels to be sorted to robots; 3. Finding paths that all robots can follow concurrently, without colliding. These problems are considered one by one. The first problem is understood on an intuitive level, while the other two are considered more closely. For both problems, several algorithms are considered. Some utilize greedy heuristics while others model the problem at hand precisely using integer linear programming methods. The algorithm’s real world performance is then assessed using a simulation. Slow, ILP-based algorithms are found to produce optimal solutions for small instances. However, they don’t scale well, and are unable to solve large instances. 
Greedy approximation algorithms solve all problem instance sizes tested, but produce solutions of lower quality.","optimization; sorting; planning; allocation; path; collision; ILP; makespan; heuristic; greedy; disjoint; parcel; robot; hamiltonian; tree-width; dynamic; programming; multi; commodity; flow; conservation; a-star; rust; integrality; gap; benchmark; test","en","bachelor thesis","","","","","","","","","","","","Applied Mathematics","","" "uuid:da43fc88-c219-446b-999d-24cd0e830a93","http://resolver.tudelft.nl/uuid:da43fc88-c219-446b-999d-24cd0e830a93","Route Optimisation For Mobility-On-Demand Systems With Ride-Sharing","van der Zee, Menno (TU Delft Mechanical, Maritime and Materials Engineering; TU Delft Delft Center for Systems and Control)","Alonso Mora, Javier (mentor); Delft University of Technology (degree granting institution)","2018","Privately owned cars are an unsustainable mode of transportation, especially in cities. New Mobility on Demand (MoD) services should offer a convenient and sustainable alternative to privately owned cars. Notable in this field is the recent rise of ride-sharing services such as those offered by companies like Uber and Grab. Such services, especially when allowing multiple passengers to share a vehicle, could potentially be a valuable addition to existing modes of transport, offering fast and sustainable door-to-door transportation.

The optimisation of vehicle routes for a MoD fleet is a complex task, especially when allowing for multiple passengers to share a vehicle. Recent studies have presented algorithms that can optimise routes in real-time for large scale ride-sharing systems, but have left opportunities to further enhance fleet performance. The redistribution of idle vehicles towards areas of high demand and the utilisation of high capacity vehicles in a heterogeneous fleet have received little attention. This work presents a method to continuously redistribute idle vehicles towards areas of expected demand and an analysis of fleets with both buses and regular vehicles. Furthermore, a method is proposed to optimise vehicle routes while taking into account vehicle capacities and the future locations of vehicles in anticipation of predicted demand.
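Redistributing idle vehicles toward areas of expected demand can be posed as an assignment problem. The toy sketch below is illustrative only (the thesis's exact formulation is not given in the abstract): it brute-forces the minimum-total-distance matching, which at fleet scale would instead be solved as an ILP or with the Hungarian algorithm.

```python
import itertools
import math

# Toy redistribution step: assign each idle vehicle to one predicted-demand
# area so that the total travel distance is minimal. Coordinates are made up.

idle_vehicles = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]
demand_areas = [(1.0, 1.0), (8.0, 1.0), (5.0, 6.0)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def redistribute(vehicles, areas):
    """Minimum-total-distance one-to-one assignment of vehicles to areas."""
    best, best_cost = None, math.inf
    # Brute force over all matchings; fine for a toy instance.
    for perm in itertools.permutations(range(len(areas))):
        cost = sum(dist(vehicles[i], areas[perm[i]])
                   for i in range(len(vehicles)))
        if cost < best_cost:
            best, best_cost = perm, cost
    return {i: area for i, area in enumerate(best)}

assignment = redistribute(idle_vehicles, demand_areas)
```

Repeating this step whenever demand predictions update gives the continuous rebalancing behaviour described above.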

In simulations with historical taxi data of Manhattan, 99.8% of transportation requests can be served with a fleet of 3000 vehicles, with an average waiting time of 57.4 seconds and an average in-car delay of 13.7 seconds. Compared to earlier work, a decrease in walk-aways of 95% is obtained for 3000 vehicles, with an 86% decrease in average in-car delay and a 37% decrease in average waiting time. For a small fleet of 1000 buses of capacity 8, 84.6% of requests can still be served, with an average waiting time of 141 seconds and an average in-car delay of 269 seconds. In comparison to prior work, a decrease in walk-aways of 15% is obtained, with a 14% decrease in average in-car delay and a 2% decrease in average waiting time. A heterogeneous fleet of 1000 vehicles consisting of 500 buses and 500 regular vehicles using this new approach can serve approximately the same number of passengers as a homogeneous fleet of 1000 buses using earlier presented algorithms.","optimisation; routing; mobility-on-demand; ride-sharing; ride-sourcing; mobility; transport; optimization; Integer Linear Programming problem; ILP; Mixed integer linear programming; MILP","en","master thesis","","","","","","","","","","","","","","" "uuid:99d5ed9a-c706-4cb6-8caa-9dbc8c9822c9","http://resolver.tudelft.nl/uuid:99d5ed9a-c706-4cb6-8caa-9dbc8c9822c9","Searching for two optimal trajectories: A study on different approaches to global optimization of gravity-assist trajectories that have a backup departure opportunity","Perdeck, Matthias (TU Delft Aerospace Engineering)","Cowan, Kevin (mentor); Delft University of Technology (degree granting institution)","2018","In interplanetary space missions, it is convenient to have a second departure opportunity in case the first is missed. Two distinct approaches to minimizing the maximum of the two Delta-V budgets of such a trajectory pair are developed. The first (‘a priori’) approach optimizes the variables of both trajectories at once.
The second (‘a posteriori’) approach first minimizes the Delta-V budgets for a range of discrete departure epochs, and then selects the pair of which the highest Delta-V is minimum. Furthermore, five different pruning and biasing methods are developed; these prove critical for computational efficiency (number of objective function evaluations). Application to three different gravity-assist (and deep space maneuver) trajectories to Saturn reveals that the a priori approach is more computationally efficient on a trajectory with few variables (3), and that the a posteriori approach is more computationally efficient on a trajectory with many variables (22).","interplanetary; trajectory; optimization; optimisation; gravity-assist; space; flight; flyby","en","master thesis","","","","","","","","2023-06-07","","","","","","" "uuid:66eca2a7-321d-44cd-ba1d-9ad501f80177","http://resolver.tudelft.nl/uuid:66eca2a7-321d-44cd-ba1d-9ad501f80177","Evaluation and optimization of the control system of the Symphony Wave Power Device","Sfikas, Ilias (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Electrical Sustainable Energy)","Polinder, Henk (mentor); Dong, Jianning (graduation committee); Smets, Arno (graduation committee); Delft University of Technology (degree granting institution)","2018","Rising environmental concerns have stimulated the development of renewable energy, including energy from the oceans, which hold a huge potential. In this thesis, particular emphasis is given to wave energy, which can deliver up to 2 TW on a global scale. The aim of this thesis is to optimize the control system of the Symphony Wave Power Device, which is a point absorber, so that the energy delivered to the electrical grid is maximal and the device functions in a stable way. The device is analytically described in terms of its structural parts, its operating principle and all the forces that act on the moving part, which is called the floater.
The device is in fact a mass-spring-damper system, for which the spring constant needs to be tuned according to the period of the incoming waves so as to maximize the energy extraction. For this tuning, not only the actual mass of the floater but also the added equivalent mass due to the inertia of the inner turbine needs to be taken into account.

The whole device is modelled in a Matlab/Simulink programme, in which simulations can be performed to observe the motion and make certain calculations. The already existing PI controller, which makes use of an energy error, is briefly described, and the relevant calculations for the energy extraction are presented. The energy losses in the electrical parts also need to be taken into account.

To evaluate the current controller, it is necessary to calculate the upper boundary of the energy that the Symphony can obtain from a certain wave. This is done with the help of the GAMS software. The code, as well as the necessary assumptions and approximations, is presented in a mathematical way. The results, both in numerical and graphical form, provide good insight into what the ideal theoretical control system looks like.

Next, simulations are performed in the Matlab programme and compared with the GAMS results. The essential parts of the controller are tuned to their optimal values. It turns out that only a proportional part of the PI controller is needed and that the energy should not flow in two directions.

The results show that, with correct tuning of the proportional part as well as of the spring constant, the Symphony operates very well in all realistic sea states at the location where it will be placed. A high percentage of the theoretical energy boundary is extracted from the waves, and the motion of the floater is close to the optimal pattern. It is thus concluded that the existing controller performs remarkably well if regulated correctly. Finally, recommendations for future research on many levels are given.

This graduation study looks at optimization of the OWF installation procedure with a targeted completion date as a priority. In this thesis, an optimization approach is built around ECN in-house software developed for simulating various OWF installation strategies. Ultimately, the result of the dissertation is a method that provides added flexibility to simulate different OWF installation plans while still obtaining optimal installation costs. A concise literature review describes the significance of the current research and the potential of metaheuristic approaches for solving installation scheduling problems. The genetic algorithm is chosen as the optimization procedure for the current work. The objective of the optimization procedure throughout the research is minimizing the total installation cost. The target end date is implemented in the form of a constraint to steer the optimizer solution within the specified limit. A new methodology is proposed to generate automated planning for the different installation procedures, facilitating the link between the optimizer and the ECN tool. The project also considers uncertainty introduced by weather and describes the considerations made to account for it. The new approach shows the potential of introducing an optimization procedure in OWF installation logistics, ultimately assisting in lowering overall project costs.

In this report, new design water levels for Addicks and Barker Reservoir are calculated based on inflowing discharge into the reservoirs and precipitation directly onto the reservoirs, including data from Hurricane Harvey. These calculated design water levels are compared with the critical water levels derived from the failure mechanisms of the dams. This study shows that the original design water levels of the dams, based on the Probable Maximum Flood, are 2.83 m and 1.01 m higher than the critical water levels at which failure of the dams can occur due to piping, for Addicks and Barker Reservoir respectively. However, the maximum allowed water level currently maintained by the United States Army Corps of Engineers is 2.19 m and 2.46 m below the calculated critical water level. During Hurricane Harvey, these maximum allowed water levels were exceeded by 3.46 m and 1.93 m.

The damage to residential properties upstream and downstream of the reservoirs is minimized based on the distribution of excess volume from the inflow of creeks and precipitation onto the reservoirs. The ratio between the volume which should remain upstream of the dams and the volume discharged into the Buffalo Bayou is calculated for every considered event, with its duration and return period. The ratio of Addicks Reservoir is the dominant one and should be used for both reservoirs. Run-off alone already produces damage, especially for the 12h and 24h precipitation, so the Addicks and Barker Reservoirs should not release discharge into the Buffalo Bayou for short durations. For events with a longer duration, it would cause less damage to open the outlets of the reservoirs than to keep them closed. However, if the water level in the reservoir exceeds the critical water level for piping, it is advised to discharge more to the downstream area to prevent breaching of the dams. Since the critical water level is reached for approximately 25% of the events at Addicks Reservoir, mitigations against piping should be taken to improve the minimization of damage. For Barker Reservoir, the critical water level is not reached in the optimization. During large events, people living upstream will be more affected by the flooding than people living downstream, since this optimization is based on minimizing damage to residential properties.

The resulting setup was not able to find a solution for the complete trajectory. The trajectory was therefore split into three phases: take-off, acceleration and pull-up. The first two stages were optimized successfully and resulted in payload capacities similar to those found in the literature with traditional methods. The final pull-up stage needs to be investigated further. Although this research has shown that global optimizers can be used for ascent trajectory optimization, further research is required before the methods can be applied effectively.","space plane; optimization; trajectory","en","master thesis","","","","","","","","","","","","","","" "uuid:6bc3aacf-a97b-44bc-82f9-e7fc542852ad","http://resolver.tudelft.nl/uuid:6bc3aacf-a97b-44bc-82f9-e7fc542852ad","Energy-Optimized Toed Walking on Flexible Soles for Humanoids","van der Planken, Jonathan (TU Delft Mechanical, Maritime and Materials Engineering)","Vallery, Heike (mentor); Delft University of Technology (degree granting institution)","2017","In this research the role of thick flexible soles in energy-efficient humanoid walking is analyzed. It is hypothesized that the addition of underactuated degrees of freedom under the foot gives the robot the potential to execute a pseudo-passive walking motion, which yields a decrease in ankle torque and energy expenditure. Furthermore, it is hypothesized that if these principles are applied to toed gait walking patterns instead of flat foot walking patterns, the decreases will be larger in magnitude. To isolate the effects of adding a sole, a toe joint, and both at the same time, four walking types are compared in simulation: flat foot and toed gait walking, both with and without a sole. To assess the cases without a sole, energy-optimized walking pattern generation is used. For walking on soles, the optimized walking patterns are used as input for a deformation estimator that calculates the sole compression. Simulation results show that the rolling motion of the sole reduces the ankle torque and the energy consumption. The reduction effects are especially large for toed gait walking, thereby validating both hypotheses.","flexible; Sole; Energy; optimization; gait; generation; humanoid; deformation; estimation; HRP-4; pseudo-passive; passive; walking","en","master thesis","","","","","","","","","","","","","","" "uuid:e6ab0d7e-5e43-4297-9369-cfe07c623eeb","http://resolver.tudelft.nl/uuid:e6ab0d7e-5e43-4297-9369-cfe07c623eeb","Quantized Distributed Optimization Schemes: A monotone operator approach","Jonkman, Jake (TU Delft Electrical Engineering, Mathematics and Computer Science)","Heusdens, Richard (mentor); Sherson, Thom (mentor); Delft University of Technology (degree granting institution)","2017","Recently, the effects of quantization on the Primal-Dual Method of Multipliers were studied.

In this thesis, we have used this method as an example to further investigate the effects of quantization on distributed optimization schemes in a much broader sense. Using monotone operator theory, the effect of quantization on all distributed optimization algorithms that can be cast as monotone operators was researched for two different problem subclasses. The averaging problem was used as an example of a quadratic problem, while the Gaussian channel capacity problem served as an example of the non-linear problem subclass. A fixed bit rate quantizer was used in combination with a dynamic cell width to analyse the robustness of distributed optimization schemes against quantization effects. In particular, we have shown that for practical implementations it is possible to incorporate fixed bit rate quantization with dynamic cell width in a distributed optimization algorithm without loss of performance for both problem classes.

of all, the mesh should contain elements of good shapes and sizes. In addition, the sharp interfaces should coincide with the edges of the elements instead of intersecting with them. These requirements are formulated as an optimization problem with three terms, measuring the difference between the actual and prescribed scaling field, shape quality, and the area between prescribed curves and the nearest triangle edges. The solution of the optimization problem should provide the desired mesh. The mesh generator MESH2D was applied to obtain an initial mesh. The Matlab function minFunc was used to search for the minimum of the constructed objective function. Three weights balance the three terms in the objective function. When it comes to complicated models, these weights have to be chosen carefully to produce a reasonable mesh.","triangulation; optimization; seismic; finite-element","en","master thesis","","","","","","","","","","","","Applied Geophysics","","" "uuid:dabadd38-19f4-413e-b597-e8777a9bbb88","http://resolver.tudelft.nl/uuid:dabadd38-19f4-413e-b597-e8777a9bbb88","Using Topology Optimization for Actuator Placement within Motion Systems","Broxterman, Stefan (TU Delft Mechanical, Maritime and Materials Engineering)","Langelaar, Matthijs (mentor); Delft University of Technology (degree granting institution)","2017","Topology optimization is a strong approach for generating optimal designs which cannot be obtained using conventional optimization methods. Improving structural characteristics by changing the internal topology of a design domain has been fascinating scientists and engineers for years. Topology optimization can be described as a distribution of a given amount of material in a specified design domain, which is subjected to certain loading and boundary conditions. This domain can then be optimized to minimize specified objectives, for example compliance. For static problems, topology optimization is extensively used. 
The distribution of material into void and solid regions can be used to solve several problems within the mechanical domain. However, this method of optimization is also used to optimize structures with respect to their resonant dynamics.

Design of actuator placement is used to determine the optimal actuator layout for a given objective, for example reducing responses. Combined with topology optimization, both sets of design variables can influence each other and be optimized towards the desired behavior. This is first done in a static domain: when material is removed, the force layout is updated, which in turn influences the material distribution again. It is shown that combining these design variables in the optimization process leads to a better result; weight reduction can be achieved while large deformations are preserved.

Design of actuator placement, combined with topology optimization, is also implemented in a dynamic domain. Since topology changes result in frequency response changes, the force placement is more sensitive. On the other hand, forces can be placed in a smart way to ensure that some mode shapes are not excited, whereas others are. By enabling positive and negative forces, these forces can even be used to counteract or minimize certain modal responses. When implementing, for example, a harmonic excitation, the weight and total force can be linked together to ensure that accelerations are feasible. A weight reduction can thus lead to a force reduction, which in turn leads to smaller deformations. Especially in the high-precision industry, smart placement of actuators, combined with weight reduction, can be very helpful. The combination of these phenomena could provide new insight into creating accurate wafer stages.

Two models are compared: in Model 1 the system is free to accept or reject requests, while in Model 2 it is decided per zone whether or not all rides are accepted. The taxi service is first applied on a small scale, after which some adjustments are made so that the model can also be applied on a large scale. In the small-scale application a loss is always made, because the ratio of taxis per zone is very high. In the large-scale application, the model closely matches Liang's model when many taxis are used, but less so for smaller numbers of taxis, because rides are generated less homogeneously in Liang's model. The optimal number of taxis to use is always 20 or 40.","taxi; optimization; taxiservice; Autonomous Vehicles; Modelling","nl","bachelor thesis","","","","","","","","","","","","","","" "uuid:6b07b0c4-5534-42e4-9067-0dcb23fc0646","http://resolver.tudelft.nl/uuid:6b07b0c4-5534-42e4-9067-0dcb23fc0646","A Many-objective Tactical Stand Allocation: Stakeholder Trade-offs and Performance Planning: A London Heathrow Airport Case Study","Földes, Gergely István (TU Delft Aerospace Engineering)","Roling, Paul (mentor); Verhees, Martijn (graduation committee); Melkert, Joris (graduation committee); Curran, Richard (mentor); Delft University of Technology (degree granting institution)","2017","Airports are highly complex systems that can generate economic growth on their own. Accordingly, airports should take proactive actions to create a status quo between the stakeholders (the airport itself, airlines, passengers) in the tactical planning of the aircraft stand allocation. At present, the harmonization between the stakeholders’ interests is considered either reactively or not at all, so one cannot be certain that the objectives of the stakeholders are met. For that reason, a methodology is developed using Weight Space Search on a many-objective tactical stand allocation model to establish a reference performance set from which decision alternatives are created using the k-means clustering algorithm. Decision makers can then proactively assess and choose decision alternatives based on the performance of the tactical stand allocation to identify how the different stakeholders can achieve their goals in (partial) synergy. The airport can also apply the concept of empathetic negotiation to establish a favorable status quo.","airport; tactical stand allocation; planning; optimization","en","master thesis","","","","","","","","2019-06-23","","","","","","" "uuid:ef49b460-a456-433e-841a-acb793febc53","http://resolver.tudelft.nl/uuid:ef49b460-a456-433e-841a-acb793febc53","Optimization of the skidded load-out process","Verhoef, Nick (TU Delft Mechanical, Maritime and Materials Engineering)","Kaminski, Mirek (mentor); van Woerkom, Paul (graduation committee); Bos, Reinier (graduation committee); van Kester, Maurice (graduation committee); Delft University of Technology (degree granting institution)","2017","The structures which HMC installs offshore are fabricated onshore and subsequently moved onto a barge or ship, seafastened and then transported to the offshore location. The process of moving a structure from the onshore quayside to the barge or ship is called the load-out. This load-out can be performed by lifting, skidding or using a trailer (SPMT). This thesis research focused only on a skidded load-out onto a barge.

During the load-out the weight of the jacket or topside is gradually transferred from the quay to the barge. The barge gradually takes more of the load, so ballast water needs to be continuously pumped in or discharged depending on the location of the structure and the location of the ballast tank concerned. Improper ballasting during this process will disrupt the alignment between the quay and the barge, which in turn causes peak loads in the topside or jacket and the barge. The question is whether there are more suitable ballasting methods or a structural solution to lower these peak loads. The effects of quayside stiffness are also modelled, together with the best method to model this stiffness.

Therefore a 2-D representation of the entire load-out is made. This model is built as a numerical finite element model in MATLAB. A base case load-out of a topside is applied to this model, which is then used to research optimization of the ballast configuration. Several criteria for the optimization were tested, and their effects on the forces during the load-out were researched and quantified. The structural solution of relocating the skidbeams to an area of lower deck stiffness was also tested and the results studied. The effects of quayside stiffness and modelling methods were also quantified using the 2-D MATLAB model.

The conclusion derived from the optimizations is that there are other ballast configurations which perform better in reducing the peak forces experienced during the load-out. The key to these optimizations is that they keep the barge-quay alignment as close to perfect as possible. If a critical element is present in the load-out, the ballast configuration can be adjusted to lower the forces in this specific element. The results of the simulation in which the skidbeams were relocated show that this approach has no beneficial effect in reducing the forces during the load-out, mainly due to the presence of the transverse bulkheads in the barge. Furthermore, for the modelling of the quayside it was shown that, especially for a low-stiffness quayside, modelling the quayside without taking the foundation layer stiffness into account is inaccurate and can lead to lower forces in the model than occur in reality.","load-out; optimization; skidded; ballast","en","master thesis","","","","","","","","","","","","","","" "uuid:696e112f-697b-49e8-a524-c5efbe0663da","http://resolver.tudelft.nl/uuid:696e112f-697b-49e8-a524-c5efbe0663da","Support Structure Optimization: On the use of load estimations for time efficient optimization of monopile support structures of offshore wind turbines","Maljaars, J.L.","Langelaar, M. (mentor)","2017","Over the years, the installed capacity of offshore wind turbines has increased rapidly. However, the Levelized Cost of Energy (LCOE) is still higher than the LCOE of traditional energy production methods like nuclear power or energy from coal or gas. This research focuses on a further decrease of the LCOE by minimizing the mass of a monopile support structure of a wind turbine. This is done in a so-called integrated way: optimizing the tower and the foundation together. The design variables used in this research are the wall thickness and the diameter of every ±3 meter section. The sections can be either cylindrical or conical. 
To simplify the problem, a parametrization of the designs is used, which reduces the number of design variables from around 180 to 28; this is verified against existing designs. Due to the interaction between mainly the first eigenfrequency and eigenmode, the diameter, and the waves, several local optima are expected to exist. Therefore, the proposed optimization strategy is a Particle Swarm Optimization used for a global search that provides an initial position for a gradient-based optimization to find a local optimum, which is possibly the global optimum. In this research the focus is on the Particle Swarm Optimization. The constraints of the optimization are fatigue, buckling, the maximum deflection of the monopile, the angle of the conical parts and the D/t-ratio of the monopile. These are used in the initial design of support structures, so that the optimized designs are realistic. To take the constraints into account, the objective is taken as the mass extended by the penalized constraints. To reduce the optimization time, the evaluations of the objective function are done using load estimations instead of extensive load calculations. Several methods are compared on a theoretical basis: Response Surface Methodology, Radial Basis Functions, Kriging, Support Vector Regression, Multivariate Adaptive Regression Splines and Non-Uniform Rational B-Splines. The performance of a selection of methods is checked on the problem, to arrive at reliable estimation methods. To improve the accuracy of the estimations, interaction between Particle Swarm Optimization and the estimators is proposed via estimator updating. During this research, an optimization tool for monopile support structures is developed. This tool is able to use calculations or estimations of the loads. In order to study the behaviour of the proposed optimization approach and to compare it with the traditional design approach, several case studies are formulated based on a realistic design problem. 
These are optimized with the optimization tool. Using a constant tower diameter, the optimization tool is able to reduce the mass of the support structure by 13%. Using the tower diameter as a design variable in the optimization as well gives a further mass reduction of 4%. Several test runs are done to check whether a global optimum is found.","wind energy; wind turbine; offshore wind turbine; support structure; optimization; estimators; radial basis functions; kriging; support vector regression; nurbs; response surface methodology; estimator updating; integrated optimization","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Precision & Microsystems Engineering (PME)","","Engineering Mechanics","","" "uuid:642f1076-2f8a-4ad3-91eb-ea7b6c40f2df","http://resolver.tudelft.nl/uuid:642f1076-2f8a-4ad3-91eb-ea7b6c40f2df","Tractable Reserve Scheduling Formulations for Alternating Current Power Grids with Uncertain Generation","ter Haar, O.A.","Keviczky, T. (mentor); Rostampour Samarin, V. (mentor)","2017","The increasing penetration of wind power generation introduces uncertainty in the behaviour of electric power grids. This work is concerned with the problem of day-ahead reserve scheduling (RS) for power systems with high levels of wind power penetration, and proposes a novel set-up that incorporates an alternating current (AC) Optimal Power Flow (OPF) formulation. The OPF-RS problem is non-convex and in general hard to solve. Using a convex relaxation technique, we focus on systems with uncertain generation and formulate a chance-constrained optimization problem to determine the minimum cost of production and reserves. Following a randomization technique, we approximate the chance constraints and provide a-priori feasibility guarantees in a probabilistic sense. 
However, the resulting problem is computationally intractable, because the computation time grows polynomially with the size of the power network and the scheduling horizon. In this thesis, we first use the so-called scenario approach to approximate a convex set which almost surely contains the probability mass of the underlying random events. We rely on the special property of reserve scheduling problems that leads to constraint functions which are linear in the uncertain parameters. We can therefore formulate a robust problem for only the vertices of the approximated set. Using the proposed approach, the number of scenarios is reduced significantly, which is beneficial for tractability. Such a formulation requires the power network state to be feasible only for the vertices of the convex approximated set. To relax this requirement even further, we develop a novel RS formulation by considering the network state as a non-linear parametrization function of the uncertainty. By using a conic combination of matrices, only three positive semidefinite constraints per time step are considered. Unlike existing works in RS, our proposed parametrization has a practical meaning and is directly related to the distribution of reserve power. Such a reformulation yields a reduction in the computational complexity of OPF-RS problems. Finally, we extend our results to more realistically sized power grids, using the sparsity pattern and spatial (multi-area) decomposition of the power networks, leading to a decomposed semidefinite programming (SDP) problem. To solve the SDP in a distributed setting, we formulate a distributed consensus optimization problem, and the alternating direction method of multipliers (ADMM) algorithm is then employed to coordinate local OPF-RS problems between neighbouring areas. 
The theoretical developments in the aforementioned cases were validated on a realistic benchmark system, and the tractability of the resulting optimization problems is discussed by means of a computational time analysis.","power system; optimization; uncertainty; renewable energy; wind power generation; reserve scheduling; optimal power flow; reserve requirements; scenario approach; alternating direction method of multipliers; distributed solving; vertex enumeration; conic parametrization","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control (DCSC)","","","","" "uuid:faa2c6dc-e5ca-4486-b607-d963f650dad2","http://resolver.tudelft.nl/uuid:faa2c6dc-e5ca-4486-b607-d963f650dad2","Improved Flexible Runway Use Modeling: A Multi-Objective Optimization Concerning Pairwise RECAT-EU Separation Minima, Reduced Noise Annoyance and Fuel Consumption at London Heathrow","van der Meijden, S.A.","Roling, P.C. (mentor); Visser, H.G. (mentor)","2017","A minimization of disturbance caused by aircraft noise events and a reduction of fuel consumption during the initial and final phases of flight: these are the two objectives that play an important role in the Flexible Runway Allocation Model. By taking into account fuel consumption alongside noise annoyance, this model makes it possible to analyze and optimize runway allocation from a broader perspective. This study aims to identify the improvements that can be made with respect to the initial Flexible Runway Use Model. Accordingly, these enhancements should be implemented and quantified in order to establish the Improved Flexible Runway Allocation Model. The improvements found in this study relate to both objectives in the mixed integer linear programming optimization as well as to particular linear constraints. 
A major contribution is made to the runway occupancy constraint, which has moved from a single-aircraft computational method to a pairwise flight separation approach based on RECAT-EU. The proposed Improved Flexible Runway Allocation Model is applied to a case study that represents daily operations at London Heathrow Airport. This model shows that, by assigning a small delay to inbound and/or outbound flights, significant improvements can be made with respect to noise annoyance in the vicinity of the airport as well as to the overall fuel consumption from the airline’s perspective. By allowing opposite direction operations, flexibility is added to the use of the airport’s runway ends, which results in a more efficient utilization of the available capacity. The results of this analysis are visualized by means of a Pareto front, indicating the Pareto optimal solutions to a runway allocation assignment based on a differentiation in objective weights.","runway; allocation; capacity; MILP; Linear Programming; Heathrow; London; optimization; fuel; noise; noise annoyance; Pareto; RECAT-EU; separation minima; opposite direction operations; flexible; flexible runway allocation","en","master thesis","","","","","","","","","Aerospace Engineering","Control & Operations","","Air Transport & Operations (ATO)","","" "uuid:03af3d1b-98d8-4c14-99ff-a448b4f4b2d0","http://resolver.tudelft.nl/uuid:03af3d1b-98d8-4c14-99ff-a448b4f4b2d0","Models, Solutions and Relaxations of the Asymmetrical Capacitated Vehicle Routing Problem","Kerckhoffs, L.","Aardal, K.I. (mentor)","2017","In this thesis, we study the Asymmetrical Capacitated Vehicle Routing Problem (ACVRP). We examine different possible formulations for the problem and choose one based on ease of implementation, computation speed, and the available relaxations. The problem and its relaxations are modeled and solved using AIMMS, commercial modeling software. 
With the methods described above, we model different cases and instances of the problem using a Two-Index Vehicle Flow formulation. We apply an Assignment Problem relaxation and a Linear Programming relaxation to each of the instances. We find that the problem is easiest to solve when all customers are relatively close to each other (as opposed to being placed in separate clusters that are relatively far apart), and that the LP relaxation gives bounds of fairly good quality in short periods of time.","TSP; Vehicle Routing; VRP; ACVRP; optimization; online supermarket; relaxation","en","bachelor thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","Optimization","","" "uuid:69719e2d-5649-47da-a39e-e9107487eab1","http://resolver.tudelft.nl/uuid:69719e2d-5649-47da-a39e-e9107487eab1","Creating an optimal OR schedule regarding downstream resources","Carlier, M.","Van Essen, T. (mentor)","2017","A high percentage of hospital admissions is due to surgical interventions. The operating theatre, which holds the operating rooms (ORs), is therefore one of the key resources in hospitals. Managing the operating theatre and finding an optimal OR schedule is complex due to the many factors that influence it. Scheduling a surgery in an OR influences downstream facilities like the post anaesthesia care unit, intensive care unit and general patient wards. This research was conducted at Leiden University Medical Centre (LUMC), an academic teaching hospital in Leiden, the Netherlands. During the week, the LUMC experiences a large variation in bed occupancy at the patient wards due to the way surgeries are scheduled. This large variation in bed occupancy causes surgeries to be cancelled, because there are no beds available at the ward. Because the OR theatre is such an expensive resource, we want to find a schedule that utilises the OR optimally during opening times. 
In this research, we develop a clustering method to cluster surgical procedures into surgery groups based on surgery duration and the length of stay. Then, we extend a model that analytically determines the patient distributions over the wards and intensive care for a given OR schedule. We define a mixed integer programming model with the objective to maximise the OR utilisation and minimise the variation in bed occupancy at the wards and intensive care. The model produces an OR schedule with the defined surgery groups assigned to days in the OR. We use two different methods to solve the model: a global approach and a local search heuristic, i.e., simulated annealing. The model has one nonlinear constraint and a complex objective function. Therefore, we linearise the constraint and the objective function, which results in a mixed integer linear program that is proven to be 𝑁𝑃-hard. Both approaches are tested on a dataset provided by the LUMC. Furthermore, several scenarios are evaluated. We conclude that the mixed integer linear programming method performs better and faster than the simulated annealing procedure. To obtain an even better solution it is possible to use a combination of both. By using this method, the OR utilisation of the LUMC can improve by 11% and the variation in bed occupancy can be decreased by 80%.","master surgery schedule; Operating room scheduling; bed occupancy; mixed integer linear programming; simulated annealing; length of stay; optimization; hospital","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Applied Mathematics","","","","" "uuid:5321a5d4-ab40-4403-b09c-70c617abfc77","http://resolver.tudelft.nl/uuid:5321a5d4-ab40-4403-b09c-70c617abfc77","Optimization of Island Electricity System: Transition to a sustainable electricity supply system on islands through the implementation of a hybrid system including ocean energy technologies","van Velzen, L.","Blok, K. 
(mentor)","2017","Climate change without adequate countermeasures has become one of humanity's greatest threats. Energy production by means of renewable energy sources is therefore one of the crucial measures that will play a paramount role in reducing the pollutant emissions of fossil fuel dependency. Small islands in particular are an exemplary case of the extraordinary dependence on oil, the energy system often being entirely dependent on diesel generators. The relatively high cost of sustaining this practice, in combination with the geoeconomic properties of islands, provides a unique incentive for the transition to renewable energy. By definition, islands are surrounded by water, making them highly vulnerable to the effects of climate change. While being surrounded by water poses risks, it also provides a vast set of possibilities: harnessing energy from waves, tides and the difference in seawater temperatures (OTEC) are just some of the examples. In this thesis, the effect of ocean energy integration is investigated. A simulation and optimization model of the electricity supply system is developed. A multi-objective genetic algorithm optimization regarding cost (LCOE) and renewable energy integration is performed. The model covers PV solar, wind, tidal, wave and OTEC, as well as battery storage, as components of a renewable energy system. The resulting model is applied to two case study islands (Shetland and Aruba), and the effect of the hybrid system including ocean energy technologies is determined. The cost-optimal system was found to produce energy with an LCOE below the conventional fossil fuel energy cost. This corresponds to a renewable energy share of approximately 65%, consisting solely of wind energy. The cost was determined to have a significant influence on the system configuration. 
Currently, due to the high cost of energy resulting from their pre-commercial stage, ocean energy sources are only added to the energy mix at high renewable energy shares (above 75% renewable coverage). The hybrid systems including the ocean energy sources displayed an evenly spread energy production. Based on this study, the future of integrating ocean energy provides an encouraging outlook. Cost will need to be reduced further for ocean energy to become economically viable. With the right investments in ocean energy, this process can be accelerated and ocean energy will become viable.","Ocean energy; renewable energy; electricity system; optimization; simulation","en","master thesis","","","","","","","","","Technology, Policy and Management","Engineering, Systems and Services","","","","" "uuid:1f228e88-c7e7-431d-96af-df1abb195edd","http://resolver.tudelft.nl/uuid:1f228e88-c7e7-431d-96af-df1abb195edd","Maintenance Optimization of Tidal Energy Arrays: Design of a Probabilistic Decision Support Tool for Optimizing the Maintenance Policy","De Nie, R.C.","Wolfert, A.R.M. (mentor); Jarquin Laguna, A. (mentor); Leontaris, G. (mentor); Hoogendoorn, C.F.D. (mentor)","2016","The increasing demand for electricity offers many opportunities for renewable energy production, of which one alternative is tidal stream energy. Several feasibility studies have shown that the global tidal stream energy potential can contribute significantly to producing renewable energy. This tidal energy can mostly be produced at the 'tidal hotspots', where the kinetic energy density is very high due to fast flowing tidal currents. However, tidal technology is not yet cost-competitive in comparison with other renewables, such as photovoltaic and wind energy, which is why further cost reductions and efficiency improvements are to be achieved. Interviews with existing tidal system developers provided insight into the cost breakdown and showed that maintenance accounts for a significant share of the total project costs. 
This is due to the harsh environmental conditions that impose a large uncertainty, which increases the complexity of selecting an optimal maintenance policy. Damen Shipyards has shown interest in entering the tidal industry and is exploring the cost reduction possibilities by developing its own tidal system. This thesis contributes to Damen Shipyards' research by performing a time series analysis of a tidal hotspot to identify and model the multivariate dependence of the governing environmental phenomena. A probabilistic decision support tool is developed for selecting the optimal maintenance policy. The decision support tool primarily determines when and to what extent corrective maintenance should be performed. The corresponding overall maintenance costs are also calculated, and secondary information regarding the activity duration is given. By means of the probabilistic approach, which captures the weather window uncertainty due to the environmental randomness, the results can be interpreted by the user based on the desired confidence level. In this research the weather window uncertainty is implemented by simulating a large number of random, but statistically identical, environmental time series, which are based on available measurement data of the tidal field at EMEC, located at the Orkney Islands in the United Kingdom. The multivariate dependence between the significant wave height, wave peak period, wind velocity and current velocity is identified in the measurement set and fully represented in the generated time series by means of a pair-copula construction simulation. The requirement of time independence cannot be met in the original dataset, which is why a new simulation approach is developed. This method consists of a sequential simulation of pair-copula constructions to include both the time dependence and multivariate dependence in the synthetic time series. 
Simulation of the set of synthetic time series proved to be more effective for describing uncertainty than exclusively using the original dataset, due to the possibility of including more environmental realizations. The tidal array is represented as a semi-Markov decision process, which captures all costs and transition processes related to the deterioration and maintenance decisions. A policy optimization algorithm can then be used to find the optimal set of decisions and the corresponding maintenance cost rate, which includes both the direct and indirect maintenance costs. The novel tidal system design of Damen Shipyards is then plugged into the decision support tool to determine the optimal maintenance policy and maintenance costs. The effect of different levels of detail for representing the tidal system has been compared, and the benefits in terms of cost reductions of using this decision support tool with respect to less advanced approaches have been highlighted. Furthermore, multiple scenarios have been elaborated to identify the sensitivities in the cases of accounting for unreliability in the failure rates, varying the number of platforms in the array and including the economic fluctuations of the maintenance vessel day rates.","probabilistic; tidal energy; maintenance policy; optimization; semi-Markov decision process; copula; multivariate dependence; decision support tool","en","master thesis","","","","","","","","2021-12-16","Mechanical, Maritime and Materials Engineering","Offshore & Dredging Engineering","","","","" "uuid:b10a0d00-3949-4122-a3db-6996d5596afb","http://resolver.tudelft.nl/uuid:b10a0d00-3949-4122-a3db-6996d5596afb","Supporting MDO through dynamic workflow (re)generation","Augustinus, R.","Hoogreef, M.F.M. (mentor)","2016","The use of advancements in computing technology has enabled designers to perform more thorough and more detailed design studies. 
Multidisciplinary Design Optimization (MDO) architectures provide users with guidelines on how to structure their MDO problem, including the linking of disciplines and how to perform the optimization. However, complex MDO problems can consist of tens of disciplines and hundreds of design variables. Thus, the set-up of these problems can be complex and time-consuming. In an attempt to reduce the time and complexity of this set-up, the main goal in this thesis is: ""To develop and demonstrate a methodology for automatic workflow (re)generation to support MDO"". The method to fulfill these requirements consists of three main steps. The first is the automatic generation of microworkflows, workflows representing the different disciplines of the problem. The user only needs to specify the inputs, outputs and operations, after which the workflows are automatically generated. The second step involves the automatic storage of workflows. Workflows are stored in a graph database, allowing the addition of semantics to the data. Adding semantics allows a reasoner to understand what the data means, enabling the inference of data not explicitly defined. OWL (Web Ontology Language) ontologies are used to supply structure to the workflow data and add semantics. In addition, materialization scripts are present to regenerate stored workflows. The final step of the implementation involves the automatic generation of simulation workflows according to different MDO architectures. This generation involves the materialization and adjustment of microworkflows and the creation of a ‘higher level’ workflow that links the disciplines and performs the optimization. The implementation of the automatic architecture generation has been validated using three case studies of varying complexity, number of disciplines and discipline couplings. 
These case studies have shown a reduction of 93 to 98% in the time spent on the generation of simulation workflows representing the problem using an MDO architecture. In addition, the approach reduces the required user expertise and minimizes the amount of information the user needs to provide.","automation; MDO; MDO architectures; simulation workflows; optimization; PIDO","en","master thesis","","","","","","","","","Aerospace Engineering","Flight Performance and Propulsion","","","","" "uuid:87dc296d-57f4-4506-829b-2c1d33982e15","http://resolver.tudelft.nl/uuid:87dc296d-57f4-4506-829b-2c1d33982e15","Fast MPC Solvers for Systems with Hard Real-Time Constraints","Zhang, X.","Keviczky, T. (mentor); Ferranti, L. (mentor)","2016","Model predictive control (MPC) is an advanced control technique that offers an elegant framework to solve a wide range of control problems (regulation, tracking, supervision, etc.) and handle constraints on the plant. The control objectives and constraints are usually formulated as an optimization problem that the MPC controller has to solve (either offline or online) to return the control command for the plant. This master thesis proposes a novel primal-dual interior-point (PDIP) method for solving quadratic programming problems with linear inequality constraints that typically arise from MPC applications. Convergence of PDIP is studied in both the primal and dual framework. We show that the solver converges quadratically to a suboptimal solution of the MPC problem. PDIP solvers rely on two phases: the damped and the pure Newton phases. Compared to the state-of-the-art PDIP method, this new solver replaces the initial (linearly convergent) damped Newton phase (usually used to compute a medium-accuracy solution) with a dual solver based on Nesterov's fast gradient scheme (DFG) that converges super-linearly to a medium-accuracy solution. 
The switching strategy to the pure Newton phase, compared to the state of the art, is computed in the dual space to exploit the dual information provided by the DFG in the first phase. Removing the damped Newton phase has the additional advantage that this solver saves the computational effort required by backtracking line search. The effectiveness of the proposed solver is demonstrated by simulating it on a 2-dimensional discrete-time unstable system.","optimization; predictive control; model-based control; suboptimal control","en","master thesis","","","","","","","","2016-12-02","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control (DCSC)","","","","" "uuid:5849d327-fa7a-4591-a468-0368b2713374","http://resolver.tudelft.nl/uuid:5849d327-fa7a-4591-a468-0368b2713374","Shading design workflow for architectural designers","López Ponce de Leon, L.E.","Turrin, M. (mentor); Van den Ham, E.R. (mentor)","2016","","building technology; computational design; climate design; optimization; virtual reality; workflow","en","master thesis","","","","","","","","2016-11-04","Architecture and The Built Environment","Building Technology","","","","" "uuid:d059fea6-2861-49b4-ae36-5d31db109231","http://resolver.tudelft.nl/uuid:d059fea6-2861-49b4-ae36-5d31db109231","Density Tapering for Sparse Planar Spiral Antenna Arrays","Keijsers, J.G.M.","Yarovyi, O. (mentor)","2016","Increasing demands for mobile internet access have led to exponential developments in mobile communications technologies. The next-generation mobile technology is expected to exploit electronic beam steering and to have a higher operating frequency to facilitate a higher bandwidth. This places a heavy burden on the base station antenna arrays, which should be sparse to accommodate passive cooling of the system. Conventional sparse array topologies suffer from undesirable radiation pattern characteristics such as grating lobes. 
Therefore, this work focused on exploring methods to synthesize the antenna elements' geometrical parameters to enhance the radiation pattern and to explore the limitations that arise due to the array's sparseness. To this end, both a deterministic and a stochastic method were proposed. Starting with an analytical window function as a continuous current distribution and approximating this by adjusting the antenna elements' radial coordinates shows that the desired window's radiation pattern is approximated only in a limited field of view, depending on the sparseness. Full electromagnetic wave simulations are performed to show that downscaling the topology to make it more dense gives rise to increased coupling effects that deteriorate the array's performance. In addition to the deterministic method, a genetic algorithm optimization method is employed to stochastically obtain the optimal current distribution window. Approximating the optimal continuous current distribution again leads to the array factor following the optimal window's radiation pattern in a limited field of view. Furthermore, it is shown that, for the conditions used in this work, the optimum continuous current distribution is also the optimum current distribution for arrays with a finite number of elements, implying that only one optimization needs to be executed when designing such an array. In conclusion, the applicability of density tapering to sparse arrays is limited. The inherent undersampling causes a limited realization of the window function's characteristics. Density tapering does improve the absolute performance of a sparse array in terms of peak sidelobe level, but may still be useful if the region of interest is concentrated near the main beam. 
The requirements, and in particular the region of interest, of the application determine whether density tapering can be effectively employed.","antenna array; sparse array; density tapering; space tapering; optimization; genetic algorithm; feko; planar; spiral; sunflower; mutual coupling","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics","","Microwave Sensing, Signals & Systems / track: Telecommunications & Sensing Systems","","" "uuid:e9b513c8-751b-45a7-9d99-a51177c918a2","http://resolver.tudelft.nl/uuid:e9b513c8-751b-45a7-9d99-a51177c918a2","Optimizing truck driver schedules with dependent working shifts, drivers' legislation, and multiple time windows","Van Alphen, M.N.","van Essen, J.T. (mentor); Aardal, K.I. (mentor); Haneyah, S. (mentor)","2016","In logistics, minimizing the resource expenses can be done by minimizing the total number of hours that each truck driver has to work. This sum of working hours is referred to as the total schedule duration for all truck drivers. Minimizing this total schedule duration is the main goal in the optimization problem that we consider. The considered minimization problem is called the Total Schedule Duration with Dependent resource, Multiple Time Windows and European drivers’ legislation problem, i.e., the TSDDMTW-EU problem. A literature study is given on this TSDDMTW-EU problem. Different solution approaches and Mixed Integer Linear Programs (MILPs) within the scope of our project are discussed. We compose a model for the TSDDMTW-EU problem by giving a MILP, based on a model by Kopfer and Meyer (2008). Two different modeling approaches are suggested and assessed on their performance. Furthermore, we prove that the TSDDMTW-EU problem is NP-hard, and to conclude, a heuristic is evaluated with respect to the objective values it attains. Our main research contributions are threefold. 
First, a MILP is given for the complete European drivers’ legislation. All extensions in the legislation regarding a single truck driver are included. Second, knowledge is gained on the influence of dependent truck drivers on a Total Schedule Duration problem. Finally, we prove that adding the complete European drivers’ legislation to a problem results in an NP-hard problem.","NP-hard; MILP; optimization; European drivers' legislation; dependent resources; schedule duration; multiple time windows; heuristic; complexity","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Applied Mathematics","","","","" "uuid:c7baa01f-eb37-4bf1-aceb-3f58c575bdd1","http://resolver.tudelft.nl/uuid:c7baa01f-eb37-4bf1-aceb-3f58c575bdd1","Parallel Approach to Derivative-Free Optimization: Implementing the DONE Algorithm on a GPU","Munnix, J.H.T.","Verhaegen, M. (mentor)","2016","Researchers at Delft University of Technology have recently developed an algorithm for optimizing noisy, expensive and possibly nonconvex objective functions for which no derivatives are available. The data-based online nonlinear extremum-seeker (DONE) was originally developed for sensorless wavefront aberration correction in optical coherence tomography (OCT) and optical beam forming network (OBFN) tuning. In order to make the DONE algorithm suitable for large-scale problems, a parallel implementation using a graphics processing unit (GPU) is considered. This master thesis aims to develop such a parallel implementation which performs faster than the existing sequential implementation without much change in obtained accuracy. Since OBFN tuning is a problem that may involve a large number of parameters, an OBFN simulation is to be used to compare the parallel implementation to the sequential implementation. 
The core of the DONE algorithm is solving a regularized linear least-squares problem in order to construct a smooth and low-cost surrogate function which does provide derivatives and can be optimized fairly easily. This master thesis first discusses the basics of parallel computing, after which several linear least-squares methods and several numerical optimization methods are investigated. These methods are compared, and the most suitable methods for parallel computing are implemented and tested for increasing dimensions. The final parallel DONE implementation combines the recursive least-squares (RLS) method with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and optimizes the large-scale OBFN simulation almost twice as fast as the sequential DONE implementation, without much change in obtained accuracy.","derivative-free; optimization; numerical; algorithm; linear; least-squares; random; fourier; expansion; rfe; data-based; online; nonlinear; extremum-seeker; done; parallel; parallelization; graphics; processing; unit; gpu; compute; unified; device; architecture; cuda","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control (DCSC)","","","","" "uuid:4cf2c369-b7b9-4db8-99da-9ab5b436f64e","http://resolver.tudelft.nl/uuid:4cf2c369-b7b9-4db8-99da-9ab5b436f64e","Solving the open day timetabling problem using integer linear programming","Gecmen, D.","Van den Berg, P.L. (mentor)","2016","Timetabling problems form a large class of problems within the mathematical field of scheduling. A widely used method to solve timetabling problems is integer linear programming, which we have used to solve the open day timetabling problem. In this thesis we have addressed a timetabling problem for the open day at the Christian College Groevenbeek. This open day consists of two separate day parts: the morning and the afternoon. For both day parts we have received a data set. 
These data sets consist of students and their preferred studies. To solve the problem we created several mathematical models formulated as an ILP. After that, we implemented these models in AIMMS to solve them for the data sets we received. The first model we created was the feasibility model. We used this model to determine the appropriate number of lecture halls and lecture hall capacities when we have 4 rounds on both day parts. To achieve 4 rounds on both day parts, we found that the best combination is 19 lecture halls with a capacity of 30 and 1 lecture hall with a capacity of 40. In addition, to obtain a good-quality schedule, two objective functions were considered. The first objective was to minimize the number of presentations. The second objective was to minimize the total workload. The workload of a teacher is the total time a teacher is present at the college, which consists of the number of presentations he has to give and the number of gaps he has in his schedule. A gap is a round in which a teacher is not scheduled to give a presentation, but has to be present. We added each objective and its corresponding constraints to the feasibility model. After applying both models to the data sets, we concluded that the second objective resulted in a better schedule, as it achieves the theoretical minimum number of presentations and creates zero gaps in the schedules for both day parts. When we combined the schedules for both day parts, this did not result in a good schedule, as some studies still contained gaps between the two day parts. 
To improve this, we minimized the total workload for both data sets combined.","scheduling problem; integer linear programming; optimization","en","bachelor thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Applied Mathematics","","","","" "uuid:92868b2d-87c4-4e0b-9761-698dd54f02f9","http://resolver.tudelft.nl/uuid:92868b2d-87c4-4e0b-9761-698dd54f02f9","Cruise Performance Optimization of the Airbus A320 through Flap Morphing","Orlita, M.","Vos, R. (mentor)","2016","In an era of increasing aviation traffic, the conditions are right to promote the design of ambitious concepts. At Fokker Aerostructures attention is drawn to smooth in-flight shape morphing to produce a structurally functional Variable Camber Trailing Edge Flap (VCTEF). The deployment mechanism would fit into the flap, not limiting other functionality such as Fowler motion, while at the same time allowing small camber variations during cruise. This is based on the assumption that such morphing will bring performance improvements which are commercially interesting. The main goal of this research was therefore to predict these performance benefits and thus the applicability for a specific case of the Airbus A320 aircraft in cruise flight. This aircraft is large enough to accommodate the technology, it is operated in great numbers and cruise is the most fuel-demanding part of its mission. Since the concept is in the development phase, a further task is to determine the morphing design setup which performs best. The amount of morphing is driven by a circular reference function, which is added to the base geometry at any desired streamwise cut of the wing by manipulation of the airfoil coordinates as seen on the cover. The design is specified by the points on the airfoil upper surface where the morphing begins and ends, boundaries of the morphing region where upper surface bending is allowed. 
As also found in other literature, it is shown that morphing can bring drag reduction for a section, a wing and the complete aircraft. This reduction varies throughout the cruise, which is translated into more sophisticated performance indicators for comparison and evaluation of the benefits. The first indicator is the increase in range over the design mission for the given aircraft. The second and third are the fuel savings, which can be obtained either by increasing the cruise end weight or by decreasing the cruise beginning weight, both by the amount of the saved fuel while keeping the aircraft range constant. In order to evaluate these indicators, the Breguet range equation is used in a discretized form, utilizing an interpolated lift-to-drag ratio determined by aerodynamic analysis at 7 cruise points. This was done using both the 2D solver MSES and the quasi-3D tool Q3D developed at TU Delft, comprising MSES and the AVL vortex lattice solver. For the analysis a complete A320 model is required, which was not available and was created from the known performance data and partially assumed geometry. The unknown wing geometry was optimized with respect to the mid-cruise drag, simulating an already efficient aircraft, as suggested by literature. Other model components were the horizontal stabilizer, fuselage and center of gravity position, allowing trim at the reference cruise points and obtaining the lift requirements for the wing and a representative section. Under these lift requirements the 2D and 3D analyses were performed at individual cruise points to obtain improved lift-to-drag ratios which could then be used to evaluate the range improvement. It was found that with morphing in 2D the drag reduction can amount to up to 9% at the beginning of cruise, but parabolically decreases towards mid-cruise, after which it remains below 0.5%. 
This is primarily due to manipulation of the shockwave and the boundary layer at the given lift requirements, which is most dominant at high cruise lift coefficients. Since the induced drag was found to be unaffected by the assumed morphing, such improvements are further scaled down when evaluated for the entire wing, and even further from the aircraft point of view, resulting in a range improvement in the order of 20 km and fuel savings of below 0.5% of trip fuel. A sensitivity analysis on the design variables has shown that these performance benefits have small sensitivity to the size of the morphing region and that very aft-located regions are the most beneficial, suggesting that a small tab at the trailing edge might be a better and easier solution. In view of these results the smooth morphing concept is deemed not applicable for the cruise of short range aircraft such as the A320. However, given the parabolic behaviour of the drag improvements, larger potential can be expected for long range aircraft, which is the main resulting recommendation of the conducted research. Furthermore, it cannot be excluded that other regimes, such as high-lift, could benefit more from the morphing concept, which would probably require wind-tunnel testing, as discussed in the final Appendix of this work.","morphing; camber; transonic; drag; optimization; cruise","en","master thesis","","","","","","","","","Aerospace Engineering","Aerodynamics, Wind Energy & Propulsion","","Flight Performance and Propulsion","","" "uuid:eb4a8dd4-e024-48d7-9784-4bbecbebe1f1","http://resolver.tudelft.nl/uuid:eb4a8dd4-e024-48d7-9784-4bbecbebe1f1","The Heston model with Term Structure: Option Pricing and Calibration","van der Zwaard, T.","Oosterlee, C.W. (mentor); du Toit, J. (mentor)","2016","This thesis addresses the calibration of the Heston model with term structure (i.e. with piecewise constant parameters) to a set of European option prices from the FX market. 
Several option pricing methods are discussed and compared, among which are the COS method, Lewis' method and the Andersen QE Monte Carlo scheme. Several modifications are proposed in order to improve the practical usability of the COS method in terms of speed, accuracy and robustness. The calibration of the Heston model with term structure is chosen as a benchmarking test case for comparing several optimization techniques, both open-source and from licensed products. The performance of the optimizers is measured in terms of the speed of the calibration. In addition, a simple hedge test using the calibrated model is used as a secondary performance metric. The combined effort of finding the fastest optimization techniques and the fastest pricing method has the potential of speeding up the daily FX calibrations performed in many financial institutions.","option pricing; foreign exchange (FX) market; COS method; Heston model with term structure; calibration; optimization; benchmarking; hedging","en","master thesis","","","","","","","","2021-08-19","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","Numerical Analysis","","" "uuid:698e7ac8-23ac-4c13-831b-f9125838ff1c","http://resolver.tudelft.nl/uuid:698e7ac8-23ac-4c13-831b-f9125838ff1c","Multiple-Phase Trajectory Optimization for Formation Flight in Civil Aviation","van Hellenberg Hubar, M.E.G.","Visser, H.G. (mentor)","2016","A tool is developed that is able to optimize the trajectories of multiple aircraft that fly in formation in order to obtain the minimum total fuel consumption. Several experiments are conducted to investigate the benefits of formation flight for commercial aircraft. 
Finally, the influence of wind and delay on the trajectories of the aircraft that join the formation is also examined.","formation flight; civil aviation; GPOPS; multiple-phases; multiple aircraft; optimization; trajectory optimization; minimum fuel burn; minimum Direct Operating costs; multiple-phase trajectory optimization","en","master thesis","","","","","","","","","Aerospace Engineering","Control and Operations","","Aerospace Transport & Operations","","" "uuid:446a43d3-f729-4392-b72d-8f68737e5a64","http://resolver.tudelft.nl/uuid:446a43d3-f729-4392-b72d-8f68737e5a64","A Computational Intelligence Approach to Voltage and Power Control in HV-MTDC Grids","Agbemuko, A.J.","van der Meijden, M.A.M.M. (mentor); Popov, M. (mentor); Ndreko, M. (mentor)","2016","An ever-increasing interest in renewable energy sources (RES), driven by the need to reduce global carbon emissions and alleviate the problems posed by climate change, has led to a dramatic increase particularly in offshore wind energy projects. Several different projects are currently being executed and many more are planned for the future. Wind power plants are increasingly located farther from shore, bringing both challenges and opportunities. With the increased distance to shore, HVAC (High Voltage Alternating Current) transmission is becoming infeasible as a result of the increase in cable charging currents with distance. As such, HVDC (High Voltage Direct Current) transmission is becoming the only alternative to transmit power from far offshore wind power plants to shore. There are two ways of doing this: point-to-point connection and multi-terminal connection. Multi-terminal connection offers a dramatic improvement in flexibility, security and reliability of power supply and, with current advancements in power electronic control, offers great potential. Thus, a multi-terminal grid connection is the subject of this thesis. 
The most important issue for multi-terminal grids has been their controllability. The two most important controllable parameters in multi-terminal grids are voltage and power, with voltage being the more important as IGBT (insulated gate bipolar transistor) switches are still very sensitive devices. Besides, controlling voltage entails controlling power, as they are dependent on each other. This thesis proposes a new computational-based control philosophy for the direct voltage and active power control of a VSC (Voltage Source Converter) based multi-terminal offshore DC grid. The limitations of the classical control strategies for HV-MTDC (High Voltage Multi-terminal Direct Current) grids were studied, in particular the direct voltage droop control strategy. The main drawbacks of classical voltage droop control are its difficulty in reaching power reference set-points and its failure to ensure a minimum loss profile in the event of contingencies. The proposed strategy is capable of addressing these weaknesses by combining the advantages of the droop controller, such as robustness and an exceptional ability to compensate for imbalance during contingencies, with the advantages of the constant active power controller, which has the ability to reach power set-points easily. Thus, the direct voltage droop control strategy and constant active power control strategy were combined, simultaneously solving the drawbacks of each. The advantages of the new fuzzy controller are the reduced computational effort, the high degree of flexibility, the limited influence of the topology or the size of the grid, and the near-zero percentage error. The control strategy is demonstrated by means of time domain simulations for a three-terminal VSC-based offshore HVDC grid system used for the grid connection of large offshore wind power plants. 
Furthermore, a high-level controller in the form of an optimal dispatcher was implemented using a genetic algorithm (GA) optimizer to form a complete hierarchical control system: the fuzzy-based strategy is used at the local layer and an optimizer/scheduler at the upper layer. The optimizer optimizes for losses and provides optimal reference set-points to the fuzzy-based controllers. The optimizer also regularly checks the available wind power and, when it changes, defines new set-points. It uses the new information on generated wind power to recalculate set-points and return all fixed-power terminals to their pre-disturbance levels. However, results of the GA optimizer in comparison with the traditional Newton-Raphson method do not show the considerable improvement in reducing losses that was expected, and this was confirmed by similar works reported in the literature. Finally, simulation results are presented to demonstrate the capabilities of the proposed control strategy in meeting all the design objectives: no deviation in power or voltage from the references, and no influence of topology, configuration, or size. Hence, there will be no need for a secondary corrective action to alleviate deviations.","HVDC; VSC; MTDC; knowledge-based control; fuzzy control; genetic algorithm; transmission system; offshore grids; HV-MTDC; hierarchical control; optimization","en","master thesis","","","","","","","","2020-07-11","Electrical Engineering, Mathematics and Computer Science","Electrical Sustainable Energy (ESE)","","Intelligent Electrical Power Grids","","" "uuid:cca2c63c-29c6-4e1c-88f5-f2600bcedbce","http://resolver.tudelft.nl/uuid:cca2c63c-29c6-4e1c-88f5-f2600bcedbce","Model-based optimization of drilling fluid density and viscosity","Roijmans, R.F.H.","Jansen, J.D. (mentor)","2016","Optimization of drilling fluid properties is an essential part of cost-effective drilling operations and process safety. 
Currently, fluid properties are measured and optimized manually by human engineers with different skills and experience, which might lead to non-optimal drilling fluid properties that deteriorate their functionality. Automated drilling fluid management is still at an early development stage. Several vendors are actively developing automated skids to measure drilling fluid properties in real time [1] [2], and several authors have also published scientific work on the use of real-time measurement as a component of automated control systems that dose mud additives automatically to meet the mud specifications or setpoints defined by human engineers [3] [4]. During the well planning stage, the design process of mud specifications is carried out by engineers checking several scenarios using well planning software and their experience to come up with drilling fluid specifications. When hole cleaning and/or borehole stability conditions change during the actual drilling process in ways that warrant updates or changes of drilling fluid properties, the specifications are updated in an ad-hoc manner, relying on the skills of human engineers. This thesis focuses on the development of a model-based optimization module for drilling fluid properties to help engineers in the planning and drilling phase to automatically derive drilling fluid specifications that meet the hole cleaning criteria, and satisfy the downhole pressure requirement and constraints set on the operating ranges of drilling parameters. The optimization framework will use proxy models derived from well hydraulics software that predicts cuttings concentration and downhole pressure as a function of the drilling fluid properties. Three objective functions for the optimization module are given as examples in this thesis. The first two objective functions deal with the hole cleaning criteria, while the last one is a cost function that combines the cost of hole cleaning and downhole pressure management. 
The optimization module has been tested on a case study based on real field data. Given an objective function, multiple constraints, and proxy models, the module takes only a few seconds to find the optimum mud property values and drilling parameters such as flow rates, rotary speed and rate of penetration. A benchmark with the field data shows that the optimum drilling fluid properties and parameters result in significant improvement of the hole cleaning state while the downhole pressure requirement and constraints on the drilling parameters can still be satisfied. When a cost function is defined as a combination of hole cleaning and downhole pressure management, the module also gives a quantified benefit of the trade-off between maximizing hole cleaning and minimizing losses. Since this module can perform optimization very efficiently compared to the ad-hoc processes done by human engineers, it may be of significant value for operating units to use in the planning and drilling phase, and also in the future as an outer optimization loop for automatic drilling fluid control systems.","drilling fluid; automation; optimization; density; viscosity","en","master thesis","","","","","","","","2017-01-06","Civil Engineering and Geosciences","Geoscience & Engineering","","Petroleum Engineering","","" "uuid:6fe5df50-ff93-4624-a50c-37fb6331eedf","http://resolver.tudelft.nl/uuid:6fe5df50-ff93-4624-a50c-37fb6331eedf","Tailored SID & Profile Allocation for Amsterdam Airport Schiphol","Ceulemans, B.","Visser, H.G. (mentor); Roling, P.C. (mentor)","2016","Currently, only one Standard Instrument Departure (SID) track and one flight procedure are used per runway departure fix combination. In contrast to tailored arrivals, the potential benefit of tailored departures has been left relatively unexplored. 
The research objective is to quantify the potential benefit of tailored SIDs and profile allocation for Amsterdam Airport Schiphol by developing a model that is capable of simulating departure trajectories per runway departure fix combination and optimizing the overall allocation of departing aircraft for noise and fuel consumption. The proposed methodology comprises a two-step modelling framework. The two models involve the design of novel tailored departure trajectories using a multi-objective genetic algorithm and the computation of the optimal flight allocation by means of Mixed Integer Linear Programming (MILP). A case study is presented and serves as proof of concept.","allocation; capacity; trajectory optimization; linear programming; MILP; tailored departures; Schiphol; airport; departures; fuel; noise; optimization; multi objective genetic algorithm","en","master thesis","","","","","","","","","Aerospace Engineering","Aerospace Transport & Operations","","","","" "uuid:c5e0bc71-db3a-4619-a658-b0a773f45904","http://resolver.tudelft.nl/uuid:c5e0bc71-db3a-4619-a658-b0a773f45904","Exploration of optimal orbits in the strongly perturbed environment of the 2001 SN263 triple asteroid system","Obrecht, G.","Doornbos, E.N. (mentor); Cowan, K. (mentor)","2016","For the past 20 years, the small bodies of the solar system, such as asteroids and comets, have been increasingly gathering the interest of scientists and space agencies. The latter have been multiplying the number of space missions to study them. Brazil does not want to be left out and has been working on its own mission, ASTER, which has the particularity of having as a target a triple asteroid system. Although adding great scientific interest to the mission, this characteristic considerably complicates the mission design by making the space probe move in a complex gravitational field and subjecting it to very strong perturbation forces. 
Following past research on the ASTER mission, which mostly dealt with the characterisation of the 2001 SN263 asteroid system, this work focuses on the preliminary design of mission orbits suitable for the exploration of the asteroids. Two phases of the mission are considered: the arrival in the system, which requires a parking orbit; and the exploration phase. For the latter, two scenarios are studied: parallel and sequential observation of the system. To find the optimal orbits for each of these cases, a computer tool has been designed, which comprises an orbit integrator able to propagate the trajectory of a spacecraft within the asteroid system, and an optimiser which uses evolutionary algorithms to find optima from a 5-dimensional search space in a single- or multi-dimensional objective space, according to objective functions that can be chosen and adapted to match the case considered. The computer tool performs well for all cases, and allows general conclusions to be drawn on which kinds of orbits to consider for the ASTER mission. The results show that the solar radiation pressure is by far the most problematic perturbation and is hence driving the properties of the solutions. Among all cases, many optima are terminator orbits, which are by nature robust against solar radiation perturbations. Moreover, orbits closer to the bodies are more stable, and any trajectory too distant from the bodies will be blown away. This work concludes on the suitability of the selected optimisation methods for the orbit design of this mission, although it is advised to further improve the software to model the dynamics of the system in more detail, and closes with recommendations for the ASTER mission. No satisfying parking orbit has been found, and the relative strength of the solar radiation pressure implies that no orbits sufficiently remote from the bodies exist to serve as parking orbits. 
It is recommended to investigate other solutions with active orbit maintenance. As for the exploration phase, the sequential observation scheme proves superior. Satisfying observation orbits can be found about all three bodies, which is not the case for the parallel observation scheme because of the zones of instability present between the bodies.","orbit; optimization; asteroids; triple asteroid; perturbations; four body problem; ASTER","en","master thesis","","","","","","","","","Aerospace Engineering","Astrodynamics and Space Missions","","","","" "uuid:a91c3cf8-01a2-4747-aaea-f56367256905","http://resolver.tudelft.nl/uuid:a91c3cf8-01a2-4747-aaea-f56367256905","Artificial neural networks for determining the optimal process conditions of a gasification process","Vellekoop, E.C.","De Jong, W. (mentor); Winkel, R. (mentor)","2016","","artificial neural networks; pyrolysis; gasification; biomass; optimization; process conditions","en","master thesis","","","","","","","","2020-04-22","Mechanical, Maritime and Materials Engineering","Process & Energy","","","","" "uuid:13fd72f6-946b-4fc1-bcfd-ee1d737abe85","http://resolver.tudelft.nl/uuid:13fd72f6-946b-4fc1-bcfd-ee1d737abe85","Robust fleet planning under stochastic demand","Sa, C.A.A.","Santos, B.F. (mentor); Clarke, J.P. (mentor)","2016","The research objective of this thesis is to develop an innovative airline fleet planning concept that is capable of considering the long-term stochastic nature of air travel demand while generating meaningful results in reasonable computation times. The proposed methodology aims to identify robust fleets, in terms of profit-generating capability across a long-term planning horizon under stochastic demand, through the adoption of a portfolio of fleets (each of different size or composition) and a three-step modeling framework. 
The three models involve the simulation and sampling of stochastic demand using the mean-reverting Ornstein-Uhlenbeck process, iteration over an optimization model that optimally allocates each fleet from the portfolio given the demand sample values, and a scenario generation model that generates scenarios across the planning horizon. A case study is presented and serves as proof of concept.","airline fleet planning; stochastic demand; optimization","en","master thesis","","","","","","","","","Aerospace Engineering","Air Transport and Operations","","","","" "uuid:23623188-d987-49eb-883b-ea52e15f7842","http://resolver.tudelft.nl/uuid:23623188-d987-49eb-883b-ea52e15f7842","Flexible Arrival & Departure Runway Allocation Using Mixed-Integer Linear Programming: A Schiphol Airport Case Study","Delsen, J.G.","Roling, P.C. (mentor); Visser, H.G. (mentor)","2016","Runway capacity of a complex runway system can be limited by several factors. Currently, the runway usage at Amsterdam Airport Schiphol (AAS) is described by a preference list established by multiple stakeholders. It makes an important trade-off between minimizing noise exposure to the environment and maximizing capacity. The existing model does not take into account fuel burn and the ensuing emissions for the current and future demand for flights. This study tries to address this issue. A model has been developed using Mixed-Integer Linear Programming (MILP) by which flights can be allocated to runways, while optimizing for fuel and noise. The research addresses the following research question: Can fuel burn be significantly reduced for aircraft operating at Amsterdam Airport by utilizing a novel flexible arrival and departure runway allocation model, using a predefined set of variables and rules, accounting for noise annoyance, runway capacity and the current and future demand for flights? 
The runway allocation model developed for this study is able to assign aircraft to runways based upon an optimization trade-off between fuel usage and noise exposure to the environment. Selecting a shorter flight- or taxi route may result in lower fuel burn and emissions, while separation- and noise regulations are maintained. A multitude of scenarios is simulated using the allocation model. Different runway configurations are tested. Additionally, different peak moments varying during the day are compared to see when flexible allocation is feasible and most profitable. A set of Pareto optimal solutions can be evaluated in order to determine the optimal runway allocation distribution. The conclusion that can be drawn from this research is that flexible allocation can have significant impact on both fuel usage and emissions, while adhering to the current regulations. Depending on the flexibility of available runways, mainly restricted by separation- and noise regulations, runway demand, local conditions and maintenance, savings are possible. For scenarios where there is room for flexibility, savings are evident. For restricted scenarios, due to wind- or visibility conditions, potential savings exist, although to a lesser extent. The level of runway demand plays a role, as most flexibility and potential savings are obtainable during off-peaks. Annual savings can amount to significant fuel and emission reductions. The described runway allocation tool is generic and scalable to a wide variety of airports and their characteristics. Other airports, a larger set of aircraft and aircraft types, and different arrival and departure operations can all be added to the model due to these generic characteristics. 
This aids further research and eventual application of flexible arrival and departure runway allocation in the aviation industry.","runway; allocation; capacity; MILP; linear programming; Schiphol; airport; scheduling; optimization; fuel; noise","en","master thesis","","","","","","","","","Aerospace Engineering","Control & Operations","","Air Transport & Operations","","" "uuid:24fa41ff-5d3c-4abb-b0b5-cd230e9bf89c","http://resolver.tudelft.nl/uuid:24fa41ff-5d3c-4abb-b0b5-cd230e9bf89c","LED-based photocatalytic reactor design","Li, Z.","Stankiewicz, A. (mentor); Khodadadian, F.M. (mentor)","2016","As a promising technology, photocatalysis shows unique advantages and potential in many disciplines, from hydrogen production, indoor and outdoor air purification, and remediation of non-biodegradable molecules in industry, to organic synthesis with high selectivity. In recent years, photocatalytic semiconductor processes have shown high potential for contaminated-air remediation. Compared with conventional technologies, photocatalysis for pollutant degradation offers advantages as a low-cost, environmentally friendly and sustainable treatment technology aligned with the ‘zero waste’ scheme. The objective of this thesis was to develop a methodology for LED-based photocatalytic reactor optimization by minimizing the reactor cost. 
The methodology is shown to be reliable, and several parameters are checked based on the impact each has on the reactor optimization.","photocatalysis; photocatalytic reactor; optimization; mathematical model; Light Emitting Diode (LED)","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Process and Energy","","SPET - Energy Technology","","51.999335, 4.371127" "uuid:4d22ccdd-f4e7-458e-b455-7405eaba6245","http://resolver.tudelft.nl/uuid:4d22ccdd-f4e7-458e-b455-7405eaba6245","Topology optimization of 3D linkages with application to morphing winglets","De Jong, T.A.","De Breuker, R. (mentor); Gillebaart, E. (mentor)","2016","Topology optimization is the process of optimizing both the material layout and the connectivity inside a design domain. The first paper on topology optimization dates back to 1904, when the Australian inventor Michell derived optimality criteria for minimum weight truss structures. In 1988 Bendsøe and Kikuchi published the pioneering paper ""Homogenization approach to topology optimization"", laying the foundation of numerical optimization methods for topology optimization. Since then, extensive research has been performed both in academia and industry trying to solve different topology optimization problems. Due to its general applicability, topology optimization has been applied to the design of many morphing aircraft structures, including morphing leading edges, trailing edges, or both. It has also been applied to complete morphing wings. Morphing structures have the ability to change their shape throughout the flight. This allows for possible weight savings and/or drag reduction, resulting in reduced fuel consumption. Despite the great interest in morphing winglets from both Airbus and Boeing, topology optimization has not yet been used to design morphing winglets, except for previous work done by E. Gillebaart and R. De Breuker. 
This thesis continues with the research by focusing on the following research objective: ""Developing a software tool to design a mechanism for morphing winglets, using ground-structure based topology optimization, by improving, extending, and expanding the previous 2D in-house tool."" The research in this thesis is based on previous work done by the faculty. The previous 2D tool is improved, its capabilities are extended and the tool is expanded to 3D. The current tool effectively demonstrates how topology optimization, based on the ground-structure approach, can be used to obtain mechanisms for morphing winglets. A two-step optimization strategy is formulated, where the mechanism is designed in the first step and sized to obtain minimum weight in the second step. Both optimizations are done using the globally convergent method of moving asymptotes (GCMMA) optimizer, combined with the adjoint sensitivity technique. Due to the large rotations of the winglet, geometric non-linearity is taken into account using the Green-Lagrange strain measure. Various mechanisms for morphing winglets were successfully designed and sized both in 2D and in 3D. In 2D, mechanisms were found where the cant angle could be regulated; in 3D, mechanisms were found where both the cant angle and the toe angle could be regulated. An aerodynamic load case of 5 [kN] was defined. In 2D, half of this loading was assumed to act on the mechanism, resulting in a minimum weight of 15.0 [kg]. 
In 3D, the minimum weight was found to be 48.0 [kg].","topology optimization; Green-Lagrange; geometric non-linearity; optimization; GCMMA; morphing; morphing winglet","en","master thesis","","","","","","","","","Aerospace Engineering","Aerospace Structures and Materials","","Aerospace Structures and Computational Mechanics","","" "uuid:9e0a6ef6-53ac-422d-9771-16cd8225072c","http://resolver.tudelft.nl/uuid:9e0a6ef6-53ac-422d-9771-16cd8225072c","Wing aerostructural optimization using the Individual Discipline Feasible architecture","Hoogervorst, J.E.K.","Elham, A. (mentor)","2016","At present, a need exists on the aviation market for lighter and more efficient aircraft than those dominating the airspace today. Besides the reduction in operating costs and air pollutants of these new-generation aircraft, this reduction in fuel use can result in several performance advantages, such as increased range, increased payload capacity, decreased take-off field length and decreased take-off noise. The present thesis is an effort to contribute to this reduction of fuel use by performing a gradient-based aerostructural wing optimization of a modern high-speed transport aircraft, the Airbus A320, for minimal necessary fuel weight while maintaining its range specification. The novelty of this work is the use of the Individual Discipline Feasible (IDF) architecture instead of the traditional Multidisciplinary Feasible architecture. Using the IDF approach, the disciplines within the aerostructural optimization are completely decoupled. The consistency of the system as a whole is maintained by the use of equality constraints to equate the output of one discipline to the input of another. No coupled sensitivity information is required because of this decoupled system. This makes the system not only simpler, but also provides more freedom in software choice for the disciplinary analyses. 
Furthermore, the time to perform optimization is reduced, as the work of making the system consistent is removed from the computationally expensive individual disciplines and put in the hands of the cheap optimization algorithm. The CFD solver SU2 is used within the aerodynamic discipline to deform the grid, calculate the flow properties and obtain sensitivities of lift and drag with respect to surface perturbations of the wing. The Euler model is used and the viscous drag component is calculated using a separate estimation. For the structural discipline the FEMWET software is used, providing the structural data including the static aeroelastic deformation of the wing. The optimization design variables are selected to be the angle of attack, the exterior shape of the wing (the airfoil and planform shapes), and the thicknesses of the equivalent panels representing the internal wing box. The problem is constrained by compression, tension, shear, buckling and fatigue failure modes. Moreover, it is constrained by a minimum aileron effectiveness and a maximum wing loading. The aerodynamic analysis is performed under cruise conditions while the wing structure is analyzed under the critical load cases of the reference aircraft. The optimization algorithm chosen is the Sparse Nonlinear Optimizer, based on the Sequential Quadratic Programming optimization algorithm. The optimization resulted in a reduction of the aircraft fuel weight of 11%. 
This has been achieved by reducing induced drag through an increase in span and an improved lift distribution, by reducing wave drag through improved airfoil shapes, and by reducing wing structural weight through a reduction in wing sweep.","aerostructural; MDO; wing; optimization; SU2; FEMWET","en","master thesis","","","","","","","","","Aerospace Engineering","Flight Performance and Propulsion","","Flight Performance and Propulsion","","" "uuid:dee2d842-ca91-4086-8ef8-771cfee1b0dc","http://resolver.tudelft.nl/uuid:dee2d842-ca91-4086-8ef8-771cfee1b0dc","Aerogravity assists: Hypersonic maneuvering to improve planetary gravity assists","Hess, J.R.","Mooij, E. (mentor); Sudmeijer, K.J. (mentor)","2016","Interplanetary missions have used gravitational slingshots around planetary bodies to adjust their heliocentric velocity or inclination for quite some time. The momentum exchange that can be achieved during a so-called gravity assist is limited by the mass of the planetary body. To overcome this limitation, an aerogravity assist was proposed: a maneuver where, in addition to the gravitational forces, use is made of aerodynamic forces to increase the bending angle of the velocity, hence increasing the momentum exchange. To investigate how efficiently an aerogravity assist can change the interplanetary orbital inclination and velocity, a simulator was developed that is capable of simulating both the gravitational and aerodynamic forces on a vehicle during an aerogravity assist. It was determined that waveriders are a type of vehicle suitable for aerogravity assists due to their large lift-to-drag ratio, which reduces the energy dissipation in the atmosphere. The aerodynamic characteristics of a number of waverider shapes were evaluated, after which the one with the largest lift-to-drag ratio was selected. Furthermore, a numerical optimization algorithm was used to develop a reference trajectory planner. 
Finally, a guidance algorithm based on the tracking of drag accelerations was developed and tested to investigate whether the trajectories found would still be feasible under the influence of uncertainties and perturbations. The angle over which the trajectory is bent is a measure of the effectiveness of the aerogravity assist. Using the reference trajectory planner, the maximum possible atmospheric bending angle was investigated for an aerogravity assist at Mars and Jupiter for different initial velocities. From this analysis, it was concluded that extremely high velocities were involved in the aerogravity assist at Jupiter, which resulted in large mechanical and thermal loads. These loads would limit the achievable bending angle when the velocities become too large. For the entry velocities investigated, the velocity bending angle could be increased by 10% for a high entry velocity (80.0 km/s) and by up to 143% for a relatively low entry velocity (68.0 km/s). For an entry velocity of 80.0 km/s, the initial heat-flux peak exceeded the imposed constraints, which prevented the optimization algorithm from finding any solutions. The maximum velocity bending angle that could be achieved at Jupiter was 125.1 degrees at an entry velocity of 68.0 km/s. At Mars, although the heat loads were still larger than for an Earth entry, it is believed that thermal protection systems can be designed that could handle the heat loads. The velocity bending angle could be increased by 490% to 818% depending on the arrival velocity, with a maximum velocity bending angle of 178.5 degrees at an entry velocity of 9.0 km/s. To investigate the effect of an aerogravity assist on an actual mission, two existing missions have been selected: Rosetta for Mars and Ulysses for Jupiter. 
Although neither spacecraft had an aerodynamic shape, which means an aerogravity assist could not have been performed during the actual mission, it has been assumed that these vehicles would have had the geometry of a waverider. During the investigation of the Rosetta swing-by at Mars, a reference trajectory was generated to investigate the amount of velocity decrease that could have been achieved using an aerogravity assist. It was determined that the reduction in velocity could be increased by 167% with respect to a gravity assist: from 2.3 km/s for a gravity assist to 6.2 km/s for an aerogravity assist. For Jupiter, it was investigated whether the orbital inclination could be changed using the aerodynamic force only. As the entry velocity exceeded 80.0 km/s, the heat-flux constraint was removed from the trajectory optimization to allow the optimization algorithm to find solutions. It was possible to change the orbital inclination by 54.2 degrees, but at an extremely large heat load of 40,620 W/cm2. This reconfirms that even though orbital inclination changes are possible using aerodynamic forces, Jupiter is unsuitable for aerogravity assists due to the high velocities and large heat loads associated with an atmospheric maneuver at this planet. Finally, using the aerogravity assist trajectory found for Rosetta, which was generated with the reference trajectory planner, the guidance algorithm was tested. The guidance algorithm was capable of tracking a drag reference under the influence of uncertain initial flight-path angles. The maximum offset in velocity bending angle occurred for a steep entry and was 1.06 degrees, while the maximum offset in hyperbolic excess velocity occurred during a shallow entry and was 1.88 m/s. Furthermore, the tracking was also successful when a more accurate atmosphere model and perturbations were taken into account. 
For this analysis, the maximum offsets in velocity bending angle and hyperbolic excess velocity were 1.24 degrees and 2.14 m/s, respectively.","astrodynamics; aerogravity; gravity; assist; optimization; hypersonics; waverider","en","master thesis","","","","","","","","2017-02-24","Aerospace Engineering","Astrodynamics and Space Missions","","Space Exploration","","" "uuid:ebb54c53-3c79-4e67-9405-7451e2eca850","http://resolver.tudelft.nl/uuid:ebb54c53-3c79-4e67-9405-7451e2eca850","Development of an optimization framework for landing gear design","Van Ginneken, P.","Voskuijl, M. (mentor); Vergouwen, P. (mentor)","2016","An opportunity was identified to improve the traditional landing gear design process. Especially in the conceptual design phase, many man-hours are consumed by making the same calculations over and over again for different concepts. Often, an existing gear is therefore used as an initial starting point, to simplify the design process. This results in little technical progress. Additionally, integration between the different disciplines involved is sub-optimal, which can lead to inconsistent results. In this thesis, an optimization framework is described that can perform the preliminary design of a landing gear fully automatically. It ensures that communication between disciplines is respected by adding a top-level optimizer which is in charge of changing the design variables. The realization of this framework greatly reduces the repetitive tasks in the design phase of a landing gear. 
This makes the design phase less limited to traditional architectures while leaving more time to evaluate non-standard solutions that may be lighter, safer and/or cheaper.","MDO; optimization; landing gear; aerospace engineering; MDF","en","master thesis","","","","","","","","2016-02-10","Aerospace Engineering","Flight Performance and Propulsion","","Flight Performance and Propulsion","","" "uuid:bc447900-1e77-44a3-aa2b-fc0d5f5c292f","http://resolver.tudelft.nl/uuid:bc447900-1e77-44a3-aa2b-fc0d5f5c292f","Vertical Collaboration in a Two-Level Supply Chain: An Agent-Based Modeling Approach","Braams, F.P.","Ludema, M.W. (mentor); Tavasszy, L.A. (mentor); Oey, M.A. (mentor); Vergouwen, Y. (mentor)","2016","Collaboration in the supply chain is nowadays seen in the scientific community as the “next best thing” in supply chain optimization (Ballot, 2015; Barratt & Oliveira, 2001; Barratt, 2004; Ireland & Bruce, 2000). Although widely investigated and often mentioned in the literature, the concept of supply chain collaboration is not precisely defined (Barratt, 2004). It can be roughly described as all the joint efforts of the stakeholders within a supply chain to improve its overall performance (Barratt & Oliveira, 2001; Barratt, 2004). Procter & Gamble (P&G) can be regarded as one of the largest fast-moving consumer goods (FMCG) companies in the world (MBASkool, 2015). Although performing quite well, P&G feels that they can still improve their supply chain (Olsthoorn, 2015). They have expressed the feeling that their main challenge lies in improving their supply chain while being more externally focused (Demange, 2015). P&G have therefore issued this specific project: assessing the effects of vertical collaboration in (one of) their supply chain(s) and providing a handle on how to implement this concept. 
This master thesis report discusses the effect of vertical collaboration in a two-level supply chain; collaboration between the manufacturer (Procter & Gamble) and the retailer (Retailer X). With the help of a case study on the product Dreft Automatic Dishwashing (ADW), the goal was to quantify the effect of increased vertical collaboration within a real-life supply chain. To help structure this research, the following research question was drafted: “Could vertical collaboration in the supply chain of Dreft ADW lead to better service, cost and cash results?” In order to develop an answer to this question, first a literature study was conducted to better define the concept of vertical collaboration. Then, based on a data analysis of the current state of the supply chain, the problem areas were identified, for which interventions were devised on the basis of the concept of vertical collaboration. Subsequently, an Agent-Based Model (ABM) of the supply chain of Dreft ADW was designed to simulate the effects of these interventions, so as to provide the data on the effects of vertical collaboration in the supply chain of Dreft ADW. With the aid of the model, we were able both to answer the research question and to provide the problem owner (Procter & Gamble) with an approach to best optimize their supply chain by using the concept of vertical collaboration. The interventions that were used to embody the effect of vertical collaboration and subsequently tested in the ABM were: - Production batch size alignment to retailer orders. - Alignment of the order information sharing process between retailer and manufacturer during promotions or continuous interaction. - Use of real-time up-to-date information throughout the supply chain in ordering and replenishment. - Use of POS data in the ordering and replenishment process. The results of the model show that vertical collaboration in the supply chain of Dreft ADW could indeed lead to better service, cost and cash results. 
By implementing the interventions in four sequential steps, service levels can be increased without increasing inventory levels. In addition, cost savings of up to 2.4% of the gross value of the sold products can be achieved.","supply chain; collaboration; optimization; FMCG; agent-based modeling; batch-size; real-time information; stakeholder alignment","en","master thesis","","","","","","","","","Technology, Policy and Management","Engineering Systems and Services","","Transport and Logistics","","" "uuid:7897c8cd-1435-4d39-9693-4eda558a3e6b","http://resolver.tudelft.nl/uuid:7897c8cd-1435-4d39-9693-4eda558a3e6b","Determination of the body force generated by a plasma actuator through numerical optimization","Hofkens, A.","Kotsonis, M. (mentor)","2016","In order to extract the body force field that is generated by a plasma actuator from velocity data, most researchers disregard the influence of the pressure gradient to obtain a spatial and temporal description of the body force field. There is, however, some discussion as to whether this assumption is valid. The current research tries to compute the body force field by using a numerical optimization procedure, using a MATLAB optimization routine combined with an OpenFOAM solver which was adapted to accommodate the body force term. Many simplifications had to be made to be able to perform the optimization in a reasonable amount of time, among which were a fairly coarse numerical grid, a first-order discretisation scheme and a parametrization of the body force field. Due to this last simplification, no real conclusions can be drawn with regard to the spatial distribution of the body force, but the integral body forces in x- and y-direction display largely valid behaviour and correspond to previous research. 
It is also shown that the pressure gradient has the same order of magnitude as the body force density in all 8 cases, which means that this research challenges the assumption that the pressure gradient is of little importance when trying to obtain the body force from velocity data.","plasma actuator; AC-DBD; optimization; body force","en","master thesis","","","","","","","","","Aerospace Engineering","Aerodynamics, Wind Energy, Flight Performance and Propulsion","","Aerodynamics","","" "uuid:fc0a57ce-33df-4cd2-a61b-66e702cc9cf8","http://resolver.tudelft.nl/uuid:fc0a57ce-33df-4cd2-a61b-66e702cc9cf8","Earth frozen orbits: Design, injection and stability","Hoogland, J.","Noomen, R. (mentor)","2016","A frozen orbit is an orbit chosen such that the effect of perturbations on (a combination of) the mean orbital elements is minimized. The concept first appeared in literature in 1978, and was applied that same year to the Seasat mission. This altimetry mission featured strict requirements on the accuracy of the altitude of the satellite above the sea surface. By designing an orbit for which the mean eccentricity and mean argument of periapsis remain static, the satellite’s altitude will theoretically be constant, depending only on the location of the sub-satellite point. Classically, the theory behind frozen orbits is based only on the J2- and J3-terms of the spherical harmonics gravity field model and clever manipulation of the Lagrange planetary equations. Through considerable analytical effort, it is possible to include all other zonal gravity field terms in the equations, but this approach is limited to perturbations that can be cast into the form of a disturbing potential. The aim of this thesis is to find a numerical method that overcomes this limit and to use that method to investigate the effects of including third-body gravity, atmospheric drag and solar radiation pressure on the mean orbital elements. 
To do this, the frozen orbit problem is formulated as an optimization problem. Use is made of Differential Evolution (DE) and grid searching to simulate many trajectories and to find a set of injection parameters that results in a minimal variation in the mean eccentricity and mean argument of periapsis. The mean elements are reconstructed from the osculating elements by making use of the Eckstein-Ustinov theory and subsequent numerical averaging. In combination with Precise Orbit Determination (POD) data, this reconstruction is used to investigate the variations in the mean orbital elements of ERS-2 and TOPEX/Poseidon. Subsequently, the numerical method is applied to various orbital dynamics models. When applied to zonal gravity fields, the new method is found to be in good agreement with analytical solutions. The influence of other perturbations on solutions found in zonal models is examined, and it is found that taking these perturbations into account during the optimization process does not lead to significant improvements with respect to the simple zonal case, nor does it lead to significant changes in the injection conditions found. For the assumed satellite characteristics, radiation pressure is found to be the most influential perturbation, causing fluctuations in the mean eccentricity of ±3%.","astrodynamics; frozen orbit; orbital perturbations; mission design; optimization; orbit injection","en","master thesis","","","","","","","","","Aerospace Engineering","Space Engineering (SpE)","","Astrodynamics and Space Missions (AS)","","" "uuid:9cffd913-b3d7-4f0f-a566-0c73f8a7d490","http://resolver.tudelft.nl/uuid:9cffd913-b3d7-4f0f-a566-0c73f8a7d490","JAY: Kitting as optimization tool of aircraft maintenance","Kok, L.M.","Santema, S.C. (mentor)","2015","Kitting as a (LEAN) method in handling and supplying material and tools is not used in the aircraft maintenance process of KLM. 
That was one of the researcher's first findings when starting this study for KLM Royal Dutch Airlines. Kitting is the gathering of components and parts needed for the manufacture of a particular assembly or product. Individual components are gathered together, as a kit, and issued to the point of use (Bozer and McGinnis, 1992). This study investigates and designs a new concept process and physical tool cart using the method of kitting that will facilitate the optimization of the KLM aircraft maintenance process: design of a process, cart (proof of concept) and implementation map using the concept of kitting to enable aviation mechanics to work more efficiently, improving productivity by at least 10%, preferably more. Based on the results of the study and the proof of concept, the researcher suggests that KLM further investigate the application of the method of kitting as an optimization tool to increase productivity in aircraft maintenance. This especially applies in light of the current addition of the 20 B787’s to the KLM fleet. New aircraft are more self-monitoring than ever before, sending exact information about needed maintenance in advance, using predictive “health monitoring” systems. With this data available, kits containing all parts and materials needed for the specified maintenance can be combined. Kitting anticipates future developments in both aircraft design and its required maintenance. 
To be ready for the future, KLM should start today.","Aircraft Maintenance; KLM; process; optimization; Boeing 787; Boeing 737; A-check; tool cart; Tool Trolley; LEAN; SIX SIGMA","en","master thesis","","","","","","","Campus only","","Industrial Design Engineering","Product Innovation Management","","Master of Science Integrated Product Design","","" "uuid:41e4de26-825c-493e-b89a-1e460969bd30","http://resolver.tudelft.nl/uuid:41e4de26-825c-493e-b89a-1e460969bd30","Changes of the Loads Envelope for Wing Stiffness Modifications, In the Frame of Multidisciplinary Design Optimization Purposes","Van Der Wurff, S.","Scharpenberg, M. (mentor)","2015","This work focuses on a multidisciplinary design optimization of an aircraft wing. Among others, a structural optimization of the wing stiffness is performed. For a certain stiffness model, a set of relevant load cases can be determined that have a high chance of causing active constraints. The purpose of this research is to investigate whether the set of relevant load cases changes when a wing stiffness modification occurs. If changes in the relevant set of load cases are small, the decision can be made to calculate a constant set of relevant load cases, in order to reduce computation time in the optimization routine.","aircraft; optimization; flight dynamics; wing; stiffness; flexibility; loads; structure; loads envelope; load cases","en","master thesis","","","","","","","","2019-01-01","Mechanical, Maritime and Materials Engineering","Precision and Microsystems Engineering","","Engineering Mechanics","","" "uuid:0c5a1796-49d7-4e7d-9431-57448f9ce09a","http://resolver.tudelft.nl/uuid:0c5a1796-49d7-4e7d-9431-57448f9ce09a","Optimizing inventory planning for aircraft component maintenance","Alizadeh, K.","Curran, R. (mentor); Verhagen, W.J.C. (mentor)","2015","This research aims to improve the inventory planning for a logistical provider who offers aircraft component maintenance and availability to its customer. 
To this purpose, a classification model is introduced which makes use of two existing classification methods, i.e. the Analytic Hierarchy Process (AHP) and the Cost Criterion, in order to produce a superior classification strategy. Subsequently, the weights obtained through the classification are utilized in a Non-Linear Integer Programming (NLIP) problem in order to optimize the inventory levels. This integrated approach resulted in significant savings of up to 25%. In order to validate the suitability and robustness of the new model, its practical performance is verified through a series of discrete event simulations.","optimization; inventory; spare parts; classification","en","master thesis","","","","","","","","","Aerospace Engineering","Air Transport & Operations","","","","" "uuid:0607236e-c6d1-41a7-873b-c487065cea34","http://resolver.tudelft.nl/uuid:0607236e-c6d1-41a7-873b-c487065cea34","Optimization of Ice-Class Propellers","Huisman, T.J.","Van Terwisga, T.J.C. (mentor)","2015","The main objective of this Master’s thesis is to develop an optimization routine to improve ice-class propeller design methodology using the design space within the ice-class rules. Ice impacts on a ship propeller impose additional design demands to ensure reliability and safety. Consequently, ice-class propellers feature thicker blades, thereby compromising fuel efficiency. However, ships trading to the Baltic states and Scandinavia sail only two to five percent of their time in ice-infested waters. Propulsive efficiency should hence be optimized for ice-free conditions only, while still having sufficient ice performance and strength. The Finnish-Swedish Ice Class Rules prescribe five load cases of uniform pressure that should be applied to the propeller blade. 
The Non-Dominated Sorting Genetic Algorithm II (NSGA-II) is coupled to MARIN’s in-house propeller geometry generator, the hydrodynamic boundary element analysis method PROCAL and a finite element analysis to evaluate the propeller blade strength. Both the radial and chordwise propeller distributions are parameterized by means of Bézier curves into optimization design variables. With this parameterization, the computational framework is capable of automatically satisfying the ice-class stress constraints while converging to the best possible objective values. Each propeller within the optimization is iterated on mean pitch towards a design thrust. The four optimization objectives considered in this Master’s thesis are propeller efficiency, thrust variation throughout the ship’s wake field, propeller mass and ice-induced loading. Efficiency is considered the main objective, while thrust variation is intended to provide interaction with the wake field. Besides the practical importance of the mass objective, it also guides the optimization towards high efficiency and maximum allowable material stresses. Based on a steady simulation of ice milling by means of an idealized ice-load pressure distribution, the ice-induced loading can be estimated as a quantification of ice performance. Best practice guidelines on the usage of PROCAL within the optimization are developed based on grid refinement and numerical uncertainty studies. Four different implementations of the finite element method are compared to the solution from a dense tetrahedral solid element mesh. Linear shell elements appear to perform best, both in terms of computational time and accuracy. A case study shows that ice-induced loading can be reduced as a function of, in particular, the pitch distribution and blade profile geometry. It is also observed that the optimization searches for the weaknesses within the computational methods. 
For instance, it appears that the current ice-class rules allow highly skewed propellers, despite damage cases in practice. The optimization results are encouraging for future work concerning the optimization of blade profiles, although further work is required. It appears that the thrust variation objective steers towards flat chordwise pressure distributions. Cavitation computations are not yet included in the optimization; nonetheless, the optimized propellers show only little cavitation in the tip region. In conclusion, the optimization seems to provide a well-balanced starting point towards the design of high-efficiency ice-class propellers.","ice; propeller; optimization; genetic; class","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Marine & Transport Technology","","Ship Hydromechanics / Resistance and Propulsion","","" "uuid:0dd55171-6768-4e46-b6cd-970eb912e2ac","http://resolver.tudelft.nl/uuid:0dd55171-6768-4e46-b6cd-970eb912e2ac","Aerodynamic Design Optimization of the MTT Radial Micro Turbine","Govindarajan, S.","Colonna, P. (mentor); Pini, M. (mentor); Visser, W.P.J. (mentor)","2015","Micro turbines are touted to become the prime system for combined heat and power (CHP) applications in light of their significant advantages in terms of performance, size, costs and reduced CO2 emissions [1]. Micro Turbine Technology B.V. (MTT) is currently developing a 3 kW recuperated micro turbine for such applications. Commercially available off-the-shelf turbocharger components are used since they provide high performance at relatively low cost, being mass produced. The drawback in using these components is that they are manufactured for the automotive sector and inherently operate at conditions different from the MTT operating point. 
Herein lies interesting scope for performance improvement by optimizing the turbine; within the current work, the focus is on aerodynamic optimization of the radial inflow turbine used in the MTT system. This study is a follow-up to the recommendations provided in [2] and [3]. A goal-driven optimization is performed on the rotor geometry using ANSYS DesignXplorer, and a total of four design solutions were obtained. The most important findings from the response surfaces and sensitivity analysis of the optimization were the following. The parametric sensitivity made clear that all six design variables have a significant impact on efficiency. The exducer angles have the most predominant effect on efficiency, with the shroud angle having a larger effect than the hub angle. All of the optimal candidates exhibited an increase in the total-to-total efficiency ranging from a minimum of 6.38 percentage points to a maximum of 7.90 percentage points compared to the baseline geometry. This efficiency improvement was accompanied by an increase in mass flow rate, with a minimum value of 69.15 g/s and a maximum of 73.51 g/s. These design solutions are then coupled with the diffuser domain to study the performance characteristics and the interaction between the components. The most important outcomes from these simulations were the following. The efficiency of the rotor drops by 3 percentage points on average due to the additional pressure losses introduced when coupled with the diffuser. The diffuser performance has improved, and the Cp experiences a maximum increase of 17.63 percentage points (Candidate D) and a minimum of 12.70 percentage points (Candidate B). The swirl coefficient for optimum diffuser performance is found at values close to 0.22. If the swirl coefficient is increased or decreased from this optimum, diffuser performance drops. The best design solution in terms of rotor efficiency and overall total-to-static efficiency is Candidate C. 
However, it exhibits poorer diffuser performance than the other optimal candidates. From this study, it is apparent that there is a compromise between rotor and diffuser performance. The improvement in rotor efficiency(