TU Delft Repository search results (max. 1000). TU Delft Library.
Columns: uuid; repository link; title; author; contributor; publication year; abstract; subject topic; language; publication type; publisher; isbn; issn; patent; patent status; bibliographic note; access restriction; embargo date; faculty; department; research group; programme; project; coordinates

uuid: 2529862bd90c4ee7a995fcc54281e65a
Link: http://resolver.tudelft.nl/uuid:2529862bd90c4ee7a995fcc54281e65a
Title: Annualized hours: Comparing an exact optimization model with its approximation
Author: Bouwmeester, Marjolein (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: van Essen, Theresia (mentor); van Elderen, Emiel (graduation committee); Dubbeldam, Johan (graduation committee); van der Veen, Egbert (graduation committee); Delft University of Technology (degree granting institution)
Abstract: In this thesis, we propose two mixed integer linear program formulations for an optimization problem that incorporates annualized hours: an exact one and an approximation. The objective of our model consists of three weighted parts: one that minimizes the difference between working hours and contract hours for each employee per week, one that minimizes over- and understaffing, and one that minimizes the difference between contract hours and working hours for each employee over the total planning period. Additionally, the working hours need to be distributed over shifts of a fixed duration. We also consider an extension that introduces skills; in this case, employees can only work on tasks for which they are qualified. To test the proposed formulations, a random data generator was provided by ORTEC. The model should be solvable for data sets of up to 100 employees and 52 weeks (and 5 skills). We have tested it on several data sets of that size with varying weights in the objective function, and compared the run time of the exact model with that of the approximate model for different weights. The approximate model gave a relatively quick approximation of the optimal solution when skills are not considered, and when skills are considered and the weight on the first part of the objective function is varied. For varying the weight on the second part, we used a time-limited version of the exact model to approximate the optimal solution. To approximate the optimal solution when varying the weight on the third part of the objective function, the approximate model is used with extra weight on the first part instead of the third.
Subjects: Annualised hours; MILP; optimization; Scheduling
Language: en
Type: bachelor thesis

uuid: 45de7f7d6ebe4b98b370f0b79e08eca8
Link: http://resolver.tudelft.nl/uuid:45de7f7d6ebe4b98b370f0b79e08eca8
Title: Optimization of steel plate girders in bending: Using FEM analyses on S690 and S890 steel plate girders
Author: van Gemeren, Wouter
Contributors: Abspoel, Roland (mentor); Veljkovic, Milan (graduation committee); Hendriks, Max (graduation committee); Feijen, Mark (graduation committee); Delft University of Technology (degree granting institution)
Abstract: A parametric study using the FEM software package ABAQUS, validated against experimental results found by Abspoel, was conducted on S690 steel plate girders to assess whether the results of previous research were valid. Using a slightly different analytical model, the results show that the maximum web slenderness of this steel grade was the same in both the 6000 mm2 and 12000 mm2 cases. It was shown that the maximum bending moment was reached when the top flange was able to yield, shortly followed by sudden collapse. A new parametric study, extending the optimization of plate girders to S890 steel, was conducted as well, to assess the usefulness of steels with higher yield strength in plate girders subjected to bending. This study again used the geometry used by Abspoel. The results showed a decrease in maximum web slenderness, but still a significant increase in bending moment capacity compared to the S690 plate girders.
It was shown that using optimized S890 plate girders, compared to hot-rolled sections also made from S890 steel, could reduce the use of steel by more than 80%. After the parametric studies showed increasing capacity, the geometry used to numerically model the plate girders was critically examined using small-scale numerical studies with FEM software. These tests showed that not only the slenderness of the web is a factor in the bending moment capacity of a plate girder; the flange geometry also plays a significant role. It was shown that, by increasing the length of the tested part of the girder, the failure mode could change from flange yielding to an unstable mode in which the flange rotates around its longitudinal axis, resulting in a much lower bending moment capacity. An additional investigation into a hybrid steel composition showed the potential of this optimization: by adding lower-grade steel, more ductility was obtained because these parts yield prior to yielding of the compressive flange, resulting in a possibly safer design.
Subjects: Steel plate girders; optimization; bending moment capacity; plate buckling; slender plate girders
Type: master thesis

uuid: 6eb18a633cc543d5a38ca8992b9cddd1
Link: http://resolver.tudelft.nl/uuid:6eb18a633cc543d5a38ca8992b9cddd1
Title: Future City Hydrogen: Reality or Utopia? A techno-economic feasibility study of an optimal stand-alone Solar-Electrolyzer-Battery-Fuel-Cell system for residential utilization
Author: Tamarzians, Michel (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Smets, Arno (mentor); Isabella, Olindo (graduation committee); Rueda Torres, Jose (graduation committee); Delft University of Technology (degree granting institution)
Abstract: The world population is growing rapidly, which leads to an increase in energy demand. Simultaneously, the established energy resources are being depleted and affect the climate negatively. The need for a sustainable and inexhaustible energy source that can meet the increasing energy demand in an ecologically friendly way will play a key role in the 21st century. One of the most predictable and inexhaustible renewable energy sources is the Sun. Nevertheless, changing weather conditions, like rain and clouds, winter and summer, result in daily and seasonal fluctuations. A reliable stand-alone solar system requires a sound storage method to deal with the daily and seasonal fluctuations that can otherwise result in deficit or dumped energy. Generally, a battery bank is adopted in stand-alone solar systems, but its low energy density makes a battery bank unsuitable for seasonal storage. Seasonal storage can be implemented through the production and consumption of hydrogen. Hydrogen has a high energy density compared to batteries (142 MJ/kg vs 0.95 MJ/kg), but its low round-trip efficiency prevents using hydrogen as a daily storage method. For a highly reliable and optimally sized stand-alone energy system, a combination of a battery bank and hydrogen is used as the storage method; the combined storage can be used in times of excess and deficit energy. This results in a so-called stand-alone hybrid PV-Electrolyzer-Battery-FC energy system. In this final thesis project, a stand-alone hybrid PV-Electrolyzer-Battery-FC energy system is modelled and optimized to determine its current and future feasibility, both technological and economic, for residential utilization. A simulation model of the hybrid energy system is designed in TRNSYS. The model is optimized by minimizing the loss of load probability (LLP) and the levelized cost of energy (LCOE) for the stand-alone hybrid PV-Electrolyzer-Battery-FC energy system at residential level in TRNOPT. Several cases are optimized based on the electrical, heat and mobility demand. The optimization method used is a combination of particle swarm optimization (PSO) and Hooke-Jeeves optimization algorithms, implemented in GenOpt. It is established that the proposed stand-alone hybrid PV-Electrolyzer-Battery-FC system is technically feasible for fulfilling the annual electrical demand of a typical Dutch household. The feasible system consists of 19 PV modules, a battery capacity of 25.5 kWh and a tank volume of 1.24 cubic meters, at an LCOE of 1.04 EUR/kWh. If the future prices of the main components can be reduced to 0.01 EUR/Wp for PV, 0.01 EUR/Wh for the battery and 0.01 EUR/W for the electrolyzer and fuel cell, the hybrid system can potentially reach an LCOE of 0.28 EUR/kWh. Price reductions can be realized by large-scale production, large-scale implementation and technology maturity. In the end, an LCOE of 0.17 EUR/kWh can be realized by renewable energy systems if these future prices are realized and the following conditions are met: (1) the roof area is fully covered by PV modules, and (2) the production, consumption and storage of hydrogen are centralized to spread the infrastructure costs over all consumers. This can induce a so-called hydrogen economy in the future, whereby hydrogen gas can be the sustainable link between the increasing energy demand and the depleting fossil fuels.
Subjects: Solar-Battery-Hydrogen System; Alkaline Electrolyzer; PEM fuel cell; Autonomous; Hybrid; optimization; Hooke-Jeeves; Particle Swarm Optimization; Residential; Netherlands
Programme: Sustainable Energy Technology

uuid: 6d0e608eb4d64d7f8f6e1ffed2802347
Link: http://resolver.tudelft.nl/uuid:6d0e608eb4d64d7f8f6e1ffed2802347
Title: An optimization based approach to autonomous drifting: A scaled implementation feasibility study
Author: Verlaan, Bram (TU Delft Mechanical, Maritime and Materials Engineering; TU Delft Delft Center for Systems and Control)
Contributors: Keviczky, Tamas (mentor); Delft University of Technology (degree granting institution)
Abstract: Development of the autonomous vehicle has been a trending topic over the last few years. The automotive industry is continuously developing Advanced Driver-Assistance Systems (ADAS) that partially take over the driver's workload. This has resulted in an increase in vehicle safety and a decrease in fatal crashes [1]. Full vehicle autonomy has not yet been reached, as the control systems involved are not yet capable of handling every situation. One of these critical situations is when a vehicle enters the unstable motion of drifting. A vehicle is prone to drifting on low-friction surfaces, and also during these generally unstable maneuvers the autonomous system should be able to remain in control. The performance of an autonomous drifting controller should be measured against the way experienced rally drivers handle and keep control of a vehicle while drifting. The objective of this thesis is to design a control system that is capable of handling a vehicle during a drifting motion and of following a desired path. Vehicle dynamics are modeled as a three-state bicycle model to simplify the complex dynamics of the vehicle and the interaction between tyre and road.
The definition of longitudinal wheel slip is reformulated into a smooth alternative to accommodate gradient-based solving. With the system dynamics defined, the drifting motion is analyzed and equilibrium points are identified, showing differences between low- and high-friction surfaces. Initially, a Model Predictive Control (MPC) strategy is applied with the purpose of steering the vehicle to desired drifting equilibria. The control system is then extended to provide path-following properties, and the addition of a dynamic velocity controller allows a larger range of equilibria to be reached. The simulation setup is intended to capture the experimental environment in the Network Embedded Robotics DCSC lab (NERDlab) at the Delft Center for Systems and Control (DCSC) department. Simulating a 1:10 scaled model allows investigation of the challenges that arise when implementing the control strategy on a scaled vehicle. These simulations show that autonomous drift control using the designed MPC strategy is possible, even when accounting for possible uncertainties such as delay, noise, and model mismatch.
Subjects: optimization; control; autonomous; drifting; vehicle
Programme: Mechanical Engineering - Systems and Control

uuid: e08c31c21371465da5bc666433945249
Link: http://resolver.tudelft.nl/uuid:e08c31c21371465da5bc666433945249
Title: Landing Gear Design Integration for the TU Delft Initiator
Author: van Oene, Nick (TU Delft Aerospace Engineering)
Contributors: Vos, Roelof (mentor); Brügemann, Vincent (graduation committee); Veldhuis, Leo (graduation committee); Delft University of Technology (degree granting institution)
Abstract: The Delft University of Technology is developing an MDO tool for the conceptual design of transport aircraft. However, the current program is not able to investigate the influence of the undercarriage design on the weight, drag and geometry of transport aircraft. This research proposes a new design method for the undercarriage, for which a new design module is created and integrated into the Initiator architecture. The new method allows the user to investigate the influence of the undercarriage design on the weight, drag and geometry of a transport aircraft concept. By designing the undercarriage for six existing aircraft, it is shown that the updated Initiator is able to reliably and consistently design an undercarriage for a given transport aircraft concept. Two test cases also demonstrate that the new method allows the user to evaluate the impact of the undercarriage on the drag, weight and geometry of the concept.
Subjects: Landing gear; Undercarriage; Concept Design; MDO; Initiator; optimization
Programme: Aerospace Engineering

uuid: 396e511219a048d3b83d9341a9fad583
Link: http://resolver.tudelft.nl/uuid:396e511219a048d3b83d9341a9fad583
Title: Pumping when the wind blows: Demand response in the Dutch delta
Author: van der Heijden, Ties (TU Delft Civil Engineering and Geosciences)
Contributors: Abraham, Edo (mentor); van Nooijen, Ronald (mentor); Palensky, Peter (mentor); Lugt, Dorien (mentor); Delft University of Technology (degree granting institution)
Abstract: This thesis investigates the potential of a large pumping station in IJmuiden, the Netherlands, to participate in demand response. Due to climate change, renewable energy is on the rise. The intermittency of this energy, together with its unpredictable supply, is a big hurdle for the energy transition. Two methods are promising solutions to this problem: large-scale energy storage and demand response. Since large-scale energy storage is not yet economically feasible, demand response has an important role to play in the early days of the energy transition. Using energy when it is generated requires a data stream on production from the generation facilities, which is not (yet) widely available. The market price, however, is an indication of the scarcity of energy, since it is based on the ratio between supply and demand.
Besides that, there is a correlation between a low energy price and sustainable energy production, since the marginal costs of sustainable energy production are lower than those of fossil energy production. This makes using sustainable energy cheaper than fossil energy, and gives demand response a business case. In this thesis, a Model Predictive Control (MPC) scheme is created that uses energy market data to minimize energy costs. Multiple energy markets are analyzed with respect to their suitability for the pumping station in IJmuiden to act on them. The day-ahead market is called the APX in the Netherlands, and this is where energy is bought and sold the day before consumption. The intraday market, also called the flexibility market, is where energy can be bought and sold up to 5 minutes before consumption. A strategy combining these two markets is evaluated. This is done by using a predicted day-ahead price, generated by a SARIMA model, to create a plan. This plan is then followed, but deviations from the plan are allowed against the intraday market price. Due to imperfections of the market (mismatch between supply and demand), imbalances occur. These imbalances result in frequency deviations of the grid, and voltage deviations. TenneT, the Dutch TSO (Transmission System Operator), is responsible for minimizing these imbalances. In order to minimize the imbalance, TenneT gives a real-time indication of the imbalance on the grid; positive contributions are rewarded while negative contributions are penalized. This is done through the imbalance price: a price per volume of imbalance caused or solved. The imbalance price is based on the aFRR market, where bids can be placed for possible activation. Since the imbalance market is a fast-acting market, it is not suitable for a large pumping station like IJmuiden. However, the aFRR market is analyzed in this thesis as well. The effects of expected future developments, like sea level rise and energy market changes, are analyzed and simulated as well. A higher sea level would result in more pumping and less discharging under gravity, which causes the pump schedule to become less flexible. The results show that it is possible to apply demand response to a pumping station, and the intraday market makes it possible for the MPC to adjust its energy use during the day. The aFRR market analysis shows a lot of potential for the pumping station, possibly making up for all energy costs incurred through the spot markets. The conclusion of this thesis is that Rijkswaterstaat can possibly save energy costs on pumping, compared to the fixed energy price they pay now. Based on a reference scenario where the MPC only minimizes energy use, and a fixed ENDEX energy price, the proposed MPC incurs about 10% less cost in the German market scenario. The Dutch market scenario does not show cost savings. In the Netherlands there is not much correlation between low energy prices and renewable energy yet, since renewable energy is not a big part of the Dutch energy mix. This correlation is expected to become more pronounced when the Dutch energy mix becomes more sustainable, which is expected to result in lower CO2 emissions through the energy use of the pumping station. However, more research is needed to confirm this.
Subjects: pumping; demand; response; side; management; smart; grid; sustainable; energy; market; day ahead; intraday; optimization; pyomo; ipopt; NLP; mpc; model; predictive; control; schedule; water; ijmuiden; pumping station; ijsselmeer; markermeer; noordzeekanaal; amsterdamrijnkanaal; rijkswaterstaat
Programme: Civil Engineering - Water Management
Coordinates: 52.470852, 4.601499

uuid: 1a15154f7d084c5cbdc14966f958e498
Link: http://resolver.tudelft.nl/uuid:1a15154f7d084c5cbdc14966f958e498
Title: Automated dig-limit optimization through simulated annealing
Author: Hanemaaijer, Thijs (TU Delft Civil Engineering and Geosciences)
Contributors: Wambeke, Tom (mentor); van Duijvenbode, Jeroen (mentor); Buxton, Mike (mentor); Soleymani Shishvan, Masoud (mentor); Delft University of Technology (degree granting institution)
Subjects: dig-limit; simulated annealing; mine planning; diglines; optimization; metaheuristic; ore-waste classification; dilution; ore loss
Programme: Applied Earth Sciences

uuid: 31642fd0f3824b9aa78c5bfdcb48fa31
Link: http://resolver.tudelft.nl/uuid:31642fd0f3824b9aa78c5bfdcb48fa31
Title: Optimizing closure works: A case study on the Kalpasar closure dam
Author: de Jong, Han (TU Delft Civil Engineering and Geosciences)
Contributors: Jonkman, Bas (mentor); Mooyaart, Leslie (mentor); Broos, Erik (mentor); van den Bos, Jeroen (graduation committee); Delft University of Technology (degree granting institution)
Abstract: Constructing a dam across a tidal basin has always been a long-term integral solution to many water-related problems of the surrounding area, such as flooding, river control and fresh water storage. However, immense challenges accompany the closure works of large basins. This research treats the closure strategy for closing the Gulf of Khambhat in India. The project is known as "Kalpasar" and aims to create a fresh water reservoir in the Gulf of Khambhat by constructing a 35 km dam across the estuary. The Kalpasar project has been on the Indian Government's agenda since 1986.
Royal Haskoning was involved in the pre-feasibility study, which was presented in 1998. However, due to an alignment change to a more northern position, earlier proposed closure work designs are now considered out of date. To keep this research relevant over time and to assist the Kalpasar development project in optimizing a new design for the closure works, this research develops a fundamental parametric optimization tool to quickly perform a first-order evaluation of possible closure strategies on costs. The tool, along with the case results, is delivered to the Kalpasar development project for further design optimization. Closing the tidal basin involves closing a certain wet cross-section along the chosen dam alignment, through which large tidal currents penetrate, caused by tidal differences of up to 11 m. Complexity is caused by increasing tidal flow velocities due to the increasing constriction of the wet cross-section during the closure. The developed optimization tool can evaluate and compare six pre-programmed strategies for closing a multi-sectional wet cross-section in time, on the costs of three fundamental design requirements or "cost factors": required dam material, bed protection and equipment. Using a multi-sectional storage model to compute the flow velocities in the gap, the channels and tidal flats can be individually modeled, after which they are linked as a system. The model reacts as a system to changes in flow area caused by closing certain cross-sections (a channel or a tidal flat). The individual cross-sections can be closed strategically by defining their closure method (horizontal, vertical or sudden), execution phase and construction capacity. These are called "strategic input parameters". Defined for all sections, they determine the closure sequence of the system in time. Optimization is achieved when the strategic input parameters define a closing sequence that minimizes the combined cost of all cost factors. Subsequent to the storage model, three computational models are introduced to quantify the required dam material, bed protection and equipment. Based on earlier research, the material model uses only quarried rock for gradual closures and sluice caissons for sudden closures. The equipment model uses large dump trucks for horizontal closures, and ships or a temporary cableway/bridge system for vertical closures. The construction capacity is linked to the material and bed protection models, since both design requirements are time dependent; increasing construction capacity can therefore decrease these requirements. Since the subsequent models largely depend on the flow velocity, an attempt to validate and calibrate the storage model was made using results from previous research and a 2DH Delft3D model. Deviations with respect to the Delft3D model were significantly large (a factor 2-3), because storage models can only be used if the basin size and the remaining gap are small (usability limits). Therefore, calibration was performed by introducing an artificial contraction factor to compensate for the error in the flow velocity. An exponential relation was determined linking the error to the constriction percentage of the gap. With increasing constriction percentage, the error decreased due to the increasing validity of the storage model's usability limits. The artificial contraction factor can be used to optimize the closure of the Gulf of Khambhat; for general use, however, the model should be calibrated to each specific site. Case study results show that when multiple cross-sections are used to model the bathymetry instead of a single cross-section, the optimal strategy can change from fully vertical to a combination of horizontal and vertical with a specific capacity. Using the developed model for the Kalpasar case is therefore recommended, because the complex bathymetry creates many possible strategies and cannot be reliably modeled with single cross-sectional models. The strategy that showed the most potential for further optimization is: first closing the tidal flats horizontally by forward dumping of rocks, while closing the channels up to 40% of their depth with dumping ships, after which the remaining gap is closed vertically by a cableway or bridge system. This strategy is commonly suggested by the existing literature, which increases the reliability and validity of the optimization model. A second case study showed negative effects of increasing construction capacity on the total cost. However, these case results are based on assumed costs and cost functions for equipment, which should first be verified by contractors. Bed protection requirements did decrease significantly with increasing construction capacity, showing potential for the development of high-capacity closure equipment to avoid these costs. Further development should focus on vertical closure equipment to decrease both material and bed protection costs. To conclude the recommendations: more case studies should be performed to quantify the influence of parameters already included in the model, such as the permeability of the dam, the presence of a tidal power facility and the use of a sudden caisson closure to relieve the final closure. Secondly, further validation of the storage model is essential to generate more reliable results. Furthermore, research should be performed into cost functions of several existing or new high-capacity equipment options for vertical closures, relating costs to construction capacity, to improve the usability of the optimization model.
Subjects: Khambhat; closure dam; closure works; optimization; kalpasar; tool
Programme: Hydraulic Engineering - Hydraulic Structures
Project: Kalpasar Project, Royal HaskoningDHV
Coordinates: 21.993060, 72.318580

uuid: 2f82f333a0c94119ae0f66bb0694c324
Link: http://resolver.tudelft.nl/uuid:2f82f333a0c94119ae0f66bb0694c324
Title: Profit Optimization in Express Networks
Author: van Dijk, Casper (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Aardal, Karen (mentor); Delft University of Technology (degree granting institution)
Abstract: Parcel delivery companies offer time-guaranteed transportation of parcels, letters and packages, picked up at one customer and delivered at another. The time in which this has to be done depends on the service level that the customer pays for. To transport the parcels, a network is used consisting of facilities and links, where the facilities have either a regional collection and distribution function or an interregional processing function. The market share that a company has depends on the price and service time the company offers its customers in comparison with the offers of competitors in the market. Optimizing the total profit, i.e. the total revenue minus the total cost, is an important objective for these companies. The classical approach is to first determine the prices for the different services, which in turn determine the demand, and then to minimize the costs in the network. This project is aimed at finding a solution approach that integrates this process, and in this way finds better-quality solutions in a shorter amount of time.
The chosen approach uses efficient formulations and a general-purpose MILP solver for relatively easy problems, and a local search algorithm based on local branching for more difficult problems.
Subjects: express network; parcel; hub and spoke; profit; profit optimization; optimization; mathematical modelling; price setting; formulation; local search; local branching; price strategy; market share
Programme: Applied Mathematics - Optimization

uuid: 369fabdd992e47e381eeddcc55756085
Link: http://resolver.tudelft.nl/uuid:369fabdd992e47e381eeddcc55756085
Title: Derivative Computation using an Adjoint-Based Goal-Oriented Iterative Multiscale Lagrangian Framework: Application to Reservoir Simulation
Author: de Zeeuw, Wessel (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Heemink, Arnold (mentor); Jesus de Moraes, Rafael (graduation committee); Jansen, Jan Dirk (graduation committee); Verlaan, Martin (graduation committee); Delft University of Technology (degree granting institution)
Abstract: Even though negative effects of the use of crude oil have surfaced over the past years, our energy matrix still largely relies on this energy source. The production of oil therefore plays an important role in our society. Unfortunately, the process of oil production is highly uncertain. There are uncertainties associated with the production strategy, e.g. where and how many wells should be drilled and how these wells should operate, because of the limited knowledge about the subsurface. In this thesis we deal with the uncertainty of the rock permeability distribution. Typically, rock permeabilities vary, but this cannot be perceived from the outside. If the rock permeabilities are estimated inaccurately, they will result in inaccurate pressure solutions, which can lead to faulty decisions regarding the oil exploitation. To resolve this issue, a data assimilation technique may be applied to correct these model parameters based on the mismatch between simulated data and observations. For this optimization technique, gradient information is often required. Since in reservoir simulation the number of parameters is generally extremely high, computing this information is computationally expensive. Therefore, a multiscale framework is employed to improve the computational efficiency of the forward simulation. Multiscale methods are able to solve the model equations at a computationally efficient coarse scale and can easily interpolate this solution to the fine-scale resolution. Next, we use a Lagrangian setup together with a multiscale framework to rederive an efficient formulation for the derivative computation. However, as the multiscale method is prone to errors, this derivative computation formulation is recast in an iterative fashion, using a residual-based iterative multiscale method to provide control of these errors. In this thesis we show that this method generates accurate gradients. In contrast to the high accuracy of the method, it comprises a computationally heavy smoothing step. This issue can be resolved by making smart use of the Lagrange multipliers to rederive an efficient iterative multiscale solution strategy. The multipliers are used to identify the important domains of the region for which smoothing is required and the regions for which we may neglect the smoothing. We show that the newly proposed iterative multiscale goal-oriented method is computationally more efficient, and that the method is promising for efficient derivative computation, but that more work is required to fully demonstrate its benefit.
Subjects: multiscale; gradient; computation; lagrange; multipliers; optimization; goal-oriented; adjoint; iterative multiscale; Porous Media; Flow; reservoir simulation
Programme: Applied Mathematics

uuid: 19dd0340faa2416fbc67bbc9248a7154
Link: http://resolver.tudelft.nl/uuid:19dd0340faa2416fbc67bbc9248a7154
Title: Goal Oriented Optimization of Tailored Modes for Reduced Order Modelling: An alternative perspective on Large Eddy Simulation
Author: Xavier, Donnatella Germaine (TU Delft Aerospace Engineering; TU Delft Aerodynamics, Wind Energy & Propulsion)
Contributors: Hulshoff, Steven (mentor); Delft University of Technology (degree granting institution)
Abstract: This Master's thesis presents a new perspective on Large Eddy Simulation. The capability of a goal-oriented, model-constrained optimization technique to generate stable reduced order models without any additional stabilization term or subgrid-scale modelling has been demonstrated.
The low dimensional projection modes sought by the optimization program comprise the dissipative scales implicitly, thereby ensuring energy balance and eliminating the need for an SGS model.Voptimization; Lagrangian; large eddy simulation; variational multiscale; goal function4Aerospace Engineering  Aerodynamics and Wind Energy)uuid:e129aa531cca469ebf0980142d4b879cDhttp://resolver.tudelft.nl/uuid:e129aa531cca469ebf0980142d4b879c?Multirobot parcel sorting systems: Allocation and path findingXvan den Heuvel, Bram (TU Delft Electrical Engineering, Mathematics and Computer Science)Vvan Iersel, Leo (mentor); Delft University of Technology (degree granting institution)The logistics industry is being modernized using information technology and robots. This change encompasses a new set of challenges in warehouses. Recently, some companies have started using robot fleets to sort products and parcels. This thesis studies those systems, and researches the combinatorial problems that arise within them. Three main optimization problems are identified: 1. Finding an optimal layout of the sorting system on the< warehouse floor; 2. Allocating products or parcels to be sorted to robots; 3. Finding paths that all robots can follow concurrently, without colliding. These problems are considered one by one. The first problem is understood on an intuitive level, while the other two are considered more closely. For both problems, several algorithms are considered. Some utilize greedy heuristics while others model the problem at hand precisely using integer linear programming methods. The algorithm s real world performance is then assessed using a simulation. Slow, ILPbased algorithms are found to produce optimal solutions for small instances. However, they don t scale well, and are unable to solve large instances. 
Greedy approximation algorithms solve all problem instance sizes tested, but produce solutions of lower quality.optimization; sorting; planning; allocation; path; collision; ILP; makespan; heuristic; greedy; disjoint; parcel; robot; hamiltonian; treewidth; dynamic; programming; multi; commodity; flow; conservation; astar; rust; integrality; gap; benchmark; test)uuid:da43fc88c219446b999d24cd0e830a93Dhttp://resolver.tudelft.nl/uuid:da43fc88c219446b999d24cd0e830a93CRoute Optimisation For Mobility-on-Demand Systems With Ride-Sharing{van der Zee, Menno (TU Delft Mechanical, Maritime and Materials Engineering; TU Delft Delft Center for Systems and Control)ZAlonso Mora, Javier (mentor); Delft University of Technology (degree granting institution)B Privately owned cars are an unsustainable mode of transportation, especially in cities. New Mobility on Demand (MoD) services should offer a convenient and sustainable alternative to privately owned cars. Notable in this field is the recent rise of ride-sharing services such as those offered by companies like Uber and Grab. Such services, especially when allowing for multiple passengers to share a vehicle, could potentially be a valuable addition to existing modes of transport to offer fast and sustainable door-to-door transportation.<br/><br/>The optimisation of vehicle routes for a MoD fleet is a complex task, especially when allowing for multiple passengers to share a vehicle. Recent studies have presented algorithms that can optimise routes in real-time for large scale ride-sharing systems, but have left opportunities to further enhance fleet performance. The redistribution of idle vehicles towards areas of high demand and the utilisation of high capacity vehicles in a heterogeneous fleet has received little attention. This work presents a method to continuously redistribute idle vehicles towards areas of expected demand and an analysis of fleets with both buses and regular vehicles. 
Furthermore, a method is proposed to optimise vehicle routes while taking into account vehicle capacities and the future locations of vehicles in anticipation of predicted demand.<br/><br/>In simulations with historical taxi data of Manhattan, 99.8% of transportation requests can be served with a fleet of 3000 vehicles with an average waiting time of 57.4 seconds, and an average in-car delay of 13.7 seconds. Compared to earlier work, a decrease in walk-aways of 95% is obtained for 3000 vehicles, with an 86% decrease in average in-car delay and a 37% decrease in average waiting time. For a small fleet of 1000 small buses of capacity 8, still 84.6% of requests can be served with an average waiting time of 141 seconds and an average in-car delay of 269 seconds. In comparison to prior work, a decrease in walk-aways of 15% is obtained, with a 14% decrease in average in-car delay and a 2% decrease in average waiting time. A heterogeneous fleet of 1000 vehicles consisting of 500 buses and 500 regular vehicles using this new approach can serve approximately the same number of passengers as a homogeneous fleet of 1000 buses using earlier presented algorithms.optimisation; routing; mobility-on-demand; ride-sharing; ride-sourcing; mobility; transport; optimization; Integer Linear Programming problem; ILP; Mixed integer linear programming; MILP)uuid:99d5ed9ac7064cb68caa9dbc8c9822c9Dhttp://resolver.tudelft.nl/uuid:99d5ed9ac7064cb68caa9dbc8c9822c9Searching for two optimal trajectories: A study on different approaches to global optimization of gravity-assist trajectories that have a backup departure opportunity2Perdeck, Matthias (TU Delft Aerospace Engineering)SCowan, Kevin (mentor); Delft University of Technology (degree granting institution)In interplanetary space missions, it is convenient to have a second departure opportunity in case the first is missed. Two distinct approaches to minimizing the maximum of the two DeltaV budgets of such a trajectory pair are developed. 
The first (a priori) approach optimizes the variables of both trajectories at once. The second (a posteriori) approach first minimizes DeltaV budgets for a range of discrete departure epochs, and then selects the pair of which the highest DeltaV is minimum. Furthermore, five different pruning and biasing methods are developed; these prove critical for computational efficiency (number of objective function evaluations). Application to three different gravity-assist (and deep space maneuver) trajectories to Saturn reveals that the a priori approach is more computationally efficient on a trajectory with few variables (3) and that the a posteriori approach is more computationally efficient on a trajectory with many variables (22).[interplanetary; trajectory; optimization; optimisation; gravity-assist; space; flight; flyby
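The a posteriori selection described above reduces to: minimise DeltaV separately per discrete departure epoch, then pick the epoch pair whose worse (maximum) budget is smallest. A minimal sketch of that selection step; the per-epoch DeltaV table is invented illustration data, not a result from the thesis:

```python
# Sketch of the "a posteriori" backup-pair selection: given already
# minimised DeltaV budgets per discrete departure epoch, choose the
# pair of epochs whose maximum DeltaV is minimal.

def best_backup_pair(delta_v):
    """delta_v maps each discrete departure epoch (e.g. days past a
    reference date) to its minimised DeltaV budget [km/s]. Returns
    (worst_delta_v, epoch_1, epoch_2) for the best pair."""
    epochs = sorted(delta_v)
    best = None
    for i, e1 in enumerate(epochs):
        for e2 in epochs[i + 1:]:
            worst = max(delta_v[e1], delta_v[e2])
            if best is None or worst < best[0]:
                best = (worst, e1, e2)
    return best

# Hypothetical per-epoch optimised budgets (km/s), purely illustrative
dv = {0: 7.2, 30: 6.1, 60: 6.4, 90: 8.0}
worst, e1, e2 = best_backup_pair(dv)
```

For this final step the optimum is simply the two epochs with the smallest budgets; the exhaustive pair loop is kept only to mirror the problem statement.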
20230607)uuid:66eca2a7321d44cdba1d9ad501f80177Dhttp://resolver.tudelft.nl/uuid:66eca2a7321d44cdba1d9ad501f80177SEvaluation and optimization of the control system of the Symphony Wave Power DeviceySfikas, Ilias (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Electrical Sustainable Energy)Polinder, Henk (mentor); Dong, Jianning (graduation committee); Smets, Arno (graduation committee); Delft University of Technology (degree granting institution)
Rising environmental concerns have stimulated the development of renewable energy, including energy from the oceans, which hold a huge potential. In this thesis, particular emphasis is given to wave energy, which can deliver up to 2 TW on a global scale. The aim of this thesis is to optimize the control system of the Symphony Wave Power Device, which is a point absorber, so that the energy that is delivered to the electrical grid is maximal and the device functions in a stable way. The device is analytically described in terms of structural parts, operating principle and presentation of all the forces that act on the moving part, which is called the floater. The device is in fact a mass-spring-damper system, for which the spring constant needs to be tuned according to the period of the incoming waves, so as to maximize the energy extraction. For this tuning, not only the actual mass of the floater, but also the added equivalent mass due to the inertia of the inner turbine needs to be taken into account.<br/>The whole device is modelled with the help of a Matlab/Simulink programme, in which simulations can be performed, to observe the motion and make certain calculations. The already existing PI controller, which makes use of an energy error, is briefly described and the relevant calculations for the energy extraction are presented. The energy losses in the electrical parts also need to be taken into account.<br/>To evaluate the current controller, it is necessary to calculate the upper boundary of the energy that the Symphony can obtain from a certain wave. This is done with the help of the GAMS software. The code, as well as the necessary assumptions and approximations, are presented in a mathematical way. The results, both in numerical and graphical form, provide good insight into what the ideal theoretical control system looks like.<br/>Next, simulations are performed in the Matlab programme and comparisons with the GAMS results are made. 
The essential parts of the controller are tuned to their optimal values. Only a proportional part of the PI controller is needed, and the energy should not flow in two directions. <br/>The results show that, with correct tuning of the proportional part, as well as of the spring constant, the Symphony operates very well in all realistic sea states at the location where it will be placed. A high percentage of the theoretical energy boundary is extracted from the waves and the motion of the floater is close to the optimal pattern. It is thus concluded that the existing controller has a remarkable performance, if regulated correctly. Finally, recommendations for future research on many levels are given.waves; control; oscillation; spring; optimization
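The spring-constant tuning described in the Symphony abstract above follows the standard resonance condition for a mass-spring-damper system: with wave frequency omega = 2*pi/T, resonance requires k = (m_floater + m_added) * omega^2, where the added mass accounts for the inertia of the inner turbine. A minimal sketch under that assumption; the masses and wave period below are invented, not actual Symphony parameters:

```python
import math

def tuned_spring_constant(mass_floater, mass_added, wave_period):
    """Spring constant [N/m] that places the natural frequency of a
    mass-spring-damper point absorber at the incoming wave frequency:
    omega = 2*pi/T, so k = (m_floater + m_added) * omega**2.
    Inputs are illustrative values, not Symphony parameters."""
    omega = 2.0 * math.pi / wave_period  # wave angular frequency [rad/s]
    return (mass_floater + mass_added) * omega ** 2

# Hypothetical numbers: 40 t floater, 15 t added mass, 8 s wave period
k = tuned_spring_constant(mass_floater=4.0e4, mass_added=1.5e4, wave_period=8.0)
```

In practice the wave period varies per sea state, so the tuning would be re-evaluated as the dominant wave period changes.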
20230205)uuid:c1ba7b4e129f480ca8947b4727ee0faeDhttp://resolver.tudelft.nl/uuid:c1ba7b4e129f480ca8947b4727ee0faeA model-based approach to the automatic generation of block division plans: On the effective usefulness in ship production optimization algorithmsIde Bruijn, Dirk (TU Delft Mechanical, Maritime and Materials Engineering)TCoenen, Jenny (mentor); Delft University of Technology (degree granting institution)6European shipyards are building increasing numbers of complex ships such as offshore, dredging or naval ships that are engineered-to-order. The block division is made when only the preliminary hull structural design, functional compartments and the location of major equipment are known. The existing production scheduling optimization algorithms use a single manually created block division as a fixed input. Automatic generation of block division plans can potentially improve the currently created production planning solutions, because all work directly related to the construction of the ship on a shipyard is decomposed by the different chosen blocks. The preliminary design information and general arrangement of the ship, implicit knowledge of engineers and detailed information from comparable reference ships are known. A block division generator is developed to create multiple feasible block division solutions. The different block division solutions result in deviations in the relevant optimization objectives. It is concluded that it is possible to automatically generate block division plans that can be effectively used in ship production optimization algorithms. 
Due to simplifications in the block division generator and the erection sequence optimization algorithm, no quantitative optimization potential can be determined.EBlock division; automatic; ship production; ship design; optimization)uuid:14461c4690a344b88b64ac6dae303f88Dhttp://resolver.tudelft.nl/uuid:14461c4690a344b88b64ac6dae303f88UOptimization of offshore wind farm installation procedure with a targeted finish dateXVigney Kumar, Vigney (TU Delft Electrical Engineering, Mathematics and Computer Science)nZaayer, Michiel (mentor); Dewan, Ashish (mentor); Delft University of Technology (degree granting institution)9 The Offshore Wind Farm (OWF) installation procedure is a complicated phase requiring excellent management of resources for timely completion of tasks. The wind farm installation period involves crucial stages like streamlining onshore logistics, transportation of components on vessels, installation of foundations, wind turbines, and cable laying. With wind turbine size growing and OWFs moving into deeper waters, the complexity of the installation procedure is intensifying correspondingly. Moreover, the installation process can experience uncertainty due to harsh weather conditions, possible equipment failures and component delivery delays during the build. Additionally, the resources used in the installation phase are needed for subsequent projects and have limited flexibility with end dates. This results in considerable ambiguity in the end date and the cost incurred during the project. <br/>This graduation study looks at optimization of the OWF installation procedure with a targeted completion date as a priority. In this thesis, an optimization approach is built around ECN in-house software developed for simulating various OWF installation strategies. 
Ultimately, the result of the dissertation is a method that provides added flexibility to simulate different OWF installation plannings while still obtaining optimal installation costs. A concise literature review describes the significance of the current research and the potential that metaheuristic approaches bring to solve installation scheduling problems. Thus, the genetic algorithm is chosen as the optimization procedure for the current work. The objective of the optimization procedure throughout the research is minimizing the total installation cost. The target end date in this study is implemented in the form of a constraint to steer the optimizer solution within the specified limit. A new methodology is proposed to generate an automated planning for the different installation procedures to facilitate the link between the optimizer and the ECN tool. The project also considers uncertainty introduced by the weather and describes the considerations made to account for it. 
The new approach shows the potential of introducing an optimization procedure in OWF installation logistics and ultimately assisting in lowering the overall project costs.offshore wind energy; Installation; ECN Install; optimization; Windenergy)uuid:7285b0a3ace84568b35d2a6dc8180f6bDhttp://resolver.tudelft.nl/uuid:7285b0a3ace84568b35d2a6dc8180f6bKAddicks and Barker Dams: An optimization to minimize damage due to floodingzBrussee, Anneroos (TU Delft Civil Engineering and Geosciences; TU Delft Hydraulic Engineering); van der Doef, Laura (TU Delft Civil Engineering and Geosciences; TU Delft Hydraulic Engineering); Jansen, Lise (TU Delft Civil Engineering and Geosciences; TU Delft Hydraulic Engineering); Oostrum, Natasja (TU Delft Civil Engineering and Geosciences; TU Delft Hydraulic Engineering)Jonkman, Bas (mentor); Kothuis, Baukje (mentor); Sebastian, Toni (mentor); van Berchum, Erik (mentor); Delft University of Technology (degree granting institution)The Addicks and Barker Reservoirs, built in the forties, are located in Houston and collect precipitation and runoff from upstream areas to reduce flood risks along Buffalo Bayou and to protect downtown Houston. During Hurricane Harvey (August 25 – August 30, 2017), the precipitation reached a new record of 910 mm [36.2 inches] in a 4-day period in Houston. The gates of Addicks and Barker Reservoirs were opened during the night of 27–28 August, which led to major damage due to downstream flooding. In addition, non-government-owned land upstream was flooded due to high water levels in the reservoirs.<br/>In this report, new design water levels for Addicks and Barker Reservoir are calculated based on the inflowing discharge into the reservoirs and the precipitation directly onto the reservoirs, including data of Hurricane Harvey. These calculated design water levels are compared with the critical water levels calculated based on the failure mechanisms of the dams. 
This study shows that the original design water levels of the dams, based on the Probable Maximum Flood, are 2.83 m and 1.01 m higher than the critical water levels at which failure of the dams can occur due to piping, for Addicks and Barker Reservoir respectively. However, the maximum allowed water level which is currently maintained by the United States Army Corps of Engineers is 2.19 m and 2.46 m below the calculated critical water level. During Hurricane Harvey, these maximum allowed water levels were exceeded by 3.46 m and 1.93 m.<br/>The damage to residential properties upstream and downstream of the reservoirs is minimized based on the distribution of the excess volume from the inflow of creeks and the precipitation onto the reservoirs. The ratio between the volume which should remain upstream of the dams and the volume discharged into the Buffalo Bayou is calculated for every considered event with its duration and return period. The ratio of Addicks Reservoir is the dominant ratio, which should be used for both reservoirs. Runoff alone already produces damage, especially for the 12h and 24h precipitation, so the Addicks and Barker Reservoirs should not release discharge into the Buffalo Bayou for small durations. For events with a longer duration, it would cause less damage to open the outlets of the reservoirs than to keep them closed. However, if the water level in the reservoir exceeds the critical water level for piping, it is advised to discharge more to the downstream area to prevent breaching of the dams. Since the critical water level is reached for approximately 25% of the events at Addicks Reservoir, mitigations against piping should be taken to improve the minimization of damage. For Barker Reservoir, the critical water level is not reached in the optimization. 
During large events, people living upstream will be more affected by the flooding than people living downstream, since this optimization is based on the damage minimization of residential properties.dam failure; design water level; flooding; damage; optimization; hurricane Harveystudent reportMaster Project ReportMP225
29.71, 95.40)uuid:f259641b8c22423d8c8543943ecf4fa5Dhttp://resolver.tudelft.nl/uuid:f259641b8c22423d8c8543943ecf4fa5KStopping pattern and frequency optimization for multiple transport servicesbvan Beurden, Merlijn (TU Delft Civil Engineering and Geosciences; TU Delft Transport and Planning)Hoogendoorn, Serge (mentor); Cats, Oded (mentor); Warnier, Martijn (mentor); Delft University of Technology (degree granting institution)In my thesis, I have done research into the optimization of stopping patterns and frequencies of public transport services within a network. I have developed a model that takes both passenger and operator costs into account. The model has been run for a small fictional network and for the metro and train network of Amsterdam, to explore how the current public transport services could be improved.YStopping pattern; frequency; optimization; Public Transport; Genetic Algorithm; Amsterdam)uuid:64683ba450b4485fbe7422ec766cbf0dDhttp://resolver.tudelft.nl/uuid:64683ba450b4485fbe7422ec766cbf0d6Global Ascent Trajectory Optimization of a Space Plane:Spillenaar Bilgen, Jesper (TU Delft Aerospace Engineering)SMooij, Erwin (mentor); Delft University of Technology (degree granting institution) The main goal of any launch vehicle is to bring as much payload to space as possible. Space planes have been studied for decades, as they are thought to be more cost effective for frequent access to space than traditional expendable launchers. This research aims at optimizing the payload capacity of a single stage to orbit (SSTO) horizontal take-off, horizontal landing (HTHL) space plane, by flying a minimum-fuel and heat-load trajectory. Trajectory optimization of launch vehicles is traditionally performed with local optimization methods. The objective of this research is to find a good approach to optimize the ascent trajectory with a global optimizer. To achieve this goal, a simulation model is set up. 
This model propagates the ascent trajectory based on a guided angle of attack and throttle setting as a function of the normalized energy state of the vehicle. The guidance parameters are defined at a number of control nodes and are stored in a decision vector. This vector is randomly initialized and subsequently optimized. The performance of different optimization methods and problem settings is assessed based on the convergence of the optimized populations with respect to a set of evaluation objectives. Specifically, multi-objective (MO) global optimizers were selected for this research. The performance of MOEA/D, NSGA-II and NSGA-II-tabu was compared. MOEA/D was found to give the most consistent results and the fastest convergence. Different combinations of control parameters were used. The use of thrust-vector control improved the convergence and the quality of the results, as long as the problem dimension was not oversized. Also, various approaches for constraint and objective handling were evaluated. As the objectives and constraints were highly conflicting, a penalty function had to be included to reduce the sensitivity to premature convergence to non-flying solutions. <br/>The resulting setup was not able to find a solution for the complete trajectory. The trajectory was therefore split into three phases: take-off, acceleration and pull-up. The first two stages were optimized successfully and resulted in similar payload capacities as found in the literature with traditional methods. The final pull-up stage needs to be further investigated. 
Although this research has shown that global optimizers can be used for the ascent trajectory optimization, further research is required before the methods can be applied effectively.%space plane; optimization; trajectory)uuid:6bc3aacfa97b44bc82f9e7fc542852adDhttp://resolver.tudelft.nl/uuid:6bc3aacfa97b44bc82f9e7fc542852ad=Energy-Optimized Toed Walking on Flexible Soles for HumanoidsSvan der Planken, Jonathan (TU Delft Mechanical, Maritime and Materials Engineering)UVallery, Heike (mentor); Delft University of Technology (degree granting institution)In this research the role of thick flexible soles in energy-efficient humanoid walking is analyzed. It is<br/>hypothesized that the addition of underactuated degrees of freedom under the foot gives the robot<br/>the potential to execute a pseudo-passive walking motion 1, which yields a decrease in ankle torque<br/>and energy expenditure. Furthermore it is hypothesized that, if these principles are applied to toed<br/>gait walking patterns instead of flat foot walking patterns, the decreases will be larger in magnitude.<br/>To isolate the effects of adding a sole, a toe joint and both at the same time, four walking types are<br/>compared in simulation: flat foot and toed gait walking, both with and without sole. To assess the cases<br/>without sole, energy-optimized walking pattern generation is used. For walking on soles, the optimized<br/>walking patterns are used as input for a deformation estimator that calculates the sole compression.<br/>Simulation results show that the rolling motion of the sole reduces the ankle torque and the energy<br/>consumption. 
The results show that the reduction effects are especially large for toed gait walking,<br/>thereby validating both hypotheses.flexible; Sole; Energy; optimization; gait; generation; humanoid; deformation; estimation; HRP4; pseudo-passive; passive; walking)uuid:e6ab0d7e5e4342979369cfe07c623eebDhttp://resolver.tudelft.nl/uuid:e6ab0d7e5e4342979369cfe07c623eebHQuantized Distributed Optimization Schemes: A monotone operator approachQJonkman, Jake (TU Delft Electrical Engineering, Mathematics and Computer Science)pHeusdens, Richard (mentor); Sherson, Thom (mentor); Delft University of Technology (degree granting institution)Recently, the effects of quantization on the Primal-Dual Method of Multipliers were studied.<br/>In this thesis, we have used this method as an example to further investigate the effects of quantization on distributed optimization schemes in a much broader sense. Using monotone operator theory, the effect of quantization on all distributed optimization algorithms that can be cast as a monotone operator was researched for two different problem subclasses. The averaging problem was used as an example of a quadratic problem, while the Gaussian channel capacity problem was an example of the nonlinear problem subclass. A fixed bit rate quantizer was used in combination with a dynamic cell width, to analyse the robustness of distributed optimization schemes against quantization effects. 
In particular, we have shown that for practical implementations it is possible to incorporate fixed bit rate quantization with dynamic cell width in a distributed optimization algorithm without loss of performance for both problem classes.quantization; PDMM; monotone operators; Primal-Dual; Method of Multipliers; optimizationElectrical Engineering / Circuits and Systems)uuid:987820a06a9c460f8d5aa7b6c1ba29e7Dhttp://resolver.tudelft.nl/uuid:987820a06a9c460f8d5aa7b6c1ba29e7@Triangulation for seismic modelling with optimization techniques^Wang, Weijun (TU Delft Civil Engineering and Geosciences; TU Delft Geoscience and Engineering)Mulder, Wim (mentor); Slob, Evert (graduation committee); Wellmann, J. F. (graduation committee); Delft University of Technology (degree granting institution)fThe finite-element method can easily handle complicated geometries because of the application of unstructured meshes. Unlike the Cartesian grid used in the finite-difference method, the unstructured mesh can follow the sharp interfaces that separate two layers of different properties. Therefore, the finite-element method can provide more accurate solutions for the simulation of seismic wave propagation. Meshes of good quality are required for the finite-element simulation. However, it is not trivial to set up an appropriate mesh. First<br/>of all, the mesh should contain elements of good shapes and sizes. In addition, the sharp interfaces should coincide with the edges of the elements instead of intersecting them. These requirements are formulated as an optimization problem with three terms, measuring the difference between the actual and prescribed scaling field, the shape quality, and the area between the prescribed curves and the nearest triangle edges. The solution of the optimization problem should provide the desired mesh. The mesh generator MESH2D was applied to obtain an initial mesh. 
The Matlab function minFunc was used to search for the minimum of the constructed objective function. Three weights balance the three terms in the objective function. When it comes to complicated models, these weights have to be chosen carefully to produce a reasonable mesh.4triangulation; optimization; seismic; finite-elementApplied Geophysics)uuid:dabadd3819f4413eb597e8777a9bbb88Dhttp://resolver.tudelft.nl/uuid:dabadd3819f4413eb597e8777a9bbb88HUsing Topology Optimization for Actuator Placement within Motion SystemsLBroxterman, Stefan (TU Delft Mechanical, Maritime and Materials Engineering)ZLangelaar, Matthijs (mentor); Delft University of Technology (degree granting institution)U Topology optimization is a strong approach for generating optimal designs which cannot be obtained using conventional optimization methods. Improving structural characteristics by changing the internal topology of a design domain has been fascinating scientists and engineers for years. Topology optimization can be described as the distribution of a given amount of material in a specified design domain, which is subjected to certain loading and boundary conditions. This domain can then be optimized to minimize specified objectives, for example compliance. For static problems, topology optimization is extensively used. The distribution of material, void and solid regions, can be used to solve several problems within the mechanical domain. However, this method of optimization is also used to optimize structures with respect to their resonant dynamics.<br/><br/>Design of actuator placement is used to determine the optimal actuator layout for a given objective, for example reducing responses. Combined with topology optimization, both design variables can influence each other, and be optimized towards the wanted behavior. This is done in a static domain. When material is removed, the force layout is updated, which influences the material distribution again. 
It is shown that the combination of these design variables in the optimization process contributes to a better result; weight reduction can be achieved, while large deformations are preserved.<br/><br/>Design of actuator placement, combined with topology optimization, is also implemented in a dynamic domain. Since topology changes result in frequency response changes, the force placement is more sensitive. On the other hand, forces can be placed in a smart way, to ensure that some mode shapes are not excited, whereas others are. By enabling positive and negative forces, these forces can even be used to counteract or minimize certain modal responses. When implementing for example a harmonic excitation, the weight and total force can be linked together, to ensure accelerations are feasible. A weight reduction can thus lead to a force reduction, which in turn leads to less deformation. Especially in the high-precision industry, smart placement of actuators, including weight reduction, can be very helpful. The combination of these phenomena could provide new insight in creating accurate wafer stages.topology; optimization; actuator; placement; motion; systems; design; supports; load; load placement; eigenmodes; eigenfrequency; stefan; broxterman; tudelft; msc; pme; precision; mechanisms; compliant; case; wafer; stage; harmonic; excitationPME / PMEEM)uuid:87a93c28cd5f434896cff5855ea77ac1Dhttp://resolver.tudelft.nl/uuid:87a93c28cd5f434896cff5855ea77ac1LOptimaliseren van de service van een taxisysteem met zelfrijdende voertuigenQVooijs, Irene (TU Delft Electrical Engineering, Mathematics and Computer Science)Lin, Hai Xiang (mentor); van der Woude, Jacob (mentor); van Elderen, Emiel (graduation committee); van Iersel, Leo (mentor); Delft University of Technology (degree granting institution)}This project covers the programming and application of a taxi service in which the passengers want to travel from or to the train station. 
Integer linear programming is used to determine which rides the taxis should serve in order to maximize profit. This model is based on the model described in the 2016 report by Xiao Liang et al. <br/>Two models are compared: in Model 1 the system is free to accept or reject requests, while in Model 2 it is decided per zone whether or not all rides are served. The taxi service is first applied on a small scale, after which some adjustments are made so that the model can also be applied on a large scale. The small-scale application always runs at a loss, because the ratio of taxis per zone is very high. For the large-scale application, it turns out that for many taxis the model corresponds closely to Liang's model, but less so for smaller numbers of taxis, because the generation of rides in Liang's model is done less homogeneously. The optimal number of taxis to use is always 20 or 40.?taxi; optimization; taxiservice; Autonomous Vehicles; Modellingnl)uuid:6b07b0c4553442e490670dcb23fc0646Dhttp://resolver.tudelft.nl/uuid:6b07b0c4553442e490670dcb23fc0646A Many-objective Tactical Stand Allocation: Stakeholder Tradeoffs and Performance Planning: A London Heathrow Airport Case Study7Földes, Gergely István (TU Delft Aerospace Engineering)Roling, Paul (mentor); Verhees, Martijn (graduation committee); Melkert, Joris (graduation committee); Curran, Richard (mentor); Delft University of Technology (degree granting institution)GAirports are highly complex systems that can generate economic growth<br/>on their own. Accordingly, airports should take proactive actions to<br/>create a status quo between the stakeholders (the airport itself,<br/>airlines, passengers) in the tactical planning of the aircraft stand<br/>allocation. 
Currently, the harmonization between the stakeholders'<br/>interests is considered either reactively or not at all, so one cannot<br/>be certain that the objectives of the stakeholders are met. For that<br/>reason, a methodology is developed using Weight Space Search on a<br/>many-objective tactical stand allocation model to establish a<br/>reference performance set from which decision alternatives are created<br/>using the k-means clustering algorithm. Decision makers can then<br/>proactively assess and choose decision alternatives on the performance<br/>of the tactical stand allocation to identify how the different<br/>stakeholders can achieve their goals in (partial) synergy. The airport<br/>can also apply the concept of empathetic negotiation to establish a<br/>favorable status quo.:airport; tactical stand allocation; planning; optimization
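The clustering step mentioned above, reducing a reference performance set to a handful of decision alternatives with k-means, can be sketched with a plain Lloyd's-algorithm implementation. The two-objective performance points below are toy data, not Heathrow results, and `kmeans` is a hypothetical helper, not code from the thesis:

```python
# Sketch of the k-means step: compress a reference set of allocation
# performance points (one score per stakeholder objective) into k
# representative decision alternatives, using Lloyd's algorithm.
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Cluster tuples of equal dimension into k groups; returns
    (centers, clusters). Toy stdlib implementation for illustration."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initial centers from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Update step: move each center to its cluster mean.
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centers.append(tuple(sum(x) / len(cl) for x in zip(*cl)))
            else:
                new_centers.append(centers[i])  # keep empty cluster's center
        centers = new_centers
    return centers, clusters

# Toy two-objective performance points (e.g. airline vs passenger score)
ref_set = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(ref_set, 2)
```

Each resulting center stands in for one "decision alternative" summarizing the nearby trade-off points; in the thesis's setting the points would come from the Weight Space Search, not be hand-written.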
20190623)uuid:ef49b460a456433e841aacb793febc53Dhttp://resolver.tudelft.nl/uuid:ef49b460a456433e841aacb793febc53,Optimization of the skidded loadout processGVerhoef, Nick (TU Delft Mechanical, Maritime and Materials Engineering)Kaminski, Mirek (mentor); van Woerkom, Paul (graduation committee); Bos, Reinier (graduation committee); van Kester, Maurice (graduation committee); Delft University of Technology (degree granting institution)
The structures which HMC installs offshore are fabricated onshore and subsequently moved onto a barge or ship, seafastened and then transported to the offshore location. The process of moving a structure from the onshore quayside to the barge or ship is called the loadout. This loadout can be performed by lifting, skidding or using a trailer (SPMT). This thesis research focused only on a skidded loadout onto a barge.<br/>During the loadout the weight of the jacket or topside is gradually transferred from the quay to the barge. The barge gradually takes more of the load, so ballast water needs to be continuously pumped in or discharged depending on the location of the structure and the location of the ballast tank concerned. Improper ballasting during this process will cause the alignment between the quay and the barge to be disrupted, which in turn causes peak loads in the topside or jacket and the barge. The question is whether there are more suitable ballasting methods or a structural solution to lower these peak loads. The effects of quayside stiffness, and the best method to model this stiffness, are also examined.<br/>Therefore a 2D representation of the entire loadout is made. This model is built using the finite element method, via a numerical model, in MATLAB. A base case loadout of a topside is applied to this model. Using this model, optimization of the ballast configuration is researched. Several different criteria for the optimization were tested, and their effects on the forces during the loadout were researched and quantified. The structural solution of relocating the skidbeams to an area of lower deck stiffness was also tested and the results studied. 
The effects of the quayside stiffness and modelling methods were also quantified using the 2D MATLAB model.<br/>The conclusion derived from the optimizations is that there are other ballast configurations which perform better in reducing the peak forces experienced during the loadout. The key to these optimizations is that they keep the barge-quay alignment as close to perfect as possible. If a critical element is present in the loadout, the ballast configuration can be adjusted to lower the forces in this specific element. The results of the simulation in which the skidbeams were relocated show that this approach has no beneficial effect in reducing the forces during the loadout, mainly due to the presence of the transverse bulkheads in the barge. Furthermore, for the modelling of the quayside it was shown that, especially for a low-stiffness quayside, modelling the quayside without taking into account the foundation layer stiffness is inaccurate and can lead to lower forces in the model than those that occur in reality.(loadout; optimization; skidded; ballast)uuid:696e112f697b49e8a524c5efbe0663daDhttp://resolver.tudelft.nl/uuid:696e112f697b49e8a524c5efbe0663daSupport Structure Optimization: On the use of load estimations for time efficient optimization of monopile support structures of offshore wind turbinesMaljaars, J.L.Langelaar, M. (mentor)Over the years, the installed capacity of offshore wind turbines has been increasing rapidly. However, the Levelized Cost Of Energy (LCOE) is still higher than the LCOE of traditional energy production methods like nuclear power or energy from coal or gas. This research focuses on a further decrease of the LCOE by minimizing the mass of a monopile support structure of a wind turbine. This is done in a so-called integrated way: optimizing the tower and the foundation together. The design variables used in this research are the wall thickness and the diameter of every 3-meter section. These sections can be cylindrical or conical. 
To simplify the problem, a parametrization of the designs is used, which reduces the number of design variables from around 180 to 28. This parametrization is checked against existing designs. Due to the interaction between mostly the first eigenfrequency and eigenmode, the diameter, and the waves, it is expected that several local optima exist. Therefore, the proposed optimization strategy is a Particle Swarm Optimization, used as a global search for an initial position for a gradient-based optimization that finds a local optimum, which is possibly the global optimum. In this research, the focus is on the Particle Swarm Optimization. The constraints of the optimization are fatigue, buckling, the maximum deflection of the monopile, the angle of the conical parts, and the D/t ratio of the monopile. These are used in the initial design of support structures, so that the optimized designs are realistic. To take the constraints into account, the objective is taken as the mass extended by the penalized constraints. To reduce the optimization time, the evaluations of the objective function are done using load estimations instead of extensive load calculations. Several methods are compared on a theoretical basis: Response Surface Methodology, Radial Basis Functions, Kriging, Support Vector Regression, Multivariate Adaptive Regression Splines and Non-Uniform Rational B-Splines. The performance of a selection of methods is checked on the problem, to come up with reliable estimation methods. To improve the accuracy of the estimations, interaction between the Particle Swarm Optimization and the estimators is proposed via estimator updating. During this research, an optimization tool for monopile support structures is developed. This tool is able to use calculations or estimations of the loads. In order to study the behaviour of the proposed optimization approach and to compare it with the traditional design approach, several case studies are formulated based on a realistic design problem. 
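The penalized-objective Particle Swarm Optimization described above can be sketched as follows; the objective and constraint here are illustrative stand-ins, not the monopile model:

```python
import random

# Toy particle swarm optimization with a penalized objective, sketching the
# "mass plus penalized constraints" strategy described above. The objective
# (a quadratic "mass") and the constraint x0 + x1 >= 1 are made up.

def penalized_objective(x):
    mass = (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2       # stand-in "mass"
    violation = max(0.0, 1.0 - (x[0] + x[1]))          # constraint slack
    return mass + 1e3 * violation ** 2                 # quadratic penalty

def pso(f, dim=2, n_particles=20, iters=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                        # personal bests
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

Here the unconstrained optimum (3, 2) already satisfies the constraint, so the swarm should settle there with a near-zero objective value.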
These are optimized with the optimization tool. Using a constant tower diameter, the optimization tool is able to reduce the mass of the support structure by 13%. Using the tower diameter also as a design variable in the optimization gives a further mass reduction of 4%. Several test runs are done to check whether a global optimum is found or not.wind energy; wind turbine; offshore wind turbine; support structure; optimization; estimators; radial basis functions; kriging; support vector regression; nurbs; response surface methodology; estimator updating; integrated optimization.Mechanical, Maritime and Materials Engineering*Precision & Microsystems Engineering (PME)Engineering Mechanics)uuid:642f10762f8a4ad391ebea7b6c40f2dfDhttp://resolver.tudelft.nl/uuid:642f10762f8a4ad391ebea7b6c40f2dfTractable Reserve Scheduling Formulations for Alternating Current Power Grids with Uncertain Generationter Haar, O.A.Keviczky, T. (mentor); Rostampour Samarin, V. (mentor)The increasing penetration of wind power generation introduces uncertainty in the behaviour of electric power grids. This work is concerned with the problem of day-ahead reserve scheduling (RS) for power systems with high levels of wind power penetration, and proposes a novel setup that incorporates an alternating current (AC) Optimal Power Flow (OPF) formulation. The OPF-RS problem is non-convex and in general hard to solve. Using a convex relaxation technique, we focus on systems with uncertain generation and formulate a chance-constrained optimization problem to determine the minimum cost of production and reserves. Following a randomization technique, we approximate the chance constraints and provide a priori feasibility guarantees in a probabilistic sense. However, the resulting problem is computationally intractable, because its computation time grows polynomially with respect to the size of the power network and scheduling horizon. 
In this thesis, we first use the so-called scenario approach to approximate a convex set which almost surely contains the probability mass of the underlying random events. We rely on the special property of reserve scheduling problems which leads to constraint functions that are linear with respect to the uncertain parameters. We can therefore formulate a robust problem for only the vertices of the approximated set. Using the proposed approach, the number of scenarios is reduced significantly, which is beneficial for tractability. Such a formulation requires the power network state to be feasible only for all vertices of the convex approximated set. To relax such a requirement even further, we develop a novel RS formulation by considering the network state as a nonlinear parametrization function of the uncertainty. By using a conic combination of matrices, only three positive semidefinite constraints per time step are considered. Unlike existing works in RS, our proposed parametrization has a practical meaning and is directly related to the distribution of reserve power. Such a reformulation yields a reduction in the computational complexity of OPF-RS problems. Finally, we extend our results to a more realistic size of power grids, using the sparsity pattern and spatial (multi-area) decomposition of the power networks, leading to a decomposed semidefinite programming (SDP) problem. To solve the SDP in a distributed setting, we formulate a distributed consensus optimization problem, and the alternating direction method of multipliers (ADMM) algorithm is then employed to coordinate local OPF-RS problems between neighbouring areas. 
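The ADMM coordination step mentioned above can be illustrated on the simplest consensus problem: each "area" holds a local quadratic cost and all areas must agree on a shared variable. This scalar sketch is a stand-in for the decomposed OPF-RS problem, not the thesis formulation:

```python
# Sketch of consensus ADMM: each area i minimizes a local quadratic
# f_i(x) = 0.5 * (x - a_i)^2 while agreeing on a shared variable z.
# The consensus optimum of the sum of these costs is the mean of the a_i.

def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    x = [0.0] * n          # local copies of the shared variable
    u = [0.0] * n          # scaled dual variables
    z = 0.0                # consensus variable
    for _ in range(iters):
        # local step: argmin_x 0.5*(x - a_i)^2 + (rho/2)*(x - z + u_i)^2
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        z = sum(x[i] + u[i] for i in range(n)) / n   # consensus (averaging)
        u = [u[i] + x[i] - z for i in range(n)]      # dual update
    return z
```

For quadratic local costs the iteration contracts geometrically, so a modest number of coordination rounds suffices.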
The theoretical developments in the aforementioned cases were validated on a realistic benchmark system, and a discussion on the tractability of the resulting optimization problems by means of a computational time analysis is presented.power system; optimization; uncertainty; renewable energy; wind power generation; reserve scheduling; optimal power flow; reserve requirements; scenario approach; alternating direction method of multipliers; distributed solving; vertex enumeration; conic parametrization+Delft Center for Systems and Control (DCSC))uuid:faa2c6dce5ca4486b607d963f650dad2Dhttp://resolver.tudelft.nl/uuid:faa2c6dce5ca4486b607d963f650dad2Improved Flexible Runway Use Modeling: A Multi-Objective Optimization Concerning Pairwise RECAT-EU Separation Minima, Reduced Noise Annoyance and Fuel Consumption at London Heathrowvan der Meijden, S.A.Roling, P.C. (mentor); Visser, H.G. (mentor)Minimizing the disturbance caused by aircraft noise events and reducing fuel consumption during the initial and final phases of flight: these are the two objectives that play an important role in the Flexible Runway Allocation Model. By taking fuel consumption into account alongside noise annoyance, this model makes it possible to analyze and optimize runway allocation from a broader perspective. This study aims to identify the improvements that can be made with respect to the initial Flexible Runway Use Model. Accordingly, these enhancements should be implemented and quantified in order to establish the Improved Flexible Runway Allocation Model. The improvements found in this study relate to both objectives in the mixed integer linear programming optimization as well as to particular linear constraints. A major contribution is made to the runway occupancy constraint, which transitions from a single-aircraft computational method to a pairwise flight separation approach based on RECAT-EU. 
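Sweeping the weight in a weighted sum of the two objectives is the standard way to trace the Pareto trade-off described above. A toy continuous stand-in, with a closed-form minimizer in place of the MILP:

```python
# Weighted-sum sweep over two competing objectives, standing in for the
# noise-vs-fuel trade-off: f1(t) = t^2 and f2(t) = (t - 1)^2 over a single
# decision variable t. Setting the derivative of w*f1 + (1-w)*f2 to zero
# gives the minimizer t = 1 - w in closed form.

def weighted_sweep(n_weights=11):
    pareto = []
    for k in range(n_weights):
        w = k / (n_weights - 1)
        t = 1.0 - w                               # argmin of w*t^2 + (1-w)*(t-1)^2
        pareto.append((t * t, (t - 1.0) ** 2))    # resulting (f1, f2) point
    return pareto
```

As the weight moves from 0 to 1, the first objective improves monotonically while the second worsens, which is exactly the trade-off curve a Pareto front visualizes.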
The proposed Improved Flexible Runway Allocation Model is applied to a case study that represents daily operations at London Heathrow Airport. This model shows that, by assigning a small delay to inbound and/or outbound flights, significant contributions can be made with respect to noise annoyance in the vicinity of the airport as well as the overall fuel consumption from the airline's perspective. By allowing opposite direction operations, flexibility is added to the use of the airport's runway ends, which results in a more efficient utilization of the available capacity. The results of this analysis are visualized by means of a Pareto front, indicating the Pareto optimal solutions to a runway allocation assignment based on a differentiation in objective weights.runway; allocation; capacity; MILP; Linear Programming; Heathrow; London; optimization; fuel; noise; noise annoyance; Pareto; RECAT-EU; separation minima; opposite direction operations; flexible; flexible runway allocationControl & Operations Air Transport & Operations (ATO))uuid:03af3d1b98d84c1499ffa448b4f4b2d0Dhttp://resolver.tudelft.nl/uuid:03af3d1b98d84c1499ffa448b4f4b2d0Models, Solutions and Relaxations of the Asymmetrical Capacitated Vehicle Routing ProblemKerckhoffs, L.Aardal, K.I. (mentor)In this thesis, we study the Asymmetrical Capacitated Vehicle Routing Problem (ACVRP). We examine different possible formulations for the problem and choose one based on the ease of implementation, the computation speed of solving it, and the available relaxations. The problem, and its relaxations, are modeled and solved using AIMMS, a commercial modeling package. Using the methods described above, we model different cases and instances of the problem using a Two-Index Vehicle Flow formulation. We apply an Assignment Problem relaxation and a Linear Programming relaxation to each of the instances. 
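The Assignment Problem relaxation mentioned above drops the subtour-elimination constraints of the routing problem, so its optimum is a lower bound on the routing optimum. A tiny brute-force illustration on a made-up asymmetric instance:

```python
from itertools import permutations

# The Assignment Problem (AP) relaxation of a routing problem keeps only
# "one successor and one predecessor per node", allowing subtours, so its
# optimum lower-bounds the single-tour optimum. Costs below are made up.

INF = float("inf")
C = [[INF, 2, 9, 10],
     [1, INF, 6, 4],
     [15, 7, INF, 3],
     [6, 8, 2, INF]]    # asymmetric cost matrix

def assignment_lb(c):
    """Min-cost successor assignment with no fixed points (AP relaxation)."""
    n = len(c)
    return min(sum(c[i][p[i]] for i in range(n))
               for p in permutations(range(n))
               if all(p[i] != i for i in range(n)))

def optimal_tour(c):
    """Best single closed tour visiting all nodes (the unrelaxed optimum)."""
    n = len(c)
    best = INF
    for p in permutations(range(1, n)):
        route = (0,) + p + (0,)
        best = min(best, sum(c[route[k]][route[k + 1]] for k in range(n)))
    return best
```

On this instance the AP relaxation chooses two cheap 2-cycles (cost 8), while the best single tour costs 17, showing the typical gap between bound and optimum.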
We find that the problem is easiest to solve when all customers are relatively close to each other (as opposed to being placed in separate clusters that are relatively far from each other), and that the LP relaxation gives bounds of fairly good quality in short periods of time.TSP; Vehicle Routing; VRP; ACVRP; optimization; online supermarket; relaxationElectrical Engineering, Mathematics and Computer Science&Delft Institute of Applied MathematicsOptimization)uuid:69719e2d564947daa39ee9107487eab1Dhttp://resolver.tudelft.nl/uuid:69719e2d564947daa39ee9107487eab1Creating an optimal OR schedule regarding downstream resourcesCarlier, M.Van Essen, T. (mentor)A high percentage of hospital admissions is due to surgical interventions. The operating theatre, which holds the operating rooms (ORs), is therefore one of the key resources in hospitals. Managing the operating theatre and finding an optimal OR schedule is complex due to the many factors that influence it. Scheduling a surgery in an OR influences downstream facilities like the post-anaesthesia care unit, intensive care unit and general patient wards. This research was conducted at Leiden University Medical Centre (LUMC), an academic teaching hospital in Leiden, the Netherlands. During the week, the LUMC experiences a large variation in bed occupancy at the patient wards due to the way surgeries are scheduled. This large variation in bed occupancy causes surgeries to be cancelled because there are no beds available at the ward. Because the OR theatre is such an expensive resource, we want to find a schedule that utilises the ORs optimally during opening times. In this research, we develop a clustering method to cluster surgical procedures into surgery groups based on surgery duration and length of stay. Then, we extend a model that analytically determines the patient distributions over the wards and intensive care for a given OR schedule. 
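Clustering surgical procedures by (surgery duration, length of stay) can be sketched with plain k-means on made-up data; the thesis' actual clustering method may differ:

```python
import random

# Plain k-means over (duration in hours, length of stay in days) points,
# sketching the grouping of surgical procedures into surgery groups.
# The six data points below are invented for illustration.

POINTS = [(1.0, 2.0), (1.5, 2.0), (1.0, 3.0),      # short, quick-recovery
          (6.0, 10.0), (7.0, 9.0), (6.5, 11.0)]    # long, long-stay

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)                # initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                           # assignment step
            j = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                            + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        centers = [                                # update step (means)
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters
```

On this toy data the algorithm recovers the short/long split regardless of which two points initialize the centers.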
We define a mixed integer programming model with the objective to maximise the OR utilisation and minimise the variation in bed occupancy at the wards and intensive care. The model produces an OR schedule with the defined surgery groups assigned to days in the OR. We use two different methods to solve the model: a global approach and a local search heuristic, i.e., simulated annealing. The model has one nonlinear constraint and a complex objective function. Therefore, we linearise the constraint and the objective function, which results in a mixed integer linear program that is proven to be NP-hard. Both approaches are tested on a dataset provided by the LUMC. Furthermore, several scenarios are evaluated. We conclude that the mixed integer linear programming method performs better and faster than the simulated annealing procedure. To obtain an even better solution, it is possible to use a combination of both. By using this method, the OR utilisation of the LUMC can improve by 11% and the variation in bed occupancy can be decreased by 80%.master surgery schedule; Operating room scheduling; bed occupancy; mixed integer linear programming; simulated annealing; length of stay; optimization; hospital)uuid:5321a5d4ab404403b09c70c617abfc77Dhttp://resolver.tudelft.nl/uuid:5321a5d4ab404403b09c70c617abfc77Optimization of Island Electricity System: Transition to a sustainable electricity supply system on islands through the implementation of a hybrid system including ocean energy technologiesvan Velzen, L.Blok, K. (mentor)Climate change without adequate countermeasures has become one of humanity's greatest threats. Energy production by means of renewable energy sources is therefore one of the crucial measures that will play a paramount role in reducing the pollutant emissions of fossil fuel dependency. Small islands in particular are an exemplary case of extraordinary dependence on oil, their energy systems often being entirely dependent on diesel generators. 
The relatively high cost of sustaining this practice, in combination with the geo-economic properties of islands, provides a unique incentive for the transition to renewable energy. By definition, islands are surrounded by water, making them highly vulnerable to the effects of climate change. Yet being surrounded by water also provides a vast set of possibilities: harnessing energy from waves, tides and the difference in seawater temperatures (OTEC) are just some of the examples. In this thesis, the effect of ocean energy integration is investigated. A simulation and optimization model of the electricity supply system is developed, and a multi-objective genetic algorithm optimization regarding cost (LCOE) and renewable energy integration is performed. The model covers PV solar, wind, tidal, wave and OTEC, as well as battery storage, as components of a renewable energy system. The resulting model is applied to two case study islands (Shetland and Aruba), and the effect of the hybrid system including ocean energy technologies is determined. The cost-optimal system was found to produce energy with an LCOE below the conventional fossil fuel energy cost. This corresponds to a renewable energy share of approximately 65%, consisting solely of wind energy. Cost was determined to have a significant influence on the system configuration. Currently, due to their high cost of energy in the pre-commercial stage, ocean energy sources are only added to the energy mix at high renewable energy shares (above 75% renewable coverage). The hybrid systems including the ocean energy sources displayed an evenly spread energy production. Based on this study, the future of integrating ocean energy provides an encouraging outlook, although costs will need to be reduced further for ocean energy to become economically viable. 
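The LCOE metric used in the cost comparison above is the discounted lifetime cost divided by the discounted lifetime energy production. A minimal sketch with invented capex, opex and yield figures:

```python
# Levelized Cost Of Energy (LCOE): discounted lifetime costs divided by
# discounted lifetime energy production. All input numbers are illustrative.

def lcoe(capex, opex_per_year, energy_per_year_mwh, years, discount_rate):
    disc_cost = capex + sum(opex_per_year / (1 + discount_rate) ** t
                            for t in range(1, years + 1))
    disc_energy = sum(energy_per_year_mwh / (1 + discount_rate) ** t
                      for t in range(1, years + 1))
    return disc_cost / disc_energy   # currency units per MWh
```

With a zero discount rate this reduces to total cost over total energy, which is a convenient sanity check.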
With the right investments in ocean energy, this process can be accelerated and ocean energy will become viable.Ocean energy; renewable energy; electricity system; optimization; simulation!Technology, Policy and Management!Engineering, Systems and Services)uuid:1f228e88c7e7431d96afdf1abb195eddDhttp://resolver.tudelft.nl/uuid:1f228e88c7e7431d96afdf1abb195eddMaintenance Optimization of Tidal Energy Arrays: Design of a Probabilistic Decision Support Tool for Optimizing the Maintenance PolicyDe Nie, R.C.Wolfert, A.R.M. (mentor); Jarquin Laguna, A. (mentor); Leontaris, G. (mentor); Hoogendoorn, C.F.D. (mentor)The increasing demand for electricity offers many opportunities for renewable energy production, one alternative being tidal stream energy. Several feasibility studies have shown that the global tidal stream energy potential can contribute significantly to renewable energy production. This tidal energy can mostly be produced at 'tidal hotspots', where the kinetic energy density is very high due to fast flowing tidal currents. However, tidal technology is not yet cost-competitive in comparison with other renewables, such as photovoltaic and wind energy, which is why further cost reductions and efficiency improvements are to be achieved. Interviews with existing tidal system developers provided insight into the cost breakdown and showed that maintenance accounts for a significant share of the total project costs. This is due to the harsh environmental conditions, which impose a large uncertainty and increase the complexity of selecting an optimal maintenance policy. Damen Shipyards has shown interest in entering the tidal industry and is exploring the cost reduction possibilities by developing its own tidal system. This thesis contributes to Damen Shipyards' research by performing a time series analysis of a tidal hotspot to identify and model the multivariate dependence of the governing environmental phenomena. 
A probabilistic decision support tool is developed for selecting the optimal maintenance policy. The decision support tool primarily determines when and to what extent corrective maintenance should be performed. The corresponding overall maintenance costs are also calculated, and secondary information regarding the activity duration is given. By means of the probabilistic approach, which captures the weather window uncertainty due to the environmental randomness, the results can be interpreted by the user based on the desired confidence level. In this research, the weather window uncertainty is implemented by simulating a large number of random but statistically identical environmental time series, which are based on available measurement data of the tidal field at EMEC, located at the Orkney Islands in the United Kingdom. The multivariate dependence between the significant wave height, wave peak period, wind velocity and current velocity is identified in the measurement set and fully represented in the generated time series by means of a pair-copula construction simulation. The requirement of time independence cannot be met in the original dataset, which is why a new simulation approach is developed. This method consists of a sequential simulation of pair-copula constructions to include both the time dependence and the multivariate dependence in the synthetic time series. Simulation of the set of synthetic time series proved to be more effective for describing uncertainty than exclusively using the original dataset, due to the possibility of including more environmental realizations. The tidal array is represented as a semi-Markov decision process, which captures all costs and transition processes related to the deterioration and maintenance decisions. A policy optimization algorithm can then be used to find the optimal set of decisions and the corresponding maintenance cost rate, which includes both the direct and indirect maintenance costs. 
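The pair-copula constructions above are beyond a short sketch, but their core idea, drawing dependent environmental variables, can be shown in the simplest bivariate Gaussian form (an assumption for illustration only; the thesis fits full pair-copula models):

```python
import math
import random

# Simplest bivariate analogue of the dependence modelling above: draw
# correlated standard-normal pairs (the Gaussian dependence structure in
# its most basic form) that could drive, e.g., significant wave height and
# wind speed after marginal transformation. Illustrative only.

def correlated_pairs(n, rho, seed=42):
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        eps = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * eps  # Corr(z1, z2) = rho
        pairs.append((z1, z2))
    return pairs

def sample_corr(pairs):
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a, _ in pairs) / n)
    sy = math.sqrt(sum((b - my) ** 2 for _, b in pairs) / n)
    return sum((a - mx) * (b - my) for a, b in pairs) / (n * sx * sy)
```

The sample correlation of a large synthetic set recovers the target dependence parameter, which is the basic validation step behind the thesis' richer copula simulation.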
The novel tidal system design of Damen Shipyards is then plugged into the decision support tool to determine the optimal maintenance policy and maintenance costs. The effects of different levels of detail for representing the tidal system have been compared, and the benefits, in terms of cost reductions, of using this decision support tool with respect to less advanced approaches have been highlighted. Furthermore, multiple scenarios have been elaborated to identify the sensitivities in the cases of accounting for unreliability in the failure rates, varying the number of platforms in the array and including the economic fluctuations of the maintenance vessel day rates.probabilistic; tidal energy; maintenance policy; optimization; semi-Markov decision process; copula; multivariate dependence; decision support tool
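The semi-Markov decision process with policy optimization described above can be reduced, for illustration, to a plain-Markov maintenance model solved by value iteration; all states, probabilities and costs below are made up:

```python
# Tiny Markov decision process for a maintenance policy, solved by value
# iteration. The thesis uses a semi-Markov process with a policy
# optimization algorithm; this is only the plain-Markov skeleton.

GAMMA = 0.95
STATES = ["good", "worn", "failed"]
WAIT_P = {  # transition probabilities under "wait" (do nothing)
    "good":   {"good": 0.8, "worn": 0.2, "failed": 0.0},
    "worn":   {"good": 0.0, "worn": 0.7, "failed": 0.3},
    "failed": {"good": 0.0, "worn": 0.0, "failed": 1.0},
}
WAIT_COST = {"good": 0.0, "worn": 4.0, "failed": 50.0}  # lost production
REPAIR_COST = 10.0                                      # returns to "good"

def value_iteration(iters=500):
    v = {s: 0.0 for s in STATES}
    for _ in range(iters):
        v = {s: min(WAIT_COST[s] + GAMMA * sum(p * v[t]
                                               for t, p in WAIT_P[s].items()),
                    REPAIR_COST + GAMMA * v["good"])
             for s in STATES}
    policy = {}
    for s in STATES:
        wait = WAIT_COST[s] + GAMMA * sum(p * v[t] for t, p in WAIT_P[s].items())
        repair = REPAIR_COST + GAMMA * v["good"]
        policy[s] = "repair" if repair < wait else "wait"
    return v, policy
```

With these numbers the optimal policy waits in the good state and repairs once the system is worn or failed, which is the kind of threshold rule the decision support tool searches for.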
20211216Offshore & Dredging Engineering)uuid:b10a0d0039494122a3db6996d5596afbDhttp://resolver.tudelft.nl/uuid:b10a0d0039494122a3db6996d5596afbSupporting MDO through dynamic workflow (re)generationAugustinus, R.Hoogreef, M.F.M. (mentor)Advancements in computing technology have enabled designers to perform more thorough and more detailed design studies. Multidisciplinary Design Optimization (MDO) architectures provide users with guidelines on how to structure their MDO problem, including the linking of disciplines and how to perform the optimization. However, complex MDO problems can consist of tens of disciplines and hundreds of design variables, so the setup of these problems can be complex and time-consuming. In an attempt to reduce the time required for, and the complexity of, this setup, the main goal of this thesis is "to develop and demonstrate a methodology for automatic workflow (re)generation to support MDO". The method to fulfill these requirements consists of three main steps. The first is the automatic generation of micro-workflows: workflows representing the different disciplines of the problem. The user needs to specify the inputs, outputs and operations, after which the workflows are automatically generated. The second step involves the automatic storage of workflows. Workflows are stored in a graph database, allowing the addition of semantics to the data. Adding semantics allows a reasoner to understand what the data means, enabling the inference of data not explicitly defined. OWL (Web Ontology Language) ontologies are used to supply structure to the workflow data and to add semantics. In addition, materialization scripts are present to regenerate stored workflows. The final step of the implementation involves the automatic generation of simulation workflows according to different MDO architectures. 
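Ordering disciplines so that each one runs after the disciplines producing its inputs is, at its core, a topological sort of the coupling graph. A sketch with hypothetical discipline names; the thesis instead materializes workflows from a graph database with OWL semantics:

```python
from collections import deque

# Kahn's algorithm over a discipline coupling graph: each discipline maps to
# the set of disciplines it consumes data from. Discipline names are made up.

def topo_order(deps):
    indegree = {d: len(pre) for d, pre in deps.items()}
    consumers = {d: [] for d in deps}
    for d, pre in deps.items():
        for p in pre:
            consumers[p].append(d)
    ready = deque(sorted(d for d, k in indegree.items() if k == 0))
    order = []
    while ready:
        d = ready.popleft()
        order.append(d)
        for c in consumers[d]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    if len(order) != len(deps):
        raise ValueError("coupling graph has a feedback loop (cycle)")
    return order

workflow = topo_order({
    "geometry": set(),
    "aerodynamics": {"geometry"},
    "structures": {"geometry", "aerodynamics"},
    "performance": {"aerodynamics", "structures"},
})
```

A feedback coupling (a cycle) means no pure execution order exists, which is exactly the situation MDO architectures resolve with iteration between disciplines.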
This generation involves the materialization and adjustment of micro-workflows and the creation of a higher-level workflow that links the disciplines and performs the optimization. The implementation of the automatic architecture generation has been validated using three case studies of varying complexity, number of disciplines and discipline couplings. These case studies have shown a reduction of 93% to 98% in time spent on the generation of simulation workflows representing the problem using an MDO architecture. In addition, the approach reduces the required user expertise and minimizes the amount of information the user needs to provide.automation; MDO; MDO architectures; simulation workflows; optimization; PIDO!Flight Performance and Propulsion)uuid:87dc296d57f44506829b2c1d33982e15Dhttp://resolver.tudelft.nl/uuid:87dc296d57f44506829b2c1d33982e15Fast MPC Solvers for Systems with Hard Real-Time ConstraintsZhang, X.Keviczky, T. (mentor); Ferranti, L. (mentor)Model predictive control (MPC) is an advanced control technique that offers an elegant framework to solve a wide range of control problems (regulation, tracking, supervision, etc.) and handle constraints on the plant. The control objectives and constraints are usually formulated as an optimization problem that the MPC controller has to solve (either offline or online) to return the control command for the plant. This master thesis proposes a novel primal-dual interior-point (PDIP) method for solving quadratic programming problems with linear inequality constraints of the kind that typically arise in MPC applications. Convergence of the PDIP method is studied in both the primal and the dual framework. We show that the solver converges quadratically to a suboptimal solution of the MPC problem. PDIP solvers rely on two phases: the damped and the pure Newton phases. 
Compared to state-of-the-art PDIP methods, this new solver replaces the initial (linearly convergent) damped Newton phase (usually used to compute a medium-accuracy solution) with a dual solver based on Nesterov's fast gradient scheme (DFG) that converges superlinearly to a medium-accuracy solution. The switching strategy to the pure Newton phase, compared to the state of the art, is computed in the dual space to exploit the dual information provided by the DFG in the first phase. Removing the damped Newton phase has the additional advantage that the solver saves the computational effort required by backtracking line search. The effectiveness of the proposed solver is demonstrated by simulating it on a 2-dimensional discrete-time unstable system.optimization; predictive control; model-based control; suboptimal control
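The fast gradient scheme underlying the DFG phase can be illustrated on a toy box-constrained quadratic program. This is a primal sketch of Nesterov's acceleration with projection, not the dual MPC solver itself:

```python
# Projected fast gradient method (Nesterov acceleration) on a toy
# box-constrained quadratic program: minimize 0.5*x^2 - x subject to
# 0 <= x <= 0.5. The unconstrained minimizer is 1, so the constrained
# optimum sits on the upper bound at x = 0.5.

def project(x, lo=0.0, hi=0.5):
    return max(lo, min(hi, x))

def fast_gradient(iters=100, L=1.0):
    grad = lambda x: x - 1.0                      # gradient of 0.5*x^2 - x
    x_prev, y, t = 0.0, 0.0, 1.0
    for _ in range(iters):
        x = project(y - grad(y) / L)              # gradient step + projection
        t_next = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
        y = x + ((t - 1.0) / t_next) * (x - x_prev)  # momentum extrapolation
        x_prev, t = x, t_next
    return x_prev
```

The momentum sequence `t` is the standard FISTA-style schedule; for strongly convex quadratics the iterates reach the active bound almost immediately.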
20161202)uuid:5849d327fa7a4591a4680368b2713374Shading design workflow for architectural designersLópez Ponce de León, L.E.Turrin, M. (mentor); Van den Ham, E.R. (mentor)building technology; computational design; climate design; optimization; virtual reality; workflow
20161104&Architecture and The Built EnvironmentBuilding Technology)uuid:d059fea6286149b4ae365d31db109231Dhttp://resolver.tudelft.nl/uuid:d059fea6286149b4ae365d31db109231Density Tapering for Sparse Planar Spiral Antenna ArraysKeijsers, J.G.M.Yarovyi, O. (mentor)Increasing demands for mobile internet access have led to exponential developments in mobile communications technologies. The next generation of mobile technology is expected to exploit electronic beam steering and to have a higher operating frequency to facilitate a higher bandwidth. This places a heavy burden on the base station antenna arrays, which should be sparse to allow passive cooling of the system. Conventional sparse array topologies suffer from undesirable radiation pattern characteristics such as grating lobes. Therefore, this work focused on exploring methods to synthesize the antenna elements' geometrical parameters to enhance the radiation pattern, and on the limitations that arise due to the array's sparseness. To this end, both a deterministic and a stochastic method were proposed. Starting with an analytical window function as a continuous current distribution and approximating this by adjusting the antenna elements' radial coordinates means that the desired window's radiation pattern is only approximated in a limited field of view, depending on the sparseness. Full electromagnetic wave simulations are performed to show that downscaling the topology to make it more dense gives rise to increased coupling effects that deteriorate the array's performance. In addition to the deterministic method, a genetic algorithm optimization method is employed to stochastically obtain the optimal current distribution window. Approximating the optimal continuous current distribution again leads to the array factor following the optimal window's radiation pattern in a limited field of view. 
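The array factor behaviour described above can be reproduced for a linear array: a sparse (thinned) layout exhibits grating lobes that a dense half-wavelength layout does not. Illustrative geometry and a crude main-beam exclusion:

```python
import cmath
import math

# Array factor of a linear array: AF(theta) = sum_n exp(j*k*x_n*sin(theta)),
# with element positions x_n in wavelengths. Comparing a dense lambda/2 grid
# with a 1.5-lambda sparse grid shows the grating-lobe penalty of sparseness.

def array_factor_db(positions_wl, theta):
    k = 2.0 * math.pi   # wavenumber times wavelength
    af = sum(cmath.exp(1j * k * x * math.sin(theta)) for x in positions_wl)
    return 20.0 * math.log10(max(abs(af), 1e-12) / len(positions_wl))

def peak_sidelobe_db(positions_wl, n_samples=2000):
    worst = -1e9
    for i in range(n_samples):
        theta = -math.pi / 2 + math.pi * i / (n_samples - 1)
        if abs(theta) > 0.26:   # crude exclusion of the broadside main beam
            worst = max(worst, array_factor_db(positions_wl, theta))
    return worst
```

An 8-element half-wavelength array shows the familiar roughly -13 dB peak sidelobe, while spacing the same 8 elements 1.5 wavelengths apart produces a grating lobe at full (0 dB) level, the effect density tapering tries to mitigate.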
Furthermore, it is shown that, for the conditions used in this work, the optimum continuous current distribution is also the optimum current distribution for finite element arrays, implying that only one optimization needs to be executed when designing such an array. In conclusion, the applicability of density tapering to sparse arrays is limited. The inherent undersampling causes a limited realization of the window function's characteristics. Density tapering does improve the absolute performance of a sparse array in terms of peak sidelobe level, and may be useful if the region of interest is concentrated near the main beam. The requirements, and in particular the region of interest, of the application determine whether density tapering can be effectively employed.antenna array; sparse array; density tapering; space tapering; optimization; genetic algorithm; feko; planar; spiral; sunflower; mutual couplingMicroelectronicsMicrowave Sensing, Signals & Systems / track: Telecommunications & Sensing Systems)uuid:e9b513c8751b45a79d99a51177c918a2Dhttp://resolver.tudelft.nl/uuid:e9b513c8751b45a79d99a51177c918a2Optimizing truck driver schedules with dependent working shifts, drivers' legislation, and multiple time windowsVan Alphen, M.N.van Essen, J.T. (mentor); Aardal, K.I. (mentor); Haneyah, S. (mentor)In logistics, resource expenses can be minimized by minimizing the total number of hours that each truck driver has to work. This sum of working hours is referred to as the total schedule duration for all truck drivers. Minimizing this total schedule duration is the main goal of the optimization problem that we consider. The considered minimization problem is called the Total Schedule Duration with Dependent resource, Multiple Time Windows and European drivers' legislation problem, i.e., the TSDDMTWEU problem. A literature study of this TSDDMTWEU problem is given. 
Different solution approaches and Mixed Integer Linear Programs (MILPs) are discussed within the scope of our project. We compose a model for the TSDDMTWEU problem by giving a MILP, based on a model by Kopfer and Meyer (2008). Two different modeling approaches are suggested and assessed on their performance. Furthermore, we prove that the TSDDMTWEU problem is NP-hard, and, to conclude, a heuristic is evaluated with respect to its performance in objective values. Our main research contributions are the following three aspects. First, a MILP is given for the complete European drivers' legislation. All extensions in the legislation regarding a single truck driver are included. Second, knowledge is gained on the influence of dependent truck drivers on a Total Schedule Duration problem. And finally, we prove that adding the complete European drivers' legislation to a problem results in an NP-hard problem.NP-hard; MILP; optimization; European drivers' legislation; dependent resources; schedule duration; multiple time windows; heuristic; complexity)uuid:c7baa01feb374bf1aceb3f58c575bdd1Dhttp://resolver.tudelft.nl/uuid:c7baa01feb374bf1aceb3f58c575bdd1Parallel Approach to Derivative-Free Optimization: Implementing the DONE Algorithm on a GPUMunnix, J.H.T.Verhaegen, M. (mentor)Researchers at Delft University of Technology have recently developed an algorithm for optimizing noisy, expensive and possibly non-convex objective functions for which no derivatives are available. The data-based online nonlinear extremum-seeker (DONE) was originally developed for sensorless wavefront aberration correction in optical coherence tomography (OCT) and for optical beam forming network (OBFN) tuning. In order to make the DONE algorithm suitable for large-scale problems, a parallel implementation using a graphics processing unit (GPU) is considered. 
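The surrogate idea behind DONE, fitting a smooth model to noisy evaluations by linear least squares and then optimizing the model instead of the noisy objective, can be sketched in 1D with a quadratic basis (DONE itself uses random Fourier expansions and recursive least squares):

```python
import random

# Fit a quadratic surrogate a*x^2 + b*x + c to noisy samples of a 1D
# objective by solving the normal equations, then minimize the surrogate.
# The objective (x - 1)^2 plus Gaussian noise is invented for illustration.

def fit_quadratic(xs, ys):
    rows = [[x * x, x, 1.0] for x in xs]                 # basis [x^2, x, 1]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Gauss-Jordan elimination on the augmented 3x3 normal equations.
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [u - f * v for u, v in zip(m[r], m[col])]
    return tuple(m[i][3] / m[i][i] for i in range(3))

rng = random.Random(7)
xs = [i / 10.0 for i in range(-20, 21)]
ys = [(x - 1.0) ** 2 + rng.gauss(0.0, 0.1) for x in xs]  # noisy objective
a, b, c = fit_quadratic(xs, ys)
x_min = -b / (2.0 * a)   # minimizer of the smooth surrogate
```

Despite the noise on every sample, the surrogate's minimizer lands close to the true minimizer at x = 1, which is why optimizing a cheap smooth model is attractive for noisy, derivative-free problems.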
This master thesis aims to develop such a parallel implementation which performs faster than the existing sequential implementation without much change in obtained accuracy. Since OBFN tuning is a problem that may involve a large number of parameters, an OBFN simulation is to be used to compare the parallel implementation to the sequential implementation. The key step of the DONE algorithm is solving a regularized linear least-squares problem in order to construct a smooth and low-cost surrogate function which does provide derivatives and can be optimized fairly easily. This master thesis first discusses the basics of parallel computing, after which several linear least-squares methods and several numerical optimization methods are investigated. These methods are compared, and the most suitable methods for parallel computing are implemented and tested for increasing dimensions. The final parallel DONE implementation combines the recursive least-squares (RLS) method with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and optimizes the large-scale OBFN simulation almost twice as fast as the sequential DONE implementation, without much change in obtained accuracy. derivative-free; optimization; numerical; algorithm; linear; least-squares; random; fourier; expansion; rfe; data-based; online; nonlinear; extremum-seeker; done; parallel; parallelization; graphics; processing; unit; gpu; compute; unified; device; architecture; cuda uuid:4cf2c369b7b94db899da9ab5b436f64e http://resolver.tudelft.nl/uuid:4cf2c369b7b94db899da9ab5b436f64e Solving the open day timetabling problem using integer linear programming
Gecmen, D. Van den Berg, P.L. (mentor) The timetabling problem belongs to a large class of problems that fall under the mathematical field of scheduling. A widely used method to solve timetabling problems is integer linear programming, which we have used to solve the open day timetabling problem. In this thesis we have addressed a timetabling problem for the open day at the Christian College Groevenbeek. This open day consists of two separate day parts: the morning and the afternoon. For both day parts we have received a data set. These data sets consist of students and their preferred studies. To solve the problem we created several mathematical models formulated as an ILP. After that, we implemented these models in AIMMS to solve them for the data sets we have received. The first model we created was the feasibility model. We used this model to determine the appropriate number of lecture halls and lecture hall capacities when we have 4 rounds in both day parts. To achieve 4 rounds in both day parts we found that the best combination is 19 lecture halls with a capacity of 30 and 1 lecture hall with a capacity of 40. In addition, objective functions were considered to obtain a good-quality schedule. The first objective was to minimize the number of presentations. The second objective was to minimize the total workload. The workload of a teacher is the total time a teacher is present at the college, which consists of the number of presentations he has to give and the number of gaps he has in his schedule. A gap is a round where a teacher is not scheduled to give a presentation, but has to be present. We added each objective and its corresponding constraints to the feasibility model. After applying both models to the data sets we concluded that the second objective resulted in a better schedule, as it achieves the theoretical minimum number of presentations and creates zero gaps in the schedules for both day parts. 
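One plausible way to write the core of such an open-day ILP is sketched below; the index names and constraint set are assumptions, not the thesis's actual formulation. Here x_{srh} = 1 if study s is presented in round r in hall h, y_{ksrh} = 1 if student k attends it, S_k is student k's set of preferred studies, and C_h is the capacity of hall h:

```latex
\begin{aligned}
\min\ & \sum_{s,r,h} x_{srh} && \text{(number of presentations)} \\
\text{s.t.}\ & \sum_{r,h} y_{ksrh} = 1 && \forall k,\ \forall s \in S_k
  && \text{(each preferred study attended once)} \\
& \sum_{s,h} y_{ksrh} \le 1 && \forall k,\ \forall r
  && \text{(one hall per student per round)} \\
& \sum_{k} y_{ksrh} \le C_h\, x_{srh} && \forall s,r,h
  && \text{(capacity; attendance only if scheduled)} \\
& x_{srh},\ y_{ksrh} \in \{0,1\}
\end{aligned}
```

The workload objective mentioned in the record would replace the objective above with a sum of each teacher's presentations plus gap rounds, keeping the same constraint structure.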
When we combined the schedules for both day parts, this did not result in a good schedule, as some studies still contained gaps between the two day parts. To improve this we minimized the total workload for both data sets combined. scheduling problem; integer linear programming; optimization uuid:92868b2d87c44e0b9761698dd54f02f9 http://resolver.tudelft.nl/uuid:92868b2d87c44e0b9761698dd54f02f9 Cruise Performance Optimization of the Airbus A320 through Flap Morphing
Orlita, M. Vos, R. (mentor) In an era of increasing aviation traffic, the conditions are right to promote the design of ambitious concepts. At Fokker Aerostructures attention is drawn to smooth in-flight shape morphing to produce a structurally functional Variable Camber Trailing Edge Flap (VCTEF). The deployment mechanism would fit into the flap, not limiting other functionality such as Fowler motion, while at the same time allowing small camber variations during cruise. This is based on the assumption that such morphing will bring performance improvements which are commercially interesting. The main goal of this research was therefore to predict these performance benefits, and thus the applicability, for a specific case: the Airbus A320 aircraft in cruise flight. This aircraft is large enough to accommodate the technology, it is operated in great numbers, and cruise is the most fuel-demanding part of its mission. Since the concept is in the development phase, the further task is to determine the morphing design setup which performs best. The amount of morphing is driven by a circular reference function, which is added to the base geometry at any desired streamwise cut of the wing by manipulation of the airfoil coordinates, as seen on the cover. The design is specified by the points on the airfoil upper surface where the morphing begins and ends: the boundaries of the morphing region where upper-surface bending is allowed. As also found in other literature, it is shown that morphing can bring drag reduction for a section, the wing and the complete aircraft. This varies throughout the cruise, which is translated into more sophisticated performance indicators for comparison and evaluation of the benefits. The first indicator is the increase of range over the design mission for the given aircraft. 
The second and third are the fuel savings, which can be obtained either by increasing the cruise-end weight or by decreasing the cruise-beginning weight, both by the amount of the saved fuel while keeping the aircraft range constant. In order to evaluate these indicators, the Breguet range equation is used in a discretized form, utilizing an interpolated lift-to-drag ratio determined by aerodynamic analysis at 7 cruise points. This was done using both the 2D solver MSES and a quasi-3D tool Q3D developed at TU Delft, comprising MSES and the AVL vortex lattice solver. For the analysis a complete A320 model is required, which was not available and was created from known performance data and partially assumed geometry. The unknown wing geometry was optimized with respect to the mid-cruise drag, simulating an already efficient aircraft, as suggested by literature. Other model components were the horizontal stabilizer, fuselage and center of gravity position, allowing trim at the reference cruise points and obtaining the lift requirements for the wing and a representative section. Under these lift requirements the 2D and 3D analyses were performed at the individual cruise points to obtain improved lift-to-drag ratios, which could then be used to evaluate the range improvement. It was found that with morphing in 2D the drag reduction can amount to up to 9% at the beginning of cruise, but it parabolically decreases towards mid-cruise, after which it remains below 0.5%. This is primarily due to manipulation of the shockwave and the boundary layer at the given lift requirements, which is most dominant at high cruise lift coefficients. Since the induced drag was found to be unaffected by the assumed morphing, such improvements are further scaled down when evaluated for the entire wing, and even further from the aircraft point of view, resulting in a range improvement on the order of 20 km and fuel savings of below 0.5% of trip fuel. 
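The discretized Breguet evaluation described above can be written out as follows. This is the standard jet-aircraft form of the equation; the exact segment bookkeeping and symbols used in the thesis are not given in the abstract, so the notation here (V cruise speed, c_T thrust-specific fuel consumption, W_i the aircraft weight at the start of cruise segment i) is an assumption:

```latex
R \;=\; \frac{V}{c_T}\,\frac{L}{D}\,\ln\frac{W_{\mathrm{begin}}}{W_{\mathrm{end}}}
\qquad\Longrightarrow\qquad
R \;\approx\; \sum_{i=1}^{N}\frac{V}{c_T}\left(\frac{L}{D}\right)_{\!i}\,
\ln\frac{W_i}{W_{i+1}}
```

Discretizing per segment lets the interpolated lift-to-drag ratio from the 7 analysed cruise points vary along the cruise instead of assuming a single constant L/D.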
A sensitivity analysis on the design variables has shown that these performance benefits are only weakly sensitive to the size of the morphing region and that very aft-located regions are the most beneficial, suggesting that a small tab at the trailing edge might be a better and easier solution. In view of these results the smooth morphing concept is deemed not applicable for the cruise of short-range aircraft such as the A320. However, given the parabolic behaviour of the drag improvements, larger potential can be expected for long-range aircraft, which is the main resulting recommendation of the conducted research. Furthermore, it cannot be excluded that other regimes could benefit more from the morphing concept, such as high-lift, which would probably require wind-tunnel testing, as discussed in the final Appendix of this work. morphing; camber; transonic; drag; optimization; cruise Aerodynamics, Wind Energy & Propulsion uuid:eb4a8dd4e02448d797844bbecbebe1f1 http://resolver.tudelft.nl/uuid:eb4a8dd4e02448d797844bbecbebe1f1 The Heston model with Term Structure: Option Pricing and Calibration van der Zwaard, T. Oosterlee, C.W. (mentor); du Toit, J. (mentor) This thesis addresses the calibration of the Heston model with term structure (i.e. with piecewise constant parameters) to a set of European option prices from the FX market. Several option pricing methods are discussed and compared, among which the COS method, Lewis' method and the Andersen QE Monte Carlo scheme. Several modifications are proposed in order to improve the practical usability of the COS method in terms of speed, accuracy and robustness. The calibration of the Heston model with term structure is chosen as a benchmarking test case for comparing several optimization techniques, both open-source and from licensed products. The performance of the optimizers is measured in terms of calibration speed. 
In addition, a simple hedge test using the calibrated model is used as a secondary performance metric. The combined effort of finding the fastest optimization techniques and the fastest pricing method has the potential of speeding up daily FX calibrations performed in many financial institutions. option pricing; foreign exchange (FX) market; COS method; Heston model with term structure; calibration; optimization; benchmarking; hedging
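For reference, the Heston dynamics with term structure that this record calibrates can be written as below. The FX drift convention and symbol choices (domestic and foreign rates r_d, r_f, mean-reversion speed κ, long-run variance v̄, vol-of-vol γ, correlation ρ) are assumptions; the defining feature is that the parameters are piecewise constant between consecutive calibration maturities:

```latex
\begin{aligned}
dS_t &= (r_d - r_f)\,S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^S, \\
dv_t &= \kappa(t)\bigl(\bar{v}(t) - v_t\bigr)\,dt + \gamma(t)\sqrt{v_t}\,dW_t^v, \\
dW_t^S\,dW_t^v &= \rho(t)\,dt,
\end{aligned}
\qquad \kappa,\bar{v},\gamma,\rho \ \text{piecewise constant in } t.
```

Calibrating maturity by maturity, each parameter set is fitted on its own interval while keeping the earlier intervals fixed, which is what makes fast repeated pricing (e.g. via the COS method) essential.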
2021-08-19 Numerical Analysis uuid:698e7ac823ac4c13831bf9125838ff1c http://resolver.tudelft.nl/uuid:698e7ac823ac4c13831bf9125838ff1c Multiple-Phase Trajectory Optimization for Formation Flight in Civil Aviation van Hellenberg Hubar, M.E.G. Visser, H.G. (mentor) A tool is developed that is able to optimize the trajectories of multiple aircraft that fly in formation in order to obtain the minimum total fuel consumption. Several experiments are conducted to investigate the benefits of formation flight for commercial aircraft. Finally, the influences of wind and delay on the trajectories of the aircraft that join the formation are also examined. formation flight; civil aviation; GPOPS; multiple phases; multiple aircraft; optimization; trajectory optimization; minimum fuel burn; minimum Direct Operating Costs; multiple-phase trajectory optimization Control and Operations Aerospace Transport & Operations uuid:446a43d3f7294392b72d8f68737e5a64 http://resolver.tudelft.nl/uuid:446a43d3f7294392b72d8f68737e5a64 A Computational Intelligence Approach to Voltage and Power Control in HVMTDC Grids Agbemuko, A.J.K. van der Meijden, M.A.M.M. (mentor); Popov, M. (mentor); Ndreko, M. (mentor) An ever-increasing interest in renewable energy sources (RES), aimed at reducing global carbon emissions and alleviating problems posed by climate change, has led to a dramatic increase particularly in offshore wind energy projects. Several projects are currently being executed and many more are planned for the future. The locations of wind power plants are moving farther away from shore, bringing both challenges and opportunities. With the increased distance to shore, HVAC (High Voltage Alternating Current) transmission is becoming infeasible as a result of the increase in cable charging currents with distance. As such, HVDC (High Voltage Direct Current) transmission is becoming the only alternative to transmit power from far-offshore wind power plants to shore. 
There are two ways of doing this: point-to-point connection and multi-terminal connection. Multi-terminal connection offers a dramatic improvement in flexibility, security and reliability of power supply and, with current advances in power-electronic control, much more. Thus, a multi-terminal grid connection is the subject of this thesis. The most important issue for multi-terminal grids has been their controllability. The two most important controllable parameters in multi-terminal grids are voltage and power, with voltage being the more important, as IGBT (insulated gate bipolar transistor) switches are still very sensitive devices. Besides, controlling voltage entails controlling power, as they are dependent on each other. This thesis proposes a new computation-based control philosophy for the direct voltage and active power control of a VSC (Voltage Source Converter) based multi-terminal offshore DC grid. The limitations of the classical control strategies for HVMTDC (High Voltage Multi-terminal Direct Current) grids were studied, in particular the direct voltage droop control strategy. The main drawbacks of classical voltage droop control are its difficulty in reaching power reference set points and its failure to ensure a minimum loss profile in the event of contingencies. The proposed strategy addresses these weaknesses by combining the advantages of the droop controller, such as robustness and an exceptional ability to compensate for imbalance during contingencies, with the advantages of the constant active power controller, which can easily reach power set points. Thus, the direct voltage droop control strategy and the constant active power control strategy were combined, simultaneously solving the drawbacks of each. The advantages of the new fuzzy controller are the reduced computational effort, the high degree of flexibility, the limited influence of topology or grid size, and the near-zero percentage error. 
The control strategy is demonstrated by means of time-domain simulations for a three-terminal VSC-based offshore HVDC grid system used for the grid connection of large offshore wind power plants. Furthermore, a high-level controller in the form of an optimal dispatcher was implemented using a genetic algorithm (GA) optimizer to form a complete hierarchical control system: the fuzzy-based strategy is used at the local layer and an optimizer/scheduler at the upper layer. The optimizer optimizes for losses and provides optimal reference set points to the fuzzy-based controllers. It also regularly checks the available wind power and, when it changes, defines new set points. It uses the new information on generated wind power to recalculate set points and return the nodal power of all fixed terminals to pre-disturbance levels. However, results of the GA optimizer in comparison with the traditional Newton-Raphson method do not show the considerable improvement in reducing losses that was expected, and this was confirmed by similar work reported in literature. Finally, simulation results are presented to demonstrate the capabilities of the proposed control strategy in meeting all the design objectives: no deviation in power or voltage from the references, and no influence of topology, configuration, or size. Hence, there is no need for a secondary corrective action to alleviate deviations. HVDC; VSC; MTDC; knowledge-based control; fuzzy control; genetic algorithm; transmission system; offshore grids; HVMTDC; hierarchical control; optimization
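The classical droop characteristic that this record combines with constant-power control can be summarized as follows. The sign and gain conventions vary between authors, so the form below is one common convention rather than the thesis's exact definition:

```latex
P_i \;=\; P_i^{\mathrm{ref}} \;-\; \frac{1}{k_i}\,
\bigl(V_{\mathrm{dc},i} - V_{\mathrm{dc},i}^{\mathrm{ref}}\bigr)
```

Here k_i is the droop constant of converter i: as k_i → ∞ the terminal behaves as a constant active power controller (power pinned at its set point), while as k_i → 0 it behaves as a stiff direct-voltage controller. The fuzzy strategy in this record can be read as an adaptive blend between these two limits.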
2020-07-11 Electrical Sustainable Energy (ESE) Intelligent Electrical Power Grids uuid:cca2c63c29c64e1c88f5f2600bcedbce http://resolver.tudelft.nl/uuid:cca2c63c29c64e1c88f5f2600bcedbce Model-based optimization of drilling fluid density and viscosity Roijmans, R.F.H. Jansen, J.D. (mentor) Optimization of drilling fluid properties is an essential part of cost-effective drilling operations and process safety. Currently, fluid properties are measured and optimized manually by human engineers with different skills and experience, which might lead to non-optimum drilling fluid properties that deteriorate their functionality. Automated drilling fluid management is still at an early development stage. Several vendors are actively developing automated skids to measure drilling fluid properties in real time [1] [2], and several authors have also published scientific work on the use of real-time measurements as a component of automated control systems that dose mud additives automatically to meet the mud specifications or set points defined by human engineers [3] [4]. During the well planning stage, the design of mud specifications is carried out by engineers checking several scenarios using well planning software and their experience. When hole cleaning and/or borehole stability conditions change during the actual drilling process such that updates of the drilling fluid properties are warranted, the specifications are updated in an ad-hoc manner, relying on the skills of human engineers. This thesis focuses on the development of a model-based optimization module for drilling fluid properties to help engineers in the planning and drilling phases automatically derive drilling fluid specifications that meet the hole cleaning criteria and satisfy the downhole pressure requirement and the constraints set on the operating ranges of drilling parameters. 
The optimization framework will use proxy models derived from well hydraulics software that predicts cuttings concentration and downhole pressure as a function of the drilling fluid properties. Three objective functions for the optimization module are given as examples in this thesis. The first two objective functions deal with the hole cleaning criteria, while the last one is a cost function that combines the cost of hole cleaning and downhole pressure management. The optimization module has been tested on a case study based on real field data. Given an objective function, multiple constraints, and proxy models, the module takes only a few seconds to find the optimum mud property values and drilling parameters such as flow rates, rotary speed and rate of penetration. A benchmark with the field data shows that the optimum drilling fluid properties and parameters result in significant improvement of the hole cleaning state, while the downhole pressure requirement and the constraints on the drilling parameters can still be satisfied. When a cost function is defined as a combination of hole cleaning and downhole pressure management, the module also gives a quantified benefit of the trade-off between maximizing hole cleaning and minimizing losses. Since this module can perform optimization very efficiently compared to the ad-hoc processes done by human engineers, it may be of significant value for operating units in the planning and drilling phases, and in the future as an outer optimization loop for automatic drilling fluid control systems. drilling fluid; automation; optimization; density; viscosity
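The proxy-model idea can be sketched with a constrained grid search. Everything below is a hypothetical stand-in: the thesis derives its proxies from well-hydraulics software, whereas the functional forms, coefficients, pressure window and parameter ranges here are invented purely to illustrate the structure "optimize mud properties and drilling parameters over cheap proxies subject to a downhole pressure requirement".

```python
from itertools import product

# Hypothetical proxy models (illustrative coefficients, not field data).
def cuttings_conc(rho, mu, q):
    """Proxy: cuttings concentration (%), decreasing in density, viscosity, flow rate."""
    return max(0.0, 12.0 - 0.004 * (rho - 1000.0) - 0.08 * mu - 0.003 * q)

def downhole_pressure(rho, q):
    """Proxy: downhole pressure (bar), increasing in mud density and flow rate."""
    return 0.02 * rho + 0.005 * q

P_MIN, P_MAX = 30.0, 34.5  # assumed allowable downhole pressure window (bar)

best = None
for rho, mu, q in product(range(1000, 1401, 50),    # mud density (kg/m3)
                          range(10, 41, 5),         # plastic viscosity (mPa.s)
                          range(1000, 2001, 100)):  # flow rate (l/min)
    if not P_MIN <= downhole_pressure(rho, q) <= P_MAX:
        continue  # downhole pressure requirement violated
    cost = cuttings_conc(rho, mu, q)  # hole-cleaning objective only
    if best is None or cost < best[0]:
        best = (cost, rho, mu, q)

print(best)
```

Because the proxies are algebraic, even exhaustive search over a coarse grid is instantaneous, which mirrors the record's point that the module answers in seconds where manual workflows take much longer; the thesis's actual module additionally handles the combined hole-cleaning/pressure cost function.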
2017-01-06 Civil Engineering and Geosciences Geoscience & Engineering Petroleum Engineering uuid:6fe5df50ff934624a50c37fb6331eedf http://resolver.tudelft.nl/uuid:6fe5df50ff934624a50c37fb6331eedf Tailored SID & Profile Allocation for Amsterdam Airport Schiphol
Ceulemans, B. Visser, H.G. (mentor); Roling, P.C. (mentor) Currently, only one Standard Instrument Departure (SID) track and one flight procedure is used per runway-departure fix combination. In contrast to tailored arrivals, the potential benefit of tailored departures has been left relatively unexplored. The research objective is to quantify the potential benefit of tailored SIDs and profile allocation for Amsterdam Airport Schiphol by developing a model that is capable of simulating departure trajectories per runway-departure fix and optimizing the overall allocation of departing aircraft for noise and fuel consumption. The proposed methodology includes a two-step modelling framework. The two models involve the design of novel tailored departure trajectories using a multi-objective genetic algorithm and the computation of optimal flight allocation by means of Mixed Integer Linear Programming (MILP). A case study is presented and serves as proof of concept. allocation; capacity; trajectory optimization; linear programming; MILP; tailored departures; Schiphol; airport; departures; fuel; noise; optimization; multi-objective genetic algorithm uuid:c5e0bc71db3a4619a658b0a773f45904 http://resolver.tudelft.nl/uuid:c5e0bc71db3a4619a658b0a773f45904 Exploration of optimal orbits in the strongly perturbed environment of the 2001 SN263 triple asteroid system Obrecht, G. Doornbos, E.N. (mentor); Cowan, K. (mentor) For the past 20 years, the small bodies of the solar system, such as asteroids and comets, have been increasingly gathering the interest of scientists and space agencies. The latter have been multiplying the number of space missions to study them. Brazil does not want to be left out and has been working on its own mission, ASTER, which has the particularity of having a triple asteroid system as its target. 
Although adding great scientific interest to the mission, this characteristic considerably complicates the mission design by making the space probe move in a complex gravitational field and subjecting it to very strong perturbing forces. Following past research on the ASTER mission, which mostly dealt with the characterisation of the 2001 SN263 asteroid system, this work focuses on the preliminary design of mission orbits suitable for the exploration of the asteroids. Two phases of the mission are considered: the arrival in the system, which requires a parking orbit, and the exploration phase. For the latter, two scenarios are studied: parallel and sequential observation of the system. To find the optimal orbits for each of these cases, a computer tool has been designed, which comprises an orbit integrator able to propagate the trajectory of a spacecraft within the asteroid system, and an optimiser which uses evolutionary algorithms to find optima in a 5-dimensional search space with a single- or multi-dimensional objective space, according to objective functions that can be chosen and adapted to match the case considered. The computer tool performs well for all cases and allows general conclusions to be drawn on which kinds of orbits to consider for the ASTER mission. The results show that the solar radiation pressure is by far the most problematic perturbation and hence drives the properties of the solutions. Among all cases, many optima are terminator orbits, which are by nature robust against solar radiation perturbations. Moreover, orbits closer to the bodies are more stable, and any trajectory too distant from the bodies will be blown away. This work concludes on the suitability of the selected optimisation methods for the orbit design of this mission, although it is advised to further improve the software to model the dynamics of the system in more detail, and on recommendations for the ASTER mission. 
No satisfying parking orbit has been found, and the relative strength of the solar radiation pressure implies that no orbits exist that are sufficiently remote from the bodies to serve as parking orbits. It is recommended to investigate other solutions with active orbit maintenance. As for the exploration phase, the sequential observation scheme shows its superiority. Satisfying observation orbits can be found about all three bodies, which is not the case for parallel observation because of the zones of instability present in between the bodies. orbit; optimization; asteroids; triple asteroid; perturbations; four body problem; ASTER Astrodynamics and Space Missions uuid:a91c3cf801a24747aaeaf56367256905 http://resolver.tudelft.nl/uuid:a91c3cf801a24747aaeaf56367256905 Artificial neural networks for determining the optimal process conditions of a gasification process Vellekoop, E.C. De Jong, W. (mentor); Winkel, R. (mentor) artificial neural networks; pyrolysis; gasification; biomass; optimization; process conditions
2020-04-22 Process & Energy uuid:13fd72f6946b4fc1bcfdee1d737abe85 http://resolver.tudelft.nl/uuid:13fd72f6946b4fc1bcfdee1d737abe85 Robust fleet planning under stochastic demand
Sa, C.A.A. Santos, B.F. (mentor); Clarke, J.P. (mentor) The research objective of this thesis is to develop an innovative airline fleet planning concept that is capable of considering the long-term stochastic nature of air travel demand while generating meaningful results in reasonable computation times. The proposed methodology aims to identify robust fleets, in terms of profit-generating capability across a long-term planning horizon under stochastic demand, through the adoption of a portfolio of fleets (each of different size or composition) and a three-step modeling framework. The three models involve the simulation and sampling of stochastic demand using the mean-reverting Ornstein-Uhlenbeck process, iteration over an optimization model that optimally allocates each fleet from the portfolio given the demand sample values, and a scenario generation model that generates scenarios across the planning horizon. A case study is presented and serves as proof of concept. airline fleet planning; stochastic demand; optimization Air Transport and Operations uuid:23623188d98749eb883bea52e15f7842 http://resolver.tudelft.nl/uuid:23623188d98749eb883bea52e15f7842 Flexible Arrival & Departure Runway Allocation Using Mixed-Integer Linear Programming: A Schiphol Airport Case Study Delsen, J.G. Runway capacity of a complex runway system can be limited by several factors. Currently, the runway usage at Amsterdam Airport Schiphol (AAS) is described by a preference list established by multiple stakeholders. It makes an important trade-off between minimizing noise exposure to the environment and maximizing capacity. The existing model does not take into account fuel burn and the ensuing emissions for the current and future demand in flights. This study tries to address this issue. A model has been developed using Mixed-Integer Linear Programming (MILP) by which flights can be allocated to runways while optimizing for fuel and noise. 
The research addresses the following research question: Can fuel burn be significantly reduced for aircraft operating at Amsterdam Airport by utilizing a novel flexible arrival and departure runway allocation model, using a predefined set of variables and rules, accounting for noise annoyance, runway capacity and the current and future demand of flights? The runway allocation model developed for this study is able to assign aircraft to runways based upon an optimization trade-off between fuel usage and noise exposure to the environment. Selecting a shorter flight or taxi route may result in lower fuel burn and emissions, while separation and noise regulations are maintained. A multitude of scenarios is simulated using the allocation model. Different runway configurations are tested. Additionally, different peak moments during the day are compared to see when flexible allocation is feasible and most profitable. A set of Pareto-optimal solutions can be evaluated in order to determine the best runway allocation distribution. The conclusion that can be drawn from this research is that flexible allocation can have a significant impact on both fuel usage and emissions, while adhering to the current regulations. Depending on the flexibility of available runways, mainly restricted by separation and noise regulations, runway demand, local conditions and maintenance, savings are possible. For scenarios where there is room for flexibility, savings are evident. For restricted scenarios, due to wind or visibility conditions, potential savings exist, although to a lesser extent. The level of runway demand plays a role, as most flexibility and potential savings are obtainable during off-peak periods. Annual savings can amount to significant fuel and emission reductions. The described runway allocation tool has the generic ability of being scalable to a wide variety of airports and their characteristics. 
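A minimal MILP core consistent with the allocation problem described above might look as follows; the index sets, symbols and the single weighted objective (with α sweeping out the fuel/noise Pareto front) are assumptions, not the thesis's exact model, which also encodes separation, configuration and regulatory rules:

```latex
\begin{aligned}
\min\ & \sum_{f}\sum_{r}\bigl(\alpha\,F_{fr} + (1-\alpha)\,N_{fr}\bigr)\,x_{fr}
  && \text{(weighted fuel } F_{fr}\text{ and noise } N_{fr}\text{)} \\
\text{s.t.}\ & \sum_{r} x_{fr} = 1 && \forall f
  && \text{(each flight gets one runway)} \\
& \sum_{f \in A_t} x_{fr} \le C_r && \forall r,\ \forall t
  && \text{(runway capacity per time slot)} \\
& x_{fr} \in \{0,1\}
\end{aligned}
```

Sweeping α from 0 to 1 and re-solving yields the set of Pareto-optimal allocations that the record evaluates against the stakeholder preference list.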
Other airports, a larger set of aircraft and aircraft types, and different arrival and departure operations can all be added to the model due to its generic characteristics. This aids further research and eventual application of flexible arrival and departure runway allocation in the aviation industry. runway; allocation; capacity; MILP; linear programming; Schiphol; airport; scheduling; optimization; fuel; noise Air Transport & Operations uuid:24fa41ff5d3c4abbb0b5cd230e9bf89c http://resolver.tudelft.nl/uuid:24fa41ff5d3c4abbb0b5cd230e9bf89c LED-based photocatalytic reactor design Li, Z. Stankiewicz, A. (mentor); Khodadadian, F.M. (mentor) As a promising technology, photocatalysis shows unique advantages and potential in many disciplines, from hydrogen production, indoor and outdoor air purification, and remediation of non-biodegradable molecules in factories, to organic synthesis with high selectivity. In recent years, photocatalytic semiconductor processes have shown high potential for contaminated air remediation. Compared with conventional technology, photocatalysis for pollutant degradation shows advantages as a low-cost, environmentally friendly and sustainable treatment technology that aligns with the zero-waste scheme. The objective of this thesis was to develop a methodology for LED-based photocatalytic reactor optimization by minimizing the reactor cost. The methodology is shown to be reliable, and several parameters are checked based on the impact that each has on the reactor optimization. photocatalysis; photocatalytic reactor; optimization; mathematical model; Light Emitting Diode (LED) Process and Energy SPET Energy Technology 51.999335, 4.371127 uuid:4d22ccddf4e7458eb4557405eaba6245 http://resolver.tudelft.nl/uuid:4d22ccddf4e7458eb4557405eaba6245 Topology optimization of 3D linkages with application to morphing winglets
De Jong, T.A. De Breuker, R. (mentor); Gillebaart, E. (mentor)
Topology optimization is the process of optimizing both the material layout and the connectivity inside a design domain. The first paper on topology optimization dates back to 1904, when the Australian inventor Michell derived optimality criteria for minimum-weight truss structures. In 1988, Bendsøe and Kikuchi published the pioneering paper "Homogenization approach to topology optimization", laying the foundation of numerical optimization methods for topology optimization. Since then, extensive research has been performed both in academia and industry trying to solve different topology optimization problems. Due to its general applicability, topology optimization has been applied to the design of many morphing aircraft structures, including morphing leading edges, trailing edges, or both. It has also been applied to complete morphing wings. Morphing structures have the ability to change their shape throughout the flight. This allows for possible weight savings and/or drag reduction, resulting in reduced fuel consumption. Despite the great interest in morphing winglets from both Airbus and Boeing, topology optimization has not yet been used to design morphing winglets, except for previous work done by E. Gillebaart and R. De Breuker. This thesis continues that research by focusing on the following research objective: "Developing a software tool to design a mechanism for morphing winglets, using ground-structure based topology optimization, by improving, extending, and expanding the previous 2D in-house tool." The research in this thesis is based on previous work done by the faculty. The previous 2D tool is improved, its capabilities are extended and the tool is expanded to 3D. The current tool effectively demonstrates how topology optimization, based on the ground-structure approach, can be used to obtain mechanisms for morphing winglets. 
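A generic ground-structure mechanism-synthesis problem of the kind described above can be stated as follows. This is a textbook-style sketch under assumed notation (member cross-sections a_e over a dense ground structure of candidate members of length l_e, output displacement u_out at the winglet actuation point, geometrically nonlinear equilibrium residual R), not the thesis's exact formulation:

```latex
\begin{aligned}
\max_{\mathbf{a}}\ & u_{\mathrm{out}}\bigl(\mathbf{u}(\mathbf{a})\bigr) \\
\text{s.t.}\ & \mathbf{R}\bigl(\mathbf{u}(\mathbf{a}),\,\mathbf{a}\bigr) = \mathbf{0}
  && \text{(equilibrium, Green-Lagrange strains)} \\
& \sum_{e} a_e\, l_e \le V_{\max}, \qquad 0 < a_{\min} \le a_e \le a_{\max}
\end{aligned}
```

Members whose areas fall to the lower bound are removed, leaving the linkage; gradients of u_out with respect to the many a_e are obtained efficiently via the adjoint method, which is why a gradient-based optimizer such as GCMMA is practical here.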
A two-step optimization strategy is formulated, where the mechanism is designed in the first step and sized to obtain minimum weight in the second step. Both optimizations are done using the globally convergent method of moving asymptotes (GCMMA) optimizer, combined with the adjoint sensitivity technique. Due to the large rotations of the winglet, geometric nonlinearity is taken into account using the Green-Lagrange strain measure. Various mechanisms for morphing winglets were successfully designed and sized, both in 2D and in 3D. In 2D, mechanisms were found where the cant angle could be regulated; in 3D, mechanisms were found where both the cant angle and the toe angle could be regulated. An aerodynamic load case of 5 [kN] was defined. In 2D, half of this loading was assumed to act on the mechanism, resulting in a minimum weight of 15.0 [kg]. In 3D, the minimum weight was found to be 48.0 [kg].topology optimization; Green-Lagrange; geometric nonlinearity; optimization; GCMMA; morphing; morphing wingletAerospace Structures and MaterialsAerospace Structures and Computational Mechanicsuuid:9e0a6ef653ac422d977116cd8225072chttp://resolver.tudelft.nl/uuid:9e0a6ef653ac422d977116cd8225072cWing aerostructural optimization using the Individual Discipline Feasible architectureHoogervorst, J.E.K.Elham, A. (mentor)At present, the aviation market needs lighter and more efficient aircraft than those dominating the airspace today. Besides the reduction in operating costs and air pollutants of these new-generation aircraft, the reduction in fuel use can result in several performance advantages, such as increased range, increased payload capacity, decreased takeoff field length and decreased takeoff noise. 
The present thesis is an effort to contribute to this reduction of fuel use by performing a gradient-based aerostructural wing optimization of a modern high-speed transport aircraft, the Airbus A320, for minimal necessary fuel weight while maintaining its range specification. The novelty of this work is the use of the Individual Discipline Feasible (IDF) architecture instead of the traditional Multidisciplinary Feasible architecture. Using the IDF approach, the disciplines within the aerostructural optimization are completely decoupled. The consistency of the system as a whole is maintained by the use of equality constraints that equate the output of one discipline to the input of another. Because of this decoupled system, no coupled sensitivity information is required. This not only makes the system simpler, but also provides more freedom in software choice for the disciplinary analyses. Furthermore, the time to perform the optimization is reduced, as the work of making the system consistent is removed from the computationally expensive individual disciplines and put in the hands of the cheap optimization algorithm. The CFD solver SU2 is used within the aerodynamic discipline to deform the grid, calculate the flow properties and obtain sensitivities of lift and drag with respect to surface perturbations of the wing. The Euler model is used and the viscous drag component is calculated using a separate estimation. For the structural discipline the FEMWET software is used, providing the structural data including the static aeroelastic deformation of the wing. The optimization design variables are selected to be the angle of attack, the exterior shape of the wing (the airfoil and planform shapes), and the thicknesses of the equivalent panels representing the internal wing box. The problem is constrained by compression, tension, shear, buckling and fatigue failure modes. Moreover, it is constrained by a minimum aileron effectiveness and a maximum wing loading. 
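The IDF pattern described in this abstract can be illustrated with a toy two-discipline problem (the discipline models and numbers below are invented for illustration, not the SU2/FEMWET analyses): the coupling variables become optimizer unknowns, and equality constraints force each discipline's output to match the corresponding input of the other, so no nested multidisciplinary analysis loop is needed.

```python
import numpy as np
from scipy.optimize import minimize

# Toy "disciplines": each consumes the other's coupling variable.
def aero_load(x, d_t):        # hypothetical aerodynamic analysis
    return x[0] + 0.5 * d_t   # returns a load given a displacement guess

def struct_disp(x, l_t):      # hypothetical structural analysis
    return l_t / (1.0 + x[1] ** 2)  # returns a displacement given a load guess

def objective(z):             # z = [x0, x1, l_t, d_t]
    x, l_t = z[:2], z[2]
    return (l_t - 1.0) ** 2 + 0.1 * (x[0] ** 2 + x[1] ** 2)

# IDF consistency constraints: each discipline's output must equal the
# optimizer-held coupling target fed to the other discipline.
cons = [
    {"type": "eq", "fun": lambda z: z[2] - aero_load(z[:2], z[3])},
    {"type": "eq", "fun": lambda z: z[3] - struct_disp(z[:2], z[2])},
]

res = minimize(objective, np.array([0.5, 0.5, 0.5, 0.5]),
               method="SLSQP", constraints=cons)
```

At the optimum the consistency residuals vanish, so the decoupled solution is also a physically consistent coupled solution.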
The aerodynamic analysis is performed under cruise conditions, while the wing structure is analyzed under the critical load cases of the reference aircraft. The optimization algorithm chosen is the Sparse Nonlinear Optimizer, based on the Sequential Quadratic Programming optimization algorithm. The optimization resulted in a reduction of the aircraft fuel weight of 11%. This has been achieved by reducing induced drag through an increase in span and an improved lift distribution, by reducing wave drag through improved airfoil shapes, and by reducing wing structural weight through a reduction in wing sweep.aerostructural; MDO; wing; optimization; SU2; FEMWETuuid:dee2d842ca9140868ef8771cfee1b0dchttp://resolver.tudelft.nl/uuid:dee2d842ca9140868ef8771cfee1b0dcAerogravity assists: Hypersonic maneuvering to improve planetary gravity assists
Hess, J.R.Mooij, E. (mentor); Sudmeijer, K.J. (mentor)Interplanetary missions have for quite some time used gravitational slingshots around planetary bodies to adjust their heliocentric velocity or inclination. The momentum exchange that can be achieved during a so-called gravity assist is limited by the mass of the planetary body. To overcome this limitation, the aerogravity assist was proposed: a maneuver where, in addition to the gravitational forces, use is made of aerodynamic forces to increase the bending angle of the velocity, hence increasing the momentum exchange. To investigate how efficiently an aerogravity assist can change the interplanetary orbital inclination and velocity, a simulator was developed that is capable of simulating both the gravitational and aerodynamic forces on a vehicle during an aerogravity assist. It was determined that the waverider is a type of vehicle suitable for aerogravity assists due to its large lift-to-drag ratio, which reduces the energy dissipation in the atmosphere. The aerodynamic characteristics of a number of waverider shapes were evaluated, after which the one with the largest lift-to-drag ratio was selected. Furthermore, a numerical optimization algorithm was used to develop a reference trajectory planner. Finally, a guidance algorithm based on the tracking of drag accelerations was developed and tested to investigate whether the found trajectories would still be feasible under the influence of uncertainties and perturbations. The angle over which the trajectory is bent is a measure of the effectiveness of the aerogravity assist. Using the reference trajectory planner, the maximum possible atmospheric bending angle was investigated for an aerogravity assist at Mars and Jupiter for different initial velocities. From this analysis, it was concluded that extremely high velocities were involved in the aerogravity assist at Jupiter, which resulted in large mechanical and thermal loads. 
These loads would limit the achievable bending angle when the velocities become too large. For the entry velocities investigated, the velocity bending angle could be increased by 10% for a high entry velocity (80.0 km/s) and by up to 143% for a relatively low entry velocity (68.0 km/s). For an entry velocity of 80.0 km/s, the initial heat-flux peak exceeded the imposed constraints, which prevented the optimization algorithm from finding any solutions. The maximum velocity bending angle that could be achieved at Jupiter was 125.1 degrees, at an entry velocity of 68.0 km/s. At Mars, although the heat loads were still larger than for an Earth entry, it is believed that thermal protection systems can be designed that could handle the heat loads. The velocity bending angle could be increased by 490% to 818% depending on the arrival velocity, with a maximum velocity bending angle of 178.5 degrees at an entry velocity of 9.0 km/s. To investigate the effect of an aerogravity assist on an actual mission, two existing missions have been selected: Rosetta for Mars and Ulysses for Jupiter. Although neither spacecraft had an aerodynamic shape, which means an aerogravity assist could not have been performed during the actual mission, it has been assumed that these vehicles would have had the geometry of a waverider. During the investigation of Rosetta's swing-by at Mars, a reference trajectory was generated to investigate the amount of velocity decrease that could have been achieved using an aerogravity assist. It was determined that the reduction in velocity could be increased by 167% with respect to a gravity assist: from 2.3 km/s for a gravity assist to 6.2 km/s for an aerogravity assist. For Jupiter, it was investigated whether the orbital inclination could be changed using the aerodynamic force only. As the entry velocity exceeded 80.0 km/s, the heat-flux constraint was removed from the trajectory optimization to allow the optimization algorithm to find solutions. 
It was possible to change the orbital inclination by 54.2 degrees, but at an extremely large heat load of 40,620 W/cm2. This reconfirms that even though orbital inclination changes are possible using aerodynamic forces, Jupiter is unsuitable for aerogravity assists due to the high velocities and large heat loads associated with an atmospheric maneuver at this planet. Finally, using the aerogravity assist trajectory found for Rosetta, which was generated with the reference trajectory planner, the guidance algorithm was tested. The guidance algorithm was capable of tracking a drag reference under the influence of uncertain initial flight-path angles. The maximum offset in velocity bending angle occurred for a steep entry and was 1.06 degrees, while the maximum offset in hyperbolic excess velocity occurred during a shallow entry and was 1.88 m/s. Furthermore, the tracking was also successful when a more accurate atmosphere model and perturbations were taken into account. For this analysis, the maximum offsets in velocity bending angle and hyperbolic excess velocity were 1.24 degrees and 2.14 m/s respectively.astrodynamics; aerogravity; gravity; assist; optimization; hypersonics; waverider
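The limitation that the aerogravity assist addresses can be seen from the purely gravitational turn angle of a hyperbolic flyby, the textbook relation δ = 2 arcsin(1/e) with e = 1 + r_p·v∞²/μ: for a given periapsis radius, the achievable bending shrinks rapidly as the excess velocity grows. A small sketch (the planetary constants are standard reference values, not taken from this thesis):

```python
import math

def bending_angle(v_inf, r_p, mu):
    """Turn angle (rad) of a purely gravitational hyperbolic flyby:
    delta = 2*asin(1/e), with eccentricity e = 1 + r_p*v_inf^2/mu."""
    e = 1.0 + r_p * v_inf ** 2 / mu
    return 2.0 * math.asin(1.0 / e)

MU_MARS = 4.2828e13   # Mars gravitational parameter, m^3/s^2
R_MARS = 3.3895e6     # illustrative periapsis at ~surface radius, m

# Bending drops quickly with excess velocity, motivating aerodynamic help.
slow = bending_angle(5e3, R_MARS, MU_MARS)  # v_inf = 5 km/s
fast = bending_angle(9e3, R_MARS, MU_MARS)  # v_inf = 9 km/s
```

The aerodynamic force of the aerogravity assist supplements exactly this diminishing gravitational turn at high approach speeds.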
2017-02-24Space Explorationuuid:ebb54c533c794e6794057451e2eca850http://resolver.tudelft.nl/uuid:ebb54c533c794e6794057451e2eca850Development of an optimization framework for landing gear designVan Ginneken, P.Voskuijl, M. (mentor); Vergouwen, P. (mentor)An opportunity was identified to improve the traditional landing gear design process. Especially in the conceptual design phase, many man-hours are consumed by making the same calculations over and over again for different concepts. An existing gear is therefore often used as an initial starting point, to simplify the design process. This results in little technical progress. Additionally, integration between the different disciplines involved is suboptimal, which can lead to inconsistent results. In this thesis, an optimization framework is described that can perform the preliminary design of a landing gear fully automatically. It ensures that communication between disciplines is respected by adding a top-level optimizer which is in charge of changing the design variables. The realization of this framework greatly reduces the repetitive tasks in the design phase of a landing gear. This makes the design phase less limited to traditional architectures, while leaving more time to evaluate non-standard solutions that may be lighter, safer and/or cheaper.MDO; optimization; landing gear; aerospace engineering; MDF
2016-02-10uuid:bc4479001e7744a3aa2bfc0d5f5c292fhttp://resolver.tudelft.nl/uuid:bc4479001e7744a3aa2bfc0d5f5c292fVertical Collaboration in a Two-Level Supply Chain: An Agent-Based Modeling ApproachBraams, F.P.Ludema, M.W. (mentor); Tavasszy, L.A. (mentor); Oey, M.A. (mentor); Vergouwen, Y. (mentor)
Collaboration in the supply chain is nowadays seen in the scientific community as the next best thing in supply chain optimization (Ballot, 2015; Barratt & Oliveira, 2001; Barratt, 2004; Ireland & Bruce, 2000). Although widely investigated and often mentioned in literature, the concept of supply chain collaboration is not precisely defined (Barratt, 2004). It can be roughly described as follows: supply chain collaboration comprises all the joint efforts of the stakeholders within a supply chain to improve the overall performance (Barratt & Oliveira, 2001; Barratt, 2004). Procter & Gamble (P&G) can be regarded as one of the largest fast-moving consumer goods (FMCG) companies in the world (MBASkool, 2015). Although performing quite well, P&G feels that it can still improve its supply chain (Olsthoorn, 2015). The company has expressed the feeling that its main challenge lies in improving its supply chain while being more externally focused (Demange, 2015). P&G has therefore issued this specific project: assessing the effects of vertical collaboration in (one of) its supply chains and providing a handle on how to implement this concept. This master thesis report discusses the effect of vertical collaboration in a two-level supply chain: collaboration between the manufacturer (Procter & Gamble) and the retailer (Retailer X). With the help of a case study on the product Dreft Automatic Dishwashing (ADW), the goal was to quantify the effect of increased vertical collaboration within a real-life supply chain. To help structure this research, the following research question was drafted: Could vertical collaboration in the supply chain of Dreft ADW lead to better service, cost and cash results? In order to develop an answer to this question, first a literature study was conducted to better define the concept of vertical collaboration. 
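The mechanics of the kind of manufacturer-retailer information sharing studied here can be sketched with a minimal two-echelon simulation (all policies and numbers below are invented for illustration; the thesis's actual agent-based model of the Dreft ADW chain is far richer):

```python
import random

def simulate(share_pos, periods=200, seed=1):
    """Minimal two-echelon sim: a retailer orders up to a base stock;
    the manufacturer keys production either to retailer orders or
    (with POS sharing) to the end-customer demand it observes directly."""
    rng = random.Random(seed)
    retailer_inv, plant_inv = 20, 40
    base_stock = 20
    served = total = 0
    for _ in range(periods):
        demand = rng.randint(0, 10)          # end-customer demand
        total += demand
        sold = min(demand, retailer_inv)
        served += sold
        retailer_inv -= sold
        order = max(0, base_stock - retailer_inv)  # replenishment order
        signal = demand if share_pos else order    # what the plant observes
        shipped = min(order, plant_inv)
        plant_inv -= shipped
        retailer_inv += shipped
        plant_inv += min(signal + 2, 15)     # capacity-limited production
    return served / total                    # fill rate (service level)

fill_no_share = simulate(False)
fill_share = simulate(True)
```

Comparing the two runs illustrates how the plant's ordering signal changes under information sharing; quantifying the actual service, cost and cash effects requires the full ABM of the thesis.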
Then, based on a data analysis of the current state of the supply chain, the problem areas were disclosed, for which interventions were devised on the basis of the concept of vertical collaboration. Subsequently, an Agent-Based Model (ABM) of the supply chain of Dreft ADW was designed to simulate the effects of these interventions, so as to provide the data on the effects of vertical collaboration in the supply chain of Dreft ADW. With the aid of the model we were able both to answer the research question and to provide the problem owner (Procter & Gamble) with an approach to best optimize their supply chain by using the concept of vertical collaboration. The interventions that were used to embody the effect of vertical collaboration and subsequently tested in the ABM were: (1) production batch size alignment to retailer orders; (2) alignment of the order information sharing process between retailer and manufacturer during promotions, or continuous interaction; (3) use of real-time, up-to-date information throughout the supply chain in ordering and replenishment; and (4) use of POS data in the ordering and replenishment process. The results of the model show that vertical collaboration in the supply chain of Dreft ADW could indeed lead to better service, cost and cash results. By implementing the interventions in four sequential steps, service levels can be increased without increasing inventory levels. Next to that, cost savings of up to 2.4% of the gross value of the sold products can be achieved.supply chain; collaboration; optimization; FMCG; agent-based modeling; batch size; real-time information; stakeholder alignmentEngineering Systems and ServicesTransport and Logisticsuuid:7897c8cd14354d3996934eda558a3e6bhttp://resolver.tudelft.nl/uuid:7897c8cd14354d3996934eda558a3e6bDetermination of the body force generated by a plasma actuator through numerical optimizationHofkens, A.Kotsonis, M. 
(mentor)In order to extract the body force field that is generated by a plasma actuator from velocity data, most researchers disregard the influence of the pressure gradient to obtain a spatial and temporal description of the body force field. There is, however, some discussion about whether this assumption is valid. The current research tries to compute the body force field by using a numerical optimization procedure, using a MATLAB optimization routine combined with an OpenFOAM solver which was adapted to accommodate the body force term. Many simplifications had to be made to be able to perform the optimization in a reasonable amount of time, among which were a fairly coarse numerical grid, a first-order discretisation scheme and a parametrization of the body force field. Due to this last simplification, no real conclusions can be drawn with regard to the spatial distribution of the body force, but the integral body forces in x- and y-direction display more or less valid behaviour and correspond to previous research. It is also shown that the pressure gradient has the same order of magnitude as the body force density in all 8 cases, which means that this research challenges the assumption that the pressure gradient is of little importance when trying to obtain the body force from velocity data.plasma actuator; AC-DBD; optimization; body forceAerodynamics, Wind Energy, Flight Performance and PropulsionAerodynamicsuuid:fc0a57ce33df4cd2a61b66e702cc9cf8http://resolver.tudelft.nl/uuid:fc0a57ce33df4cd2a61b66e702cc9cf8Earth frozen orbits: Design, injection and stabilityHoogland, J.Noomen, R. (mentor)A frozen orbit is an orbit chosen such that the effect of perturbations on (a combination of) the mean orbital elements is minimized. The concept first appeared in literature in 1978, and was applied that same year to the Seasat mission. This altimetry mission featured strict requirements on the accuracy of the altitude of the satellite above the sea surface. 
By designing an orbit for which the mean eccentricity and mean argument of periapsis remain static, the satellite's altitude will theoretically be constant, depending only on the location of the sub-satellite point. Classically, the theory behind frozen orbits is based only on the J2 and J3 terms of the spherical harmonics gravity field model and clever manipulation of the Lagrange planetary equations. Through considerable analytical effort, it is possible to include all other zonal gravity field terms in the equation, but this approach is limited to perturbations that can be cast into the form of a disturbing potential. The aim of this thesis is to find a numerical method that overcomes this limit and to use that method to investigate the effects of including third-body gravity, atmospheric drag and solar radiation pressure on the mean orbital elements. To do this, the frozen orbit problem is formulated as an optimization problem. Use is made of Differential Evolution (DE) and grid searching to simulate many trajectories and to find a set of injection parameters that results in a minimal variation in the mean eccentricity and mean argument of periapsis. The mean elements are reconstructed from the osculating elements by making use of the Eckstein-Ustinov theory and subsequent numerical averaging. In combination with Precise Orbit Determination (POD) data, this reconstruction is used to investigate the variations in the mean orbital elements of ERS-2 and TOPEX/Poseidon. Subsequently, the numerical method is applied to various orbital dynamics models. When applied to zonal gravity fields, the new method is found to be in good agreement with analytical solutions. 
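The optimization formulation described here can be sketched on a toy surrogate (the cost function below is an invented stand-in for the thesis's trajectory simulations; the frozen condition it encodes, an argument of periapsis of 90 degrees together with a small frozen eccentricity, is the classical J2/J3 result):

```python
import numpy as np
from scipy.optimize import differential_evolution

E_FROZEN = 1.0e-3  # illustrative frozen eccentricity

def variation_cost(z):
    """Surrogate for the variation of the mean elements: the toy secular
    drifts vanish when e = E_FROZEN and cos(omega) = 0 (omega = 90 deg)."""
    e, omega = z
    de_dt = np.cos(omega)              # toy secular eccentricity rate
    dw_dt = (e - E_FROZEN) / E_FROZEN  # toy secular periapsis drift
    return de_dt ** 2 + dw_dt ** 2

# DE searches the injection parameters (e, omega) for minimal variation.
result = differential_evolution(
    variation_cost,
    bounds=[(0.0, 0.01), (0.0, np.pi)],
    seed=42, tol=1e-10, maxiter=2000,
)
e_opt, omega_opt = result.x
```

In the thesis each cost evaluation is a full trajectory propagation followed by mean-element reconstruction, which is why a derivative-free global method such as DE is attractive.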
The influence of other perturbations on solutions found in zonal models is examined, and it is found that taking these perturbations into account during the optimization process does not lead to significant improvements with respect to the simple zonal case, nor does it lead to significant changes in the found injection conditions. For the assumed satellite characteristics, radiation pressure is found to be the most influential perturbation, causing fluctuations in the mean eccentricity of 3%.astrodynamics; frozen orbit; orbital perturbations; mission design; optimization; orbit injectionSpace Engineering (SpE)Astrodynamics and Space Missions (AS)uuid:9cffd913b3d74f0fa5660c73f8a7d490http://resolver.tudelft.nl/uuid:9cffd913b3d74f0fa5660c73f8a7d490JAY: Kitting as optimization tool of aircraft maintenanceKok, L.M.Santema, S.C. (mentor)Kitting as a (LEAN) method for handling and supplying material and tools is not used in the aircraft maintenance process of KLM. That was one of the researcher's first findings when starting this study for KLM Royal Dutch Airlines. Kitting is the gathering of components and parts needed for the manufacture of a particular assembly or product. Individual components are gathered together, as a kit, and issued to the point of use (Bozer and McGinnis, 1992). This study investigates and designs a new concept process and physical tool cart using the method of kitting that will facilitate the optimization of the KLM aircraft maintenance process: design of a process, cart (proof of concept) and implementation map using the concept of kitting to enable aviation mechanics to work more efficiently, improving productivity by at least 10%, preferably more. Based on the results of the study and the proof of concept, the researcher suggests that KLM further investigate the application of the method of kitting as an optimization tool to increase productivity in aircraft maintenance. 
This is especially relevant in light of the current addition of the 20 B787s to the KLM fleet. New aircraft are more self-monitoring than ever before, sending exact information about needed maintenance in advance, using predictive health monitoring systems. With this data available, kits containing all parts and materials needed for the specified maintenance can be assembled. Kitting anticipates future developments in both aircraft design and its required maintenance. To be ready for the future, KLM should start today.Aircraft Maintenance; KLM; process; optimization; Boeing 787; Boeing 737; A-check; tool cart; Tool Trolley; LEAN; SIX SIGMACampus onlyIndustrial Design EngineeringProduct Innovation ManagementMaster of Science Integrated Product Designuuid:41e4de26825c493eb89a1e460969bd30http://resolver.tudelft.nl/uuid:41e4de26825c493eb89a1e460969bd30Changes of the Loads Envelope for Wing Stiffness Modifications, in the Frame of Multidisciplinary Design Optimization PurposesVan Der Wurff, S.Scharpenberg, M. (mentor)This work focuses on a multidisciplinary design optimization of an aircraft wing. Among others, a structural optimization of the wing stiffness is performed. For a certain stiffness model, a set of relevant load cases can be determined that have a high chance of causing active constraints. The purpose of this research is to investigate whether the set of relevant load cases changes when a wing stiffness modification occurs. If changes in the relevant set of load cases are small, the decision can be made to calculate a constant set of relevant load cases, in order to reduce computation time in the optimization routine.aircraft; optimization; flight dynamics; wing; stiffness; flexibility; loads; structure; loads envelope; load cases
2019-01-01Precision and Microsystems Engineeringuuid:0c5a179649d74e7d943157448f9ce09ahttp://resolver.tudelft.nl/uuid:0c5a179649d74e7d943157448f9ce09aOptimizing inventory planning for aircraft component maintenanceAlizadeh, K.Curran, R. (mentor); Verhagen, W.J.C. (mentor)This research aims to improve the inventory planning of a logistical provider who offers aircraft component maintenance and availability to its customers. To this purpose, a classification model is introduced which makes use of two existing classification methods, i.e. the Analytic Hierarchy Process (AHP) and the Cost Criterion, in order to produce a superior classification strategy. Subsequently, the weights obtained through the classification are utilized in a Non-Linear Integer Programming (NLIP) problem in order to optimize the inventory levels. This integrated approach resulted in significant savings of up to 25%. In order to validate the suitability and robustness of the new model, its practical performance is verified through a series of discrete-event simulations.optimization; inventory; spare parts; classificationuuid:0607236ec6d141a7873bc487065cea34http://resolver.tudelft.nl/uuid:0607236ec6d141a7873bc487065cea34Optimization of Ice-Class Propellers
Huisman, T.J.Van Terwisga, T.J.C. (mentor)
The main objective of this Master's thesis is to develop an optimization routine to improve ice-class propeller design methodology using the design space within the ice-class rules. Ice impacts on a ship propeller impose additional design demands to ensure reliability and safety. Consequently, ice-class propellers feature thicker blades, thereby compromising fuel efficiency. However, ships trading to the Baltic states and Scandinavia sail only two to five percent of their time in ice-infested waters. Propulsive efficiency should hence be optimized for ice-free conditions only, while still having sufficient ice performance and strength. The Finnish-Swedish Ice Class Rules prescribe loads on the propeller blade as five load cases of uniform pressure that should be applied to the propeller blade. The Non-Dominated Sorting Genetic Algorithm II (NSGA-II) is coupled to MARIN's in-house propeller geometry generator, the hydrodynamic boundary element analysis method PROCAL and a finite element analysis to evaluate the propeller blade strength. Both the radial and chordwise propeller distributions are parameterized by means of Bézier curves into optimization design variables. With these expansions, the computational framework is capable of automatically satisfying the ice-class stress constraints while converging to the best possible objective values. Each propeller within the optimization is iterated on mean pitch towards a design thrust. The four optimization objectives that are considered in this Master's thesis are propeller efficiency, thrust variation throughout the ship's wake field, propeller mass and ice-induced loading. Efficiency is considered the main objective, while thrust variation is intended to provide interaction with the wake field. Besides the practical importance of the mass objective, it also guides the optimization towards high efficiency and maximum allowable material stresses. 
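At the heart of the NSGA-II used here is non-dominated sorting of the population by the competing objectives (efficiency, thrust variation, mass, ice-induced loading). A minimal sketch of that sorting step for minimization objectives (generic textbook logic, not MARIN's implementation; the sample population is invented):

```python
def dominates(a, b):
    """True if objective vector a dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Split a list of objective vectors into successive Pareto fronts."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # A point is in the current front if nothing left dominates it.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# e.g. (negative efficiency, mass in kg): both to be minimized
pop = [(-0.60, 50.0), (-0.65, 55.0), (-0.55, 60.0), (-0.60, 55.0)]
fronts = non_dominated_fronts(pop)
```

NSGA-II ranks candidates by front membership (plus a crowding-distance tie-breaker) when selecting the next generation, which is how it balances the four objectives simultaneously.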
Based on a steady simulation of ice milling by means of an idealized ice-load pressure distribution, the ice-induced loading can be estimated as a quantification of ice performance. Best-practice guidelines on the usage of PROCAL within the optimization are developed based on grid refinement and numerical uncertainty studies. Four different implementations of the finite element method are compared to the solution from a dense tetrahedral solid element mesh. Linear shell elements appear to perform best, both in terms of computational time and accuracy. A case study shows that ice-induced loading can be reduced as a function of particularly the pitch distribution and blade profile geometry. It is also observed that the optimization searches for the weaknesses within the computational methods. For instance, it appears that the current ice-class rules allow highly skewed propellers, despite damage cases in practice. The optimization results are encouraging for future work concerning the optimization of blade profiles, although further work is required. It appears that the thrust variation objective steers towards flat chordwise pressure distributions. Cavitation computations are not yet included in the optimization; nonetheless, the optimized propellers show only little cavitation in the tip region. In conclusion, the optimization seems to provide a well-balanced starting point towards the design of high-efficiency ice-class propellers.ice; propeller; optimization; genetic; classMarine & Transport TechnologyShip Hydromechanics / Resistance and Propulsionuuid:0dd5517167684e46b6cd970eb912e2achttp://resolver.tudelft.nl/uuid:0dd5517167684e46b6cd970eb912e2acAerodynamic Design Optimization of the MTT Radial Micro TurbineGovindarajan, S.Colonna, P. (mentor); Pini, M. (mentor); Visser, W.P.J. 
(mentor)Micro turbines are touted to become the prime system for combined heat and power (CHP) applications in light of their significant advantages in terms of performance, size, costs and reduced CO2 emissions [1]. Micro Turbine Technology B.V. (MTT) is currently developing a 3 kW recuperated micro turbine for such applications. Commercially available off-the-shelf turbocharger components are used since they provide high performance at relatively low cost, as they are mass-produced. The drawback of using these components is that they are manufactured for the automotive sector and inherently operate at conditions different from the MTT operating point. Herein lies an interesting scope for performance improvement by optimizing the turbine, and within the current work the focus is on aerodynamic optimization of the radial inflow turbine used in the MTT system. This study is a follow-up from the recommendations provided in [2] and [3]. A goal-driven optimization is performed on the rotor geometry using ANSYS DesignXplorer, and a total of four design solutions were obtained. The most important findings from the response surfaces and the sensitivity analysis of the optimization were: From the parametric sensitivity it was clear that all six design variables have a significant impact on efficiency. The exducer angles have the most predominant effect on efficiency, with that of the shroud larger than that of the hub. All of the optimal candidates exhibited an increase in the total-to-total efficiency ranging from a minimum of 6.38 percentage points to a maximum of 7.90 percentage points as compared to the baseline geometry. This efficiency improvement was accompanied by an increase in mass flow rate, with a minimum value of 69.15 g/s and a maximum of 73.51 g/s. These design solutions are then coupled with the diffuser domain to study the performance characteristics and the interaction between the components. 
The most important outcomes from these simulations were: The efficiency of the rotor drops by 3 percentage points on average due to the additional pressure losses introduced when coupled with the diffuser. The diffuser performance has improved, and the Cp experiences a maximum increase of 17.63 percentage points (Candidate D) and a minimum of 12.70 percentage points (Candidate B). The swirl coefficient for optimum diffuser performance is found at values close to 0.22. If the swirl coefficient is increased or decreased from this optimum, diffuser performance drops. The best design solution in terms of rotor efficiency and overall total-to-static efficiency is Candidate C. However, it exhibits a poorer diffuser performance than the other optimal candidates. From this study, it is apparent that there is a compromise between rotor and diffuser performance. The improvement in rotor efficiency (Δηtt) ranges from a minimum of 6.16 percentage points (Candidate A) to a maximum of 8.54 percentage points (Candidate C) as compared to the baseline, which is more or less similar to the case with the individual rotor domain simulations. The improvement in total-to-static efficiency (Δηts) achieves only a maximum of 3.23 percentage points (Candidate C) as compared to the baseline. In other words, of whatever is gained in rotor total-to-total efficiency from the design optimization, less than half is utilized when coupled with the diffuser (Cp < 50%). The best candidate solution selected from the simulations mentioned above is then analyzed from a structural point of view by comparing its equivalent von Mises stresses with those of the baseline design. The most important conclusions from this study were: The optimized rotor experiences higher stresses at the tip station than the baseline. At the root section, both geometries exhibit higher stresses than at most of the other stations. At the trailing edge, the optimized rotor exhibits lower stresses than the baseline geometry. 
A maximum stress of 1599.7 MPa for the baseline occurs at the trailing edge root section, while the optimized rotor experiences a lower maximum of 1439.3 MPa, which occurs closer to the shaft. In order to reduce the tip stresses for the optimized rotor and bring them closer to the baseline values, the speed of rotation must be reduced to the neighborhood of 180,000 rpm. One of the important outcomes of this study is that there is a compromise between diffuser and rotor performance; consequently, a coupled optimization with the diffuser domain is recommended in order to get a better understanding of the interaction effects between the components. The optimization procedure in this work is performed with only the rotor domain due to computational restrictions, and the design solution does not take into account the diffuser performance during the optimization process. A scaling study with the volute is recommended for the optimized rotor geometry in order to restrict the mass flow rate. The design solution exhibits an increase in passage area and consequently a rise in mass flow rate. However, in the real application a change in mass flow rate might cause a mismatch with other components of the MTT power unit. The structural analysis is done post-optimization and is carried out without any constraints for maintaining the stress within acceptable levels. Therefore a multidisciplinary optimization combining the solid and fluid analyses is recommended to prevent the stress increase and maintain the creep life of the component.microturbine; optimization; CFD analysis
20160929)uuid:a24ca984b35a4da6b4783329f7cd1f2bDhttp://resolver.tudelft.nl/uuid:a24ca984b35a4da6b4783329f7cd1f2bYOptimized structural design of Plug & Play Core modular stadia during preliminary designVan Laar, D.D.D.WTerwel, K.C. (mentor); Nijsse, R. (mentor); Veer, F.A. (mentor); Tönnissen, J. (mentor)4 Modular construction is the on-site assembly (installation and connection) of factory-made units (Lawson et al., 2014). It allows fabrication of structural components to be moved from the building site to controlled environments. Ballast Nedam adopted modular construction as a key business strategy for the future (Ballast Nedam, 2014). Plug & Play Core is the modular and reusable structural core of a stadium. After realization, the inner core of the stadium can be easily disassembled, transported to another location and reused. Various stadium layouts can be realized, making the concept easily adaptable to the (varying) demands of the client/architect. The Plug & Play Core structural design needs to be easily adaptable to changing (tender) demands. At the same time the design process needs to be quick(er), resulting in more time for actual integral design. The design itself should primarily comply with safety codes, while the financial consequences of structural design decisions on other integral design aspects should be clear. An optimized design process in the form of a structural design tool is proposed to solve these challenges. The design tool should integrate FEM software for structural verification. First a literature search is performed to obtain necessary background information. Afterwards, a short summary of key design aspects regarding Plug & Play Core is presented. Then the design tool called Toolbox is explained in the form of a design manual. The main part of the report ends with a case study of a demountable upper tier for the FIFA WC 2022 Al-Wakrah stadium in Qatar. In that chapter the Toolbox design tool is applied to real project data. 
It is found that the integral design process of Plug & Play Core modular stadia can be optimized by the use of a design tool like the Toolbox. Early-stage consideration of full life-cycle design aspects via a cost analysis, in combination with standardized structural analysis, makes the design process quick(er) and allows the designer to generate various design alternatives in a relatively short amount of time. This results in more time for integral optimization of the design. The case study suggests that cost savings could be achieved by application of the Toolbox, making Plug & Play Core modular stadia more appealing compared to traditional construction alternatives.mmodular; construction; stadia; Plug & Play Core; preliminary design; optimization; structural design; processStructural Engineering;Design & Construction / Structural and Building Engineering)uuid:e9e5bab509e04bf9b1c19508d381c0c6Dhttp://resolver.tudelft.nl/uuid:e9e5bab509e04bf9b1c19508d381c0c6Analysis of the behavior of the Ding well model for off-centered wells and its impact on gradient-based well location optimizationDe Zeeuw, J.Q.AJansen, J.D. (mentor); Ashoori, E. (mentor); Joosten, G. (mentor)The computer-assisted optimization of well locations in reservoirs can result in better planning of the reservoir development, and non-trivial optimal locations can be found. Currently most optimizers use gradient-free methods, which require significant computing capacity. Especially when considering decision making under uncertainty, there is a need for faster but still accurate optimizers to handle a variety of reservoir realizations. Some gradient-based well location optimization methods proposed in the literature make use of the adjoint method to calculate all gradients in the field at the cost of only one forward and one backward simulation. However, the proposed methods use an approximation of the gradient of the objective function with respect to the exact well locations. 
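For context, well models of this kind tie the well flow rate to the pressure difference between well and grid block through a well index. The classic Peaceman form for a centered vertical well in an isotropic grid is a standard reservoir-simulation result, sketched below; it is background, not this thesis's off-centered model, and all input values are hypothetical:

```python
import math

def peaceman_well_index(k, h, dx, dy, rw, skin=0.0):
    """Peaceman well index for an isotropic, grid-block-centered vertical well:
    WI = 2*pi*k*h / (ln(r_o / r_w) + s), with equivalent well-block radius
    r_o = 0.14 * sqrt(dx**2 + dy**2) (about 0.198*dx on a square grid)."""
    ro = 0.14 * math.sqrt(dx**2 + dy**2)
    return 2.0 * math.pi * k * h / (math.log(ro / rw) + skin)

# Hypothetical SI values: k = 1e-13 m^2, h = 10 m, 50 m square cells, rw = 0.1 m.
wi = peaceman_well_index(1e-13, 10.0, 50.0, 50.0, 0.1)
```

Off-centered well models replace r_o by a position-dependent equivalent radius, which is what makes a direct gradient with respect to the well coordinates possible.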
This study shows a new method to derive the direct gradient of the objective function with respect to exact well locations via a well model for off-centered wells. However, the study shows multiple issues arising from this well model for off-centered wells and what their impact is on this new gradient-based well location optimization method. First, the off-centered well model is found not to be able to model wells located exactly on grid block edges. Second, when moving wells from grid block to grid block the well model does not behave smoothly, which jeopardizes its suitability for well location optimization. A mitigation strategy is developed and tested, but this does not solve the issue. Third, the well model is not fully correct in modelling off-centered wells for both single-phase flow and multi-phase flow, and again this jeopardizes its suitability for the new gradient-based well location optimization method. The use of an off-centered equivalent well-block radius instead of the Peaceman well-block radius is found to resolve most of the anomalous behavior for single-phase flow for off-centered wells. The issues related to multi-phase flow are not solved in this study, but some potential solutions are proposed.ODing well model; optimization; gradient-based; well location; off-centered well51.9983106, 4.3764585)uuid:6cf6c9080f5d4096b3adaa96fd1ff382Dhttp://resolver.tudelft.nl/uuid:6cf6c9080f5d4096b3adaa96fd1ff382oParallelizing the Linkage Tree Genetic Algorithm and Searching for the Optimal Replacement for the Linkage TreeDe Bokx, R./Witteveen, C. (mentor); Bosman, P.A.N. (mentor)The recently introduced Linkage Tree Genetic Algorithm (LTGA) has been shown to exhibit excellent scalability on a variety of optimization problems. LTGA employs Linkage Trees (LTs) to identify and exploit linkage information between problem variables. In this work we present two parallel implementations of LTGA that enable us to leverage the computational power of a multiprocessor architecture. 
These algorithm extensions for LTGA enable us to solve a problem that previously could not be solved: the problem of finding high-quality predetermined linkage models that result in better performance of LTGA on intricate problems by replacing the online-learned LTs. This is done by learning high-quality LTs offline, optimizing LTGA's performance as a function of static LTs. This results in better performance of LTGA than with online-learned LTs as the problem complexity increases. A parameter-free implementation is used to search for optimal subsets of linkage sets in the offline-learned LTs. This pruning of the LT results in a further performance improvement of LTGA by, on average, removing about 50% of the linkage sets from the offline-learned LTs. This suggests that LTs contain redundancies that may possibly still be exploited to improve the performance of LTGA with online-learned LTs.hparallel; optimization; modeling; LTGA; Linkage Tree; Linkage Tree Genetic Algorithm; model optimizationSoftware TechnologyAlgorithmics)uuid:897b5abe07134c4aad5d6c76d33c8bd2Dhttp://resolver.tudelft.nl/uuid:897b5abe07134c4aad5d6c76d33c8bd2Well Trajectory OptimizationLoomba, A.K.+Jansen, J.D. (mentor); Ashoori, E. (mentor)Delineating the placement and trajectory of production or injection wells is an important step of any field development program. Drilling a well is an expensive process in terms of money, time and effort invested. Being an expensive item, wells must be carefully studied before being drilled. In order to make this process more efficient, this dissertation presents a computer-assisted approach to determining the optimal well trajectory using an adjoint-based optimization technique. The algorithm is based on surrounding the injection or production wells with dummy wells, a technique proposed in earlier studies. These dummy wells have a minimal rate of injection or production so as to minimize their influence on the simulated output. 
The sum of the gradients of the objective function with respect to the flow rate in each dummy well over the lifetime of the reservoir is used to define an improved well trajectory. The whole process is repeated until the maximum net present value is achieved. In order to ensure that the newly optimized well trajectory is drillable, the algorithm restricts the curvature of the trajectory. The optimization algorithm was applied to synthetically generated homogeneous and heterogeneous reservoirs to improve a single well trajectory. The algorithm was also successfully tested to improve multiple well trajectories in the Egg Model, a three-dimensional heterogeneous channelized reservoir model. In addition to well trajectory optimization, the dissertation also presents an approach to optimize well numbers. This algorithm is based on drilling a dense quasi-well configuration to get initial knowledge of the reservoir and utilizing this knowledge along with dummy well gradients to reduce the well count. Depending on the reservoir, fluid and economic parameters, both algorithms display the ability to predict the optimized type, trajectory and number of wells. Although the optimization results showed a considerable improvement in the net present value of the project, the algorithm can get stuck in a local optimum.Eoptimization; algorithm; gradient-based; well trajectory; well number!Section for Petroleum Engineering)uuid:21fa0317d7c14ba1835e0a771ab50923Dhttp://resolver.tudelft.nl/uuid:21fa0317d7c14ba1835e0a771ab50923gMaximizing operational readiness in defense aviation by optimization of flight and maintenance planningVerhoeff, M.EVerhagen, W.J.C. (mentor); Jongstra, J. (mentor); Curran, R. (mentor)T
The primary objective of a defense aviation operator or air force is to maximize its continued readiness to successfully perform military flight operations, whenever and wherever the home state or international community calls for it. In order to maintain a satisfactory level of operational readiness, air forces need to ensure that sufficient aircraft are mission capable and continue in this state for an adequate period of time. Furthermore, a specified amount of training hours needs to be produced to keep all aircrew in mission capable condition. These requirements must be fulfilled at all times, which requires an involved planning process. Since aircraft are subject to stringent safety requirements, preventive maintenance must be performed at prescribed flight time intervals, which causes downtime and affects operational readiness. As a result, all preventive maintenance efforts as well as the flight assignments should be planned and scheduled adequately for the entire aircraft fleet. This process, called flight and maintenance planning (FMP), is highly complex and time consuming due to numerous constraints. Furthermore, deviations from the schedule are inevitable due to various uncertainties. Hence, the scheduling process should be responsive, flexible and fast. However, in practice, aircraft utilization tends to be managed manually and on a day-to-day basis, leading to a reactive and overly time-consuming approach, in which problems with respect to operational readiness and controllability can easily develop. Besides, long-term utilization of (preventive maintenance governed) aircraft components is uncontrolled, which results in fluctuating demand for resources and may affect readiness on the aircraft level. To solve these shortcomings, this research introduces two interconnected novel mathematical optimization methods for flight and maintenance planning on both aircraft and component level. 
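The core readiness trade-off described here, spreading required flight hours so that no aircraft runs into its preventive-maintenance deadline prematurely, can be illustrated with a toy water-filling allocation. This is only a sketch of the planning idea, not the thesis's linear programming models; the function name and all numbers are hypothetical:

```python
def fly_hours(remaining, demand):
    """Spread `demand` flight hours over a fleet so that the minimum remaining
    flight time to the next preventive maintenance is maximized
    (water-filling: preferentially fly the aircraft with the largest buffer)."""
    assert 0 <= demand <= sum(remaining), "demand must be feasible"
    lo, hi = 0.0, max(remaining)
    for _ in range(100):  # bisect the common "water level" of leftover hours
        mid = (lo + hi) / 2.0
        if sum(max(r - mid, 0.0) for r in remaining) > demand:
            lo = mid  # level too low: more hours assigned than demanded
        else:
            hi = mid
    level = (lo + hi) / 2.0
    return [max(r - level, 0.0) for r in remaining]

# Three aircraft with 60, 40 and 20 hours left before maintenance must jointly
# deliver 60 training hours; the allocation equalizes the remaining buffers.
hours = fly_hours([60.0, 40.0, 20.0], 60.0)  # ≈ [40, 20, 0]
```

A full FMP model adds maintenance-slot capacity, calendar constraints and component-level coupling on top of this basic leveling behavior.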
The developed methods take into account all relevant requirements and constraints from industry practice and address all aspects of operational readiness, aiming for a proactive, efficient and more robust scheduling effort. Both models are implemented for real historical problem instances drawn from the Royal Netherlands Air Force (RNLAF) CH-47D Chinook transport helicopter fleet. Subsequently, the generated outputs are compared with the actual output of the RNLAF for the exact same problem instances as a means of validation and demonstration of the model's performance. The aircraft FMP model results demonstrate that fleet operational readiness can be increased by up to 22% for the subject years, while using the same organizational resources and fulfilling the same operational requirements. Furthermore, the model supports rapid generation of feasible long-term flight and maintenance plans, which strongly improves flexibility, anticipation and control. The results of the component FMP model demonstrate that proactive scheduling of substitutions between aircraft and spare inventory can be utilized towards increased overall component serviceability. This reduces the variability in demand for preventive maintenance, which leads to budgetary and logistical benefits. The developed methods provide a comprehensive and versatile FMP solution for defense aviation operators and provide a strong foundation for the development of more complex or comprehensive FMP models.defense; scheduling; readiness; optimization; linear programming; aircraft; component; fleet; sustainability; serviceability; availability; military; air force; Royal Netherlands Air Force; aviation; maintenance; phase maintenance; preventive maintenance; planning; operations research
20350710,Air Transport and Aerospace Operations (ATO))uuid:bb64f92af6834a458a0a98d22e87b3afDhttp://resolver.tudelft.nl/uuid:bb64f92af6834a458a0a98d22e87b3afThe Weather MakerMark, A.7Van Doornen, E.J.G.C. (mentor); Groenewold, S. (mentor)A research project on optimizing the urban climate on an individual user basis in real time, and a design exploration in the context of said personal comfort optimization.Urban; Comfort; optimizationArchitectureThe Why Factory)uuid:dad2f7fb545a46b3a266a418060d1fe4Dhttp://resolver.tudelft.nl/uuid:dad2f7fb545a46b3a266a418060d1fe44Online learning algorithms: Methods and applicationsVan der Geugten, G.Molinaro, M. (mentor)sIn this research we study several online learning algorithms in the online convex optimization framework. Online learning algorithms make sequential decisions without information about future events. To measure the success of a decision there is a (bounded) cost for each decision, which is revealed after every time step. The goal of these algorithms is to perform roughly as well as the best fixed decision in hindsight, which is measured by calculating what we call the regret. The algorithms that we studied are the Multiplicative Weights Method, the Weighted Majority Algorithm, the Online Gradient Descent Method and the Online Newton Step Algorithm. All of these methods have their own applications and regret bounds, which are also studied; in particular, we give a proof for the Multiplicative Weights Method and the Weighted Majority Algorithm. Furthermore, we provide an implementation of the Online Gradient Descent Method on a portfolio management problem where we have four stocks and seek the best investment strategy to obtain the biggest profit over the stocks. This has led to a wealth increase by a factor of 1.37 after four years of investing, where the best fixed distribution over the stocks in hindsight leads to a wealth increase by a factor of 1.68. 
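The Multiplicative Weights Method mentioned in this abstract admits a very short implementation. The sketch below follows the standard textbook form (exponential weight updates over n experts with per-round costs in [0, 1]); the learning rate and the toy cost sequence are illustrative choices, not taken from the thesis:

```python
import math

def multiplicative_weights(costs, eta=0.1):
    """Run Multiplicative Weights over n experts: each round, play the
    normalized weights as probabilities, then multiply expert i's weight
    by exp(-eta * cost_i).

    `costs` is a list of rounds, each a list of per-expert costs in [0, 1].
    Returns (expected algorithm cost, cost of the best fixed expert)."""
    n = len(costs[0])
    w = [1.0] * n
    alg_cost = 0.0
    for round_costs in costs:
        total = sum(w)
        p = [wi / total for wi in w]  # probability of picking each expert
        alg_cost += sum(pi * ci for pi, ci in zip(p, round_costs))
        w = [wi * math.exp(-eta * ci) for wi, ci in zip(w, round_costs)]
    best = min(sum(r[i] for r in costs) for i in range(n))
    return alg_cost, best

# Toy example: expert 0 always costs 0, expert 1 always costs 1; the
# algorithm's total cost stays within a small additive regret of the best.
alg, best = multiplicative_weights([[0.0, 1.0]] * 50)
```

The regret alg - best grows only on the order of sqrt(T log n), which is the guarantee these methods are studied for.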
Further analysis shows that the implementation works better than we could have expected beforehand.optimization; online learningApplied mathematics)uuid:ab83f3969d07459d8e07cf284ac5124fDhttp://resolver.tudelft.nl/uuid:ab83f3969d07459d8e07cf284ac5124f8Weight and Volume Optimization of a Three-phase InverterLiu, Y.Popovic, J. (mentor)vThree-phase inverters are widely used in industry. Due to the advancements in semiconductor technologies, researchers have witnessed a continuous increase in the power density of inverters over the decades. Meanwhile, on the basis of such advancement, designers are also pushing the power density to its limit by minimizing the weight and volume of every subsystem in the inverter, such as the EMI filter, cooling system, DC link, etc. More often than not, these studies only look into one individual subsystem and do not consider the correlation between different subsystems. This study explores the optimization of the weight and volume of a three-phase inverter system with a different approach. In this thesis, essential design variables and subsystems are considered simultaneously. To accomplish this goal, a thorough investigation of the case study inverter system is introduced; crucial subsystems and design variables are picked out. Then an effective and efficient system model is constructed to evaluate different system designs. In the final step, the NSGA-II optimization algorithm is applied to obtain Pareto-optimal inverter designs. The optimization results are presented and analyzed to shed light on guidance which designers could follow. It is believed that the proposed optimization procedure could effectively assist designers in the early stage of the design task./inverter; optimization; NSGA-II; weight; volumeESEElectrical Engineering)uuid:22772c2f7fb94775946173099de52583Dhttp://resolver.tudelft.nl/uuid:22772c2f7fb94775946173099de525835Distributed Connectivity-Aware Multi-robot ExplorationKuijsters, W.A.A.Keviczky, T. 
(mentor)qNetworks of mobile robots enable us to explore areas quickly and without danger to human operators. To execute this task successfully, communication between the robots in the network is imperative. In this thesis, we link communication between robots to network integrity, where network integrity is defined as the ability of the network to communicate its acquired data to all robots in the network and to the human operator. We first introduce a greedy exploration algorithm which is used as the basis of this connectivity-aware exploration algorithm. Then, we propose two different algorithms that each aim to maintain network integrity by using relaying robots. The Laplacian algorithm aims to keep the graph of robots connected, which means that the graph Laplacian has a positive Fiedler value; the relaying robots actively maintain network integrity while the exploration robots execute the greedy exploration task. The Data Transmission Rate (DTR) algorithm aims to preserve enough bandwidth for the exploration robots; all robots simultaneously attempt to preserve this bandwidth, and through artificial potential functions a control law is devised to allow for exploration. We present simulation results based on a series of scenarios, which involve exploration of a rectangular obstacle-free area. From the results, we conclude that the DTR algorithm performs significantly better in terms of exploration time and size of the explored area. Still, many improvements can be made to the DTR algorithm, such as incorporating obstacle avoidance and finding realistic parameters for the signal strength function used in the algorithm.BSDP; Distribution; robotic exploration; optimization; connectivity$Delft Center for Systems and Control)uuid:3d92d97423374593a003e422489d8966Dhttp://resolver.tudelft.nl/uuid:3d92d97423374593a003e422489d8966^Optimizing Compliant Mechanisms for Sustainable Materials: A Case Study of a Compliant GrasperDiepens, T.Herder, J.L. 
(mentor)Optimization in engineering could and should be used for more than just optimizing the performance of mechanisms. In a world with growing environmental pressure and increasing scarcity of materials, thinking about how we could make better use of materials becomes more and more important. This is where optimization could be used to make sure mechanisms satisfy their requirements while using more sustainable materials. Mechanisms which have potential for sustainable designs, due to their monolithic nature and simplicity, are compliant mechanisms. These mechanisms, however, rely heavily on material properties and thus often require high-quality materials. This paper defines and tests a method for optimizing compliant mechanisms for sustainable materials by minimizing the demands on the material while making sure the mechanism still fulfills its requirements. This is done using two case studies: a laparoscopic grasper with strict boundary conditions, and a larger compliant grasper with less strict boundary conditions.2compliant mechanisms; optimization; sustainabilityBioInspired Technology)uuid:3624611d6f544db4a3b0a4ac47b38131Dhttp://resolver.tudelft.nl/uuid:3624611d6f544db4a3b0a4ac47b38131Lucrative PromenadesPlastira, I.=Bier, H. (mentor); Biloria, N. (mentor); Vollers, K. (mentor)The design proposal investigates the generation of a possible social and physical structure within the area of the former army campus of Kodra: a communal flexible space for people to meet, act, produce and socialize. 
The general result of this process was to investigate the implementation of computational techniques to generate a built environment that reflects social and spatial needs, within the context of the experimental green strategies theme.7optimization; agent-based simulation; urban agriculture5Non Standard and interactive architecture  Hyperbody)uuid:b80682e9c39a45209f0bbe3efd5cd6e4Dhttp://resolver.tudelft.nl/uuid:b80682e9c39a45209f0bbe3efd5cd6e4TIntegrating Natural Ventilation within an optimization process of energy performance
D'Aquilio, A.BTurrin, M. (mentor); Bokel, R.M.J. (mentor); Martin, C.L. (mentor)WDecisions made in the early stage of the design process can have huge impacts on the future performance of a building. The implementation of natural ventilation strategies should occur early in the design stages and should be embedded in a multi-criteria optimization process in order to achieve low energy consumption and indoor comfort.Eoptimization; indoor comfort; energy performance; algorithmic processSustainable Graduation Thesis)uuid:7baa3baf8b144a18a843fe1817719d59Dhttp://resolver.tudelft.nl/uuid:7baa3baf8b144a18a843fe1817719d59Network optimization based on trip purpose: Public transport network optimization towards trip-purpose-differentiated passenger groups
Roeske, R.BVan Nes, R. (mentor); Baggen, J.H. (mentor); Van Arem, B. (mentor)Passenger loss is inextricably connected with network optimization, since longer access times and distances will hypothetically result in a partial loss of passengers, because longer access distances reduce the willingness to bridge those distances. Optimizing stopping distances based on different passenger groups, differentiated by trip purpose, leads to a different optimization than for the passenger group as a whole. This thesis develops a new method to optimize stopping distances based on trip purpose and generates possible compensation measures per passenger group to prevent a decline in public transport use. The method was applied to a tram network in the Dutch city of Rotterdam.Dpublic transport; passenger groups; trip purpose; optimization; tramTransport & PlanningTIL)uuid:b97923b62d5b4763a89d9017747b1c8dDhttp://resolver.tudelft.nl/uuid:b97923b62d5b4763a89d9017747b1c8dEntering an integrated cluster
Bijloo, M.Bots, P.W.G. (mentor)wA model-based approach to support a utility provider in its investment decision-making to enter an integrated cluster.\non-technical factors; optimization; experimental design; utility system; industrial clusterPolicy Analysis)uuid:1cffab121f164d9da8d2770d15030f11Dhttp://resolver.tudelft.nl/uuid:1cffab121f164d9da8d2770d15030f11HAerodynamic and Aeroelastic Design of Low Wind Speed Wind Turbine BladesRamirez Gutierrez, C.A.*Timmer, W.A. (mentor); Shen, W.Z. (mentor)A large number of wind energy installations exist on sites with rich wind resources. Nevertheless, estimates show that about 50% of the world's wind energy resource has a wind speed of 7 m/s or less. For these low wind speed areas, low wind speed turbine technology is required. For this reason, this DTU Wind Energy master project, in cooperation with Ming Yang Wind Power European R&D Centre ApS, looks into the design of a low wind speed wind turbine blade. The project's goal is to design a wind turbine blade for a 2 MW wind turbine with a rotor diameter of 115 meters. A site in China is also proposed for the wind turbine design. The project focuses on the design of a blade for low wind speed wind turbine applications, on sites with a mean wind speed of about 7 m/s. The project includes several stages. First, the blade design and blade optimisation methods are introduced. Afterwards, the provided site in China is assessed and key parameters are selected for the next project stages. The next step involves the wind turbine design provided by Ming Yang Wind Power, which is reviewed by performing an aerodynamic and aeroelastic performance analysis. With a cost of energy approach, a new wind turbine blade for a wind turbine with a rated power of 2 MW is designed. Finally, an aerodynamic and aeroelastic performance analysis of the new blade under different wind conditions is performed to assess its feasibility. 
The framework is carried out with HAWC2, developed by DTU, and compared to GH Bladed at some of the design stages.Raeroelastic; aerodynamic; china; optimization; design; low wind; blade; wind energyDUWINDEuropean Wind Energy)uuid:94455039e5324219b223b759cc317046Dhttp://resolver.tudelft.nl/uuid:94455039e5324219b223b759cc317046AA New Strategy for Combined Topology and Fiber Angle Optimization Yap, T.T./Langelaar, M. (mentor); Van Keulen, A. (mentor)The use of composite materials has become increasingly important over the past years. Especially unidirectional fibrous laminates are nowadays widely applied in industry. They provide mechanical advantages in terms of stiffness-to-weight ratio, strength and resistance against fatigue. These properties make them suitable for high-end applications such as the aerospace industry. Topology optimization is a mathematical technique which has recently gained importance as well. As an optimization technique with a large design freedom, it is able to design complex structures with high performance beyond human abilities. Together with the latest improvements in manufacturing techniques, the application of topology-optimized structures is intensifying in various fields. This research focuses on topology optimization of unidirectional fibrous laminate structures. The problem of combined topology and fiber direction optimization has been researched over the past years by a number of groups. The problem formulation where the fiber angles are directly used as design variables is highly non-convex and is likely to end up in a local optimum far from the global optimum. Two other alternatives are described in the literature: a discrete and a continuous problem formulation. In the discrete approach, called Discrete Material Optimization (DMO), a finite number of candidate materials per element represents the different fiber orientations, and penalization is applied to end up with a clear distinction between the candidate materials. 
The discrete formulation has the drawback that the solution is limited to the predefined candidate materials and that the number of design variables easily becomes large. Furthermore, the global optimum can never be guaranteed due to the required penalization. The continuous approach uses lamination parameters as design variables, and the optimization problem becomes convex. A shortest-distance approach is used to determine the closest realistic laminate configuration for the globally optimal set of lamination parameters. Using this technique, continuous variable stiffness panels can be designed with a reasonable number of design variables. However, the realistic laminate configuration for a set of lamination parameters is not known analytically for more complex problems. Therefore, the determination of a physically meaningful configuration may be a difficult task and may come with a loss of performance. Given both the pros and cons of the methods from the literature, there seems to be a demand for a method that can provide detailed results (continuous variable stiffness) with a reasonable number of design variables, and that also directly provides a physically realistic laminate configuration. In this research a new method called the Adaptive Angle Set Method (AASM) is proposed. AASM solves a sequence of DMO-like subproblems for fiber angle optimization, but the associated design variables are not penalized. A separate set of density variables performs the topology optimization, and the combined problem is solved simultaneously. Every subproblem in AASM is analogous to a non-penalized DMO problem with three candidate materials for every element, representing a set of three different fiber angles. In the initial subproblem, the angle set is equal for all elements and given by -60, 0 and 60 degrees, spanning the entire domain of 180 degrees of possible fiber angles. This subproblem is solved to optimality and the subsolution is used to formulate the succeeding subproblem. 
Based on the subsolution of design variables, a combination of update functions estimates a new fiber angle for every element, which is defined as the middle angle of the element's new angle set. The two other angles are set to this middle angle plus and minus a certain offset (range), and the new subproblem is again solved to optimality. However, the range between the three candidate materials is tightened with the formulation of every new subproblem, such that the sequence of problems converges to angle sets where the three candidate materials are close to each other. This can be as close as a 1 degree difference in the final subproblem. At the final stage, penalization is applied to create a clearly distinct solution between the candidate materials, but this only causes a minimal loss of performance due to the small range in the angle set. Using this approach, the number of design variables is constant for every subproblem, namely three fiber angle design variables and one density variable per element. In the final stage, a high angle resolution is obtained with a directly known laminate configuration. The way in which a new subproblem is formulated highly depends on the estimation of the new angle for every element. Determining the optimal new angle using an optimization routine would be equal to solving the overall fiber angle problem, which cannot be solved efficiently with a gradient-based optimizer. Therefore, two heuristic update functions are introduced to estimate the new angle. The first update function makes a linear combination of the previous angle set with the corresponding optimal design vector. The second update function sets the new angle equal to the largest principal stress direction for that element. A number of test cases showed that a mixed application of both update functions yielded the best results. 
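For a planar element, the largest-principal-stress direction used by the second update function follows from the standard plane-stress transformation. A minimal sketch of that standard mechanics result (not the thesis's code):

```python
import math

def principal_stress_angle(sx, sy, txy):
    """Angle (radians, from the x-axis) of the largest principal stress for a
    plane stress state (sigma_x, sigma_y, tau_xy):
        theta_p = 0.5 * atan2(2 * tau_xy, sigma_x - sigma_y)
    atan2 picks the branch belonging to the *maximum* principal stress."""
    return 0.5 * math.atan2(2.0 * txy, sx - sy)

# Pure shear: the largest principal stress acts at 45 degrees.
theta = principal_stress_angle(0.0, 0.0, 1.0)  # pi/4
```

Aligning fibers with this direction is a classic heuristic, since a unidirectional ply is stiffest along its fiber axis.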
The final configuration was tested on a number of compliance minimization problems, which were kept planar and single-loaded during this research. For small problems, the AASM results could be compared to brute-force global optima of the underlying fiber angle integer problem. Results equal or close to the global optimum were obtained. For larger problems and multiple-layer laminates, AASM provided promising results as well, which were obtained faster than with a comparable DMO formulation. The promising results obtained by AASM make the method worthwhile for further investigation on larger and more complex problems, including other objective functions, bending elements and manufacturing-constrained problems.topology; optimization
20141112)uuid:86815e55bbba45b4915b6f321b485940Dhttp://resolver.tudelft.nl/uuid:86815e55bbba45b4915b6f321b4859409Imitation learning for a robotic precision placement taskVan der Spek, I.T.+Babuska, R. (mentor); Kuijpers, J. (mentor) In industrial environments robots are used for various tasks. At this moment it is not feasible for companies to deploy robots for production runs with a limited batch size or for products with large variations. The use of robots in such environments can become feasible through a new generation of robots and software which can adapt quickly to new situations and learn from their mistakes while being programmable without needing an expert. A concept that can enable the transition to flexible robotics is the combination of imitation learning and reinforcement learning. The purpose of imitation learning is to learn a task by generalizing from observations. The power of imitation learning is that the robot is programmed in an intuitive way while the insight of the teacher is incorporated in the execution of the task. This research studies the combination of imitation and reinforcement learning, applied to an industrial use case. The research question of this study is: "Can imitation learning be combined with reinforcement learning to achieve a successful application in an industrial robotic precision placement task?" To imitate the demonstrated trajectories, Dynamic Movement Primitives (DMPs) are used to encode the observed trajectories. DMPs can be seen as a spring-damper-like system with a nonlinear forcing term. The forcing term is a sum of Gaussian basis functions, each with its corresponding weight. Reinforcement learning can be applied to these weights to alter the shape of the trajectory created by a DMP. Policy Gradients with Parameter-based Exploration (PGPE) is used as the reinforcement learning algorithm to optimize the recorded trajectories. 
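A one-dimensional discrete DMP of the kind described (a spring-damper system plus a Gaussian-basis forcing term with learnable weights) can be sketched as follows. The gain values, basis-function placement and Euler integration are common illustrative choices, not the thesis's implementation:

```python
import math

def dmp_rollout(w, y0, g, tau=1.0, dt=0.01, T=1.0,
                alpha=25.0, beta=6.25, alpha_x=3.0):
    """Integrate a 1-D discrete Dynamic Movement Primitive:
        tau * dy' = alpha * (beta * (g - y) - dy) + f(x)
    with phase dynamics x' = -alpha_x * x / tau and forcing term f(x) given by
    a normalized sum of Gaussian basis functions with weights w, scaled by
    x * (g - y0) so the forcing vanishes as the movement ends."""
    n = len(w)
    centers = [math.exp(-alpha_x * i / (n - 1)) for i in range(n)]
    widths = [1.0 / (0.2 * c) ** 2 for c in centers]  # heuristic spacing
    y, dy, x = y0, 0.0, 1.0
    traj = [y]
    for _ in range(int(T / dt)):
        psi = [math.exp(-h * (x - c) ** 2) for h, c in zip(widths, centers)]
        f = sum(wi * p for wi, p in zip(w, psi)) / (sum(psi) + 1e-10) \
            * x * (g - y0)
        ddy = (alpha * (beta * (g - y) - dy) + f) / tau
        dy += ddy * dt
        y += dy * dt
        x += (-alpha_x * x / tau) * dt
        traj.append(y)
    return traj

# With zero weights the forcing term vanishes and the DMP converges to goal g.
path = dmp_rollout([0.0] * 10, y0=0.0, g=1.0)
```

Reinforcement learning such as PGPE then perturbs the weight vector w, changing the shape of the rollout while the spring-damper part still guarantees convergence to the goal.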
Experiments done on a UR5 show that without the learning step, the DMPs are able to provide a trajectory that results in a successful execution of a robotic precision placement task. The experiments also show that the learning algorithm is not able to remove noise from a demonstrated trajectory or complete a partially demonstrated trajectory. Therefore it can be concluded that the PGPE algorithm is not suited for reinforcement learning in robotics in its current form. It is therefore recommended to apply a data-efficient version of the PGPE algorithm in order to achieve better learning results.reinforcement learning; imitation learning; policy gradient; pgpe; dynamic movement primitives; precision placement; dmp; optimizationEmbedded Systems)uuid:6994221172164c09b9e5e5e452240a5bDhttp://resolver.tudelft.nl/uuid:6994221172164c09b9e5e5e452240a5bNThe role of electrical energy storage in a future sustainable electricity gridVan Staveren, R.J.M.Herder, P.M. (mentor); De Vries, L.J. (mentor); Cunningham, S.W. (mentor); Verzijlbergh, R.A. (mentor); Aalbers, R. (mentor)The call for lower CO2 emissions has increased the integration of renewable energy sources in the electricity system. However, these intermittent sources do not follow the cycles of demand and are unpredictable by nature. As the electrical system needs to constantly balance supply and demand, these renewable sources cause problems in the operation of the grid. Electrical energy storage is proposed as a solution for these issues. The research uses an optimization model to test the effects of energy storage on the operation of the electrical system. It shows that the development of storage can be beneficial in systems with a large amount of renewables. The value of storage is mostly dependent on the amount of renewables in the electricity system. Low amounts of renewables offer too few opportunities to load the storage, while high amounts leave it only a few periods to unload. 
Secondly, the value of storage is dependent on the amount of available transmission capacity. In some situations, investments in transmission can be replaced by investments in storage. As the transmission system operator (TSO) is responsible for system balance, it should have the possibility to choose between different investments and pick the optimal one.9electrical energy storage; renewable energy; optimizationEnergy and Industry)uuid:95cfae42d59d4183a3db43f66fc45ee1Dhttp://resolver.tudelft.nl/uuid:95cfae42d59d4183a3db43f66fc45ee1Air freight transportation configurations: Exploration of optimization possibilities in the freight chain within Europe of KLM CargoHemmes, A.F.DTavasszy, L.A. (mentor); Rezaei, J. (mentor); Warnier, M.E. (mentor)tKLM Cargo transports freight by truck from European outstations to the Schiphol Hub. This freight is transported palletized. The impact on KLM Cargo's KPIs of changing the pallet composition or the transportation method (palletized vs. loose) of this export freight flow is unexplored. The objective of this research is to provide insights into these effects.;air cargo; logistic chain; optimization; palletized freight
20150828Transport & Logistics3Systems Engineering, Policy Analysis and Management)uuid:dc2a5b72afe04fd99a12a804a855408aDhttp://resolver.tudelft.nl/uuid:dc2a5b72afe04fd99a12a804a855408a[Implications of dredge mine design on mine optimizations and discussing possible approaches
Bijmolt, M.J.+Benndorf, J. (mentor); Wambeke, T. (mentor)5The development of dredging as a major player for surface mine applications has led Royal IHC, a large equipment supplier and consultant for dredging and mining operations, and the TU Delft to work on more advanced optimization techniques for the design of dredge mines. Three implications of the design of a dredge mine were found to be crucial for optimizations, namely: 1) depth control, 2) mining direction and 3) creation of multiple ponds. The conventional approach for open pit mines, in which a series of nested pits is created to determine an optimal mining sequence, was tested using the core module of Whittle and proved not to be readily applicable to dredge mines, because 1) multiple ponds may be created, 2) the depth to be mined for a certain area changes in time and 3) the nested pits expand randomly towards high-graded zones. A new method for optimizing dredge mines was introduced as a second approach, which determines an ultimate depth per stacked block model, based on the cumulative values, and finds an optimal route for a pond through these stacked blocks by using an adapted version of the Nearest Neighbour algorithm. Four limitations of this approach are recognized: 1) the depth difference between stacked blocks could become impractical, 2) full utilization of the field is not possible because the route may reach a premature dead end or enclose a group of non-mined blocks, 3) the blocks have to meet the same length and width requirements as a pond, thereby not incorporating the accuracy of the data, and 4) it lacks the functionality to mine the area in layers. Project Alpha indicated that the new approach finds an optimal mine design; however, the long lifetime of the mine (>60 years) results in a low recovery of 65%. Decreasing the lifetime of the mine would result in a higher recovery. 
The conventional approach proved impractical for the design of a dredge mine, as it created multiple thin deposits. The NPV of the worst-case scenario of Whittle turns out to be slightly lower than the NPV of the optimal route determined by the second approach.dredging; mining; optimization
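The Nearest Neighbour routing idea behind the second approach can be illustrated with a plain greedy sketch over block centres. This is a simplified stand-in under assumed inputs, not the adapted version used in the thesis; `nearest_neighbour_route` is a hypothetical helper name.

```python
import numpy as np

def nearest_neighbour_route(coords, start=0):
    """Greedy nearest-neighbour ordering over block centres.

    From the current block, always move to the closest unvisited block.
    This greedy rule can dead-end prematurely or enclose groups of
    non-mined blocks, exactly the limitations the abstract describes.
    """
    coords = np.asarray(coords, dtype=float)
    visited = [start]
    remaining = set(range(len(coords))) - {start}
    while remaining:
        cur = coords[visited[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(coords[i] - cur))
        visited.append(nxt)
        remaining.remove(nxt)
    return visited
```

The thesis adaptation additionally weighs cumulative block values and an ultimate depth per stacked block model, which this sketch omits.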
20140703&Resource Engineering/Mine optimization)uuid:3efae2c3b0924c689dcf4ab0d4cff398Dhttp://resolver.tudelft.nl/uuid:3efae2c3b0924c689dcf4ab0d4cff3984Optical System Optimization Using Genetic AlgorithmsHesamMahmoudiNezhad, N.H.M.N./Thijssen, J.T. (mentor); Bociort, F.B. (mentor)TThe goal of this project is to investigate the performance of Genetic Algorithms (GA) and the influence of their parameters on optical system optimization. We have developed our own code for optical system optimization using the MatLab GA module to accomplish this task. To evaluate the optical part of our code we checked the outcomes of each step against the commercial lens-design software package Zemax. We tested different tuning parameters of the GA. We found that the mutation and crossover parameters are the most critical. Choosing inappropriate values of these parameters causes the optimization routine to never reach a good result, even when increasing the population size and the number of generations to high numbers. As an alternative to GA, we studied the Artificial Bee Colony (ABC) method. This is one of the newest methods for global optimization, which is claimed by some authors to perform better than GA. We combined an existing ABC code with our optical code. According to the results, for the optical system we consider, we found the GA to be superior to the ABC method.0optical system; optimization; genetic algorithms
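The role of the mutation and crossover parameters highlighted above can be seen in a generic real-coded GA sketch. This is not the MatLab GA module used in the thesis; the operators, rates, and the sphere merit function are illustrative assumptions.

```python
import random

def ga_minimize(f, dim, pop_size=40, generations=100,
                crossover_rate=0.8, mutation_rate=0.1,
                mutation_scale=0.3, bounds=(-5.0, 5.0), seed=0):
    """Toy real-coded genetic algorithm with truncation selection.

    `mutation_rate` and `crossover_rate` are the tuning parameters the
    abstract identifies as most critical: too little mutation stalls
    exploration, too much turns the search into a random walk.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]            # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            if rng.random() < crossover_rate:   # uniform crossover
                child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            else:
                child = a[:]
            for i in range(dim):                # Gaussian mutation, clamped
                if rng.random() < mutation_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, mutation_scale)))
            children.append(child)
        pop = elite + children
    return min(pop, key=f)

# sphere merit function as a stand-in for a lens merit function
best = ga_minimize(lambda v: sum(x * x for x in v), dim=3)
```

Because the elite half is carried over unchanged, the best individual never degrades, which makes the effect of the two rates easy to study in isolation.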
20140610Applied SciencesImaging Science & TechnologyApplied Physics)uuid:13dab427a77c476ba352f1cb7cf6a0e1Dhttp://resolver.tudelft.nl/uuid:13dab427a77c476ba352f1cb7cf6a0e14Optimization strategy for conceptual airplane design
Vasseur, P.T.Due to the ever-growing demand for more efficient aircraft, novel aircraft concepts have to be explored. By improving design tools the potential of unconventional configurations can be further studied. This requires improvement of conceptual design tools such that more knowledge can be gathered on alternative solutions as early in the design process as possible. Multidisciplinary design optimization (MDO) can support this process by providing an environment in which the various disciplines can be designed and optimized concurrently, while a certain level of consistency is maintained. An optimization design tool has been created to assess the potential performance gains of novel aircraft configurations. It connects with the Initiator design tool, which is a conceptual design framework. As such, it can also be used as a means to expose any design issues that may exist in the Initiator. With the optimizer tool the following four case studies were performed: a conventional Airbus A320, a forward-swept canard aircraft, a three-surface aircraft and an oval-fuselage aircraft. For this purpose a genetic algorithm, a gradient algorithm and a hybrid genetic algorithm were used. From the case studies it followed that large improvements can be obtained with unconventional aircraft configurations when compared to the initial aircraft design proposed by the Initiator design tool. Up to 20% improvement was found with the three-surface and canard aircraft. The oval-fuselage aircraft could be improved by a solid 10%, while a 5% improvement was obtained with the conventional A320. Among all cases the most contributing factors were the wing position, sweep angle and aspect ratio. There is a tendency towards lower sweep angles due to the positive effect on the weight of the wing and an underestimation of the drag rise. With the forward-swept canard relatively high sweep angles were found, from which it followed that the weight penalty of forward-swept wings is underestimated. 
The sizing routine of the control surfaces is found to be inadequate, since the Initiator derives most parameters directly from the wing and does not properly take into account control and stability requirements. Results have shown that this mainly concerns the sweep and dihedral angles. These sizing issues also affect the static margin. It was found that class II design information was not fed back to the control surface sizing. From the optimization algorithms used, it can be concluded that the gradient algorithm was the least effective, as it had difficulties with the noise. It sometimes stopped prematurely or started oscillating. The genetic algorithm was found to be the best option due to its robustness. It proved to be far less sensitive to noise. Its computational cost could be significantly reduced by applying parallel optimization and using a caching mechanism. The hybrid algorithm was found to be too computationally expensive. The obtained increase in objective value did not outweigh the added cost."optimization; aircraft design; mdo
20140612*Aerospace Design, Integration & OperationsAerospace Structures and Design Methodologies52.009507, 4.360515)uuid:034c13143d9a425abe9c6cf73756f9f1Dhttp://resolver.tudelft.nl/uuid:034c13143d9a425abe9c6cf73756f9f1<Optimal use of the subsurface for ATES systems in busy areasQian, L.iOlsthoorn, T.N. (mentor); Bloemendal, J.M. (mentor); Timmermans, J.S. (mentor); Van Beek, H.J.M. (mentor)@With the incentive to reach the energy saving and CO2 emission reduction targets of the Netherlands, the application of Aquifer Thermal Energy Storage (ATES) is expected to increase sharply during this decade [2]. With limited aboveground and underground space, well arrangement is becoming difficult in busy areas with a rapidly growing number of ATES systems. Master plans were proposed to achieve optimal use of the subsurface, especially in such busy areas. This study aims at improving the robustness of such master plans. A two-stage method is proposed to obtain such robust master plans. It was applied to one of the seven available and investigated master plans, i.e. the Parooldriehoek in Amsterdam. The studied master plan was optimized in the first stage by replacing its design parameters with their best alternatives. In the second stage this so-optimized plan was tested to assess its flexibility to handle climate change and additional future users. As a result, the studied master plan could successfully be lifted to a higher level of robustness compared to the original.<ATES; master plan; arrangement; subsurface use; optimizationWater ManagementWater Resources)uuid:cc25fc3120ac46e09173b8e53ebef400Dhttp://resolver.tudelft.nl/uuid:cc25fc3120ac46e09173b8e53ebef400\Optimization of the operational use of entrance channels based on channel depth requirementsDobrochinski, J.P.H.CVellinga, T. (mentor); De Jong, M. (mentor); Groeneweg, J. (mentor)Y
Large capital and maintenance dredging operations are required to ensure the accessibility of many ports. The expenses associated with the dredging operations can have a significant impact on the finances of these ports. Therefore, considerable attention to the design of the width and depth aspects of access channels is justifiable. This study considered this topic within the framework of an Additional Master Thesis (3-month internship). The objectives of the study are: i) to verify the influence of different processes and sources of uncertainties in the evaluation of minimum depth requirements; and ii) to investigate the advantages and drawbacks of different methods of depth requirement evaluation. The Port of Tubarão (Southeast Brazil) is used as a case study to verify processes and methods. Four different approaches were considered to evaluate depth requirements for the access channel of the Port. These approaches are based on deterministic and/or probabilistic methods, either excluding or including wave influences. The results for the case study indicate that ship motions due to waves have a minor influence on the required channel depth at that location during most of the time. However, in certain wave conditions (not only in terms of wave height, but also wave period and wave direction relative to the manoeuvring ship) vertical ship motions become the dominant issue regarding depth requirements; consequently waves should be included in a practical evaluation over time. In probabilistic approaches more knowledge can be incorporated in the analysis; however, this requires detailed information. The deterministic approach, on the other hand, is simpler to use and gives good insight into the main driving variables. However, the main drawback of deterministic methods is that the reliability of the evaluation cannot be assessed, or that conservative assumptions need to be made. This may be uneconomical. 
The use of a probabilistic method for the case study led to a more optimized use of the channel in terms of accessibility in comparison to the results obtained with the deterministic method. Nevertheless, those results depend largely on the safety factors assumed in the deterministic computations relative to the probability distributions considered in the probabilistic approach. Alternatively, the safety margins can be computed or calibrated for specific cases based on probabilistic calculations. In that case the results of deterministic and probabilistic methods can be similar, ensuring the required reliability of the practical deterministic approach without being excessively restrictive.?depth requirements; access channel; probabilistic; optimizationHydraulic EngineeringPorts and Waterways)uuid:a6f1539bd6b049959bca777a277e1295Dhttp://resolver.tudelft.nl/uuid:a6f1539bd6b049959bca777a277e1295fDevelopment of a Low-Thrust Earth-Centered Transfer Optimizer for the Preliminary Mission Design PhaseBoudestijn, E.]Develop the basis for a TU Delft Astrodynamics Toolbox (Tudat)-based software tool that comprises the fundamental functionalities required to optimize low-thrust Earth-centered orbit transfer trajectories for the preliminary mission design phase. Motivation: contribution to Tudat and facilitating case studies for the MicroThrust consortium.&space; optimization; Tudat; low-thrustAstrodynamics & Space Missions)uuid:d1d56fecc63a4920b961c5ef0244588cDhttp://resolver.tudelft.nl/uuid:d1d56fecc63a4920b961c5ef0244588cFreeform Follows FunctionsSmidt, D.M._Borgart, A. (mentor); De Ruiter, P. (mentor); Sonneveld, P. (mentor); Bittermann, M.S. (mentor)cThis research is twofold. Firstly, it treats the complexity of design, using a computationally intelligent method to achieve, with regard to a limited set of goals, high-performing designs. 
Secondly, it poses an architectural and structural challenge: to conceptually design a freeform roof framework which integrates structural rigidity and non-standard tessellation.freeform architecture; tessellation; rigidity; structural analysis; complexity; computation; tiling; generative design; parametric design; multi-objective optimization; optimization; performance-based design; Grasshopper&Architectural Engineering + Technology/Design & Technology  Computation & Performance)uuid:519b5492935649148391c39614a2567dDhttp://resolver.tudelft.nl/uuid:519b5492935649148391c39614a2567d:Cost optimal river dike design using probabilistic methodsBischiniotis, K.RKok, M. (mentor); Jonkman, S.N. (mentor); Jommi, C. (mentor); Kanning, W. (mentor)nThis research follows a fully probabilistic approach in order to estimate the optimal design for a river dike cross-section, taking into account the investment costs. From the theory studied, the failure mechanisms that contribute most to the failure of river dikes are identified. These are overflowing, wave overtopping, piping and inner slope stability. The most important design variables of the dike cross-section dimensions are set and, following probabilistic design methods, the probability of failure of many different dike cross-sections is estimated based on the above-mentioned failure mechanisms. The aim of the study is to develop a generic method that automatically estimates the failure probabilities of many river dike cross-sections and gives the one with the least cost, taking into account the boundary conditions and the requirements that are set by the user.river dike; cost optimal; optimization; overflowing; piping; macro-instability; DGeoStability; matlab; cross-section; probabilistic
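A probabilistic failure estimate of the kind the dike study automates can be sketched for a single mechanism (overflowing) with crude Monte Carlo sampling. All numbers here are hypothetical; the thesis combines several mechanisms and couples the failure probabilities to investment cost.

```python
import random

def overflow_failure_probability(crest_level, water_mu, water_sigma,
                                 n_samples=200_000, seed=1):
    """Crude Monte Carlo estimate of P(water level > crest level).

    The water level is modelled as a normal random variable; failure
    by overflowing occurs whenever a sampled level exceeds the crest.
    """
    rng = random.Random(seed)
    failures = sum(
        rng.gauss(water_mu, water_sigma) > crest_level
        for _ in range(n_samples)
    )
    return failures / n_samples

# hypothetical cross-section: crest at 5.0 m, water level ~ N(3.0, 1.0)
pf = overflow_failure_probability(5.0, 3.0, 1.0)
```

Sweeping `crest_level` (and pricing the corresponding dike volume) turns this per-mechanism estimate into the cost-versus-failure-probability trade-off the thesis optimizes.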
20140122 Water Management and engineering)uuid:287de608564e4751b806b59be0505a53Dhttp://resolver.tudelft.nl/uuid:287de608564e4751b806b59be0505a53GConstraint Handling in Lifecycle Optimization Using Ensemble GradientsAlim, M.EJansen, J.D. (mentor); Leeuwenburg, O. (mentor); Egberts, P. (mentor)Constrained optimization is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables themselves or on some function of those variables. This thesis focused on using the Ensemble Optimization method to improve the NPV (Net Present Value) as the objective function of waterflooding a reservoir with an L-shaped sealing fault under constraints. The optimization controls are injection rates for the input-constrained optimization and valve openings for the output-constrained optimization. The constraints are the field injection rate for the input-constrained optimization and the field production rate for the output-constrained optimization. Three Matlab optimization methods were tested, of which the SQP (Sequential Quadratic Programming) method performed the best. For dealing with the constraints, it is better to let the optimizer handle them instead of the simulator. Two ways to help the optimizer achieve better constraint adherence are constraint scaling and improving the quality of the gradients. Having too many variables may lead to a lower objective function due to inaccuracies in the approximate gradients. Regularization (smoothing) can help to improve the objective function in this problem.9constraint; optimization; ensemble; gradients; lifecycle)uuid:d9af64cb5d2c41adaf883acf937b49d7Dhttp://resolver.tudelft.nl/uuid:d9af64cb5d2c41adaf883acf937b49d7CThe Effects of Multi-Criteria Routing on Dynamic Traffic Management Zhang, F.Hoogendoorn, S.P. (mentor); Knoop, V.L. (mentor); Chen, Y.S. (mentor); Goni Ros, B. (mentor); Hajiahmadi, M. (mentor); Wiggenraad, P.B.L. 
(mentor)^For the degree of Master of Science in Transport & Planning at Delft University of Technology.ODTM; green traffic; emission; dynasmartP; multi-criteria routing; optimization
20131127Transport and Planning)uuid:ae68639fc2314750909876cb029f7227Dhttp://resolver.tudelft.nl/uuid:ae68639fc2314750909876cb029f7227%Parametric Massing Optimization ToolsChristodoulou, A.Van den Dobbelsteen, A. (mentor); Coenders, J. (mentor); Rolvink, A. (mentor); Van den Ham, E. (mentor); Den Hollander, J.P. (mentor))The separation between architect and engineer is a relatively recent event, in comparison to the long history of human constructions. In modern times the separation of the two professions and the involvement of engineers in later design stages has proved problematic, because of the large effort needed to make changes in later design stages. On the other hand, the rising importance of the engineering and financial objectives that building projects have to meet calls for a more integrated design approach from the very early design stages. Contemporary parametric tools offer the possibility to enhance multidisciplinary communication by providing the ability to quickly extract needed values from preliminary design geometries (or 'massings') and assess them through properly defined evaluation scripts. This thesis investigated this prospect, focusing on the aspect of energy demand, which emerges as a central design consideration in contemporary architecture. The thesis report identified the main objectives that would serve as fitness values for its assessment and optimization systems, including in them the main parameters of influence for each of these objectives. These objectives have been the minimization of solar gains, annual heating and cooling demand, annual total energy demand per GFA, annual total energy demand per NFA, and embodied + operational (for 1, 10, 50 years) CO2 emissions. The choice of the optimization objective, and thus the optimization system that has to be set up to assess it, proved to have great influence on the optimization process and results. 
Because of that, this thesis concluded that this is a point that has to be considered carefully, according to the design's priorities, to find out which specific objective is set as the fitness value for each design project. That is because an extension of the optimization into unneeded areas might diminish the accuracy of the results and increase the computational demand needed. To support these assessment and optimization systems, a parametric toolbox has been developed, named MEOtoolbox (MEO derived from the initials of the words Massing Energy Optimization). The components developed mainly aimed to facilitate the calculation of the annual demand for heating and cooling, using the quasi-steady-state method for energy demand calculations described in the ISO 13790 international standard. The MEOtoolbox will be made available to download after the end of this thesis project, through MEOtoolbox.blogspot.com. Possible design scenarios where the MEOtoolbox could be particularly useful have been outlined through design dilemmas that also formed the case studies of this thesis. To validate the results of these case studies, relevant results have beforehand been compared to results of similar studies and software. The design case studies have investigated the effect of tilting the facades of a recreation centre in Paris (France), and the effects of orientation and of self-shading in a design of a high-rise building for the European Union in Brussels (Belgium). The study showed that: Tilting a south-facing facade downwards, in Paris, can cut the solar load in half during summer, while not greatly reducing the solar load in winter. For the climate of Brussels, the maximum effect that orientation could have, for the particular design geometry, was an increase of 5% in the annual cooling demand and 1% in the annual heating demand. 
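The quasi-steady-state heating balance of ISO 13790 referenced above is, as commonly stated for the monthly method (the exact symbol set in the toolbox may differ):

```latex
Q_{H,nd} = Q_{H,ht} - \eta_{H,gn}\, Q_{H,gn},
\qquad
Q_{H,ht} = \left( H_{tr} + H_{ve} \right)
           \left( \theta_{int,set,H} - \theta_{e} \right) t
```

Here $Q_{H,nd}$ is the heating need, $Q_{H,ht}$ the total heat transfer by transmission ($H_{tr}$) and ventilation ($H_{ve}$) over the month of length $t$, $\theta_{int,set,H}$ and $\theta_{e}$ the set-point and external temperatures, and $\eta_{H,gn}$ the utilization factor applied to the solar and internal gains $Q_{H,gn}$.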
Shifting building geometries in order to self-shade proved to be an effective way of reducing cooling demand without greatly increasing the heating demand. The two case studies also exemplified some of the additional benefits and shortcomings of the parametric tools. Among the advantages, it has been shown how design choices can be visually supported, forming arguments for a specific design decision. On the shortcomings, the unavailability of tools to assess the multiplicity of parameters that a designer might consider, and the sensitivity of the results to certain parameters (which could, if not set properly, lead to invalid feedback), are issues that have to be addressed by parametric design software developers, for example through detailed manuals. For the comparative research, six basic building typologies ('Warehouse', 'Cube', 'Tower', 'Caterpillar', 'Fence', 'Slab') were compared with regard to their operational energy per area in different climates and with different glass percentages in the facades. The thesis concluded that: For all the typologies studied, with the absence of external shading and for the glass percentages studied, cooling demand seems to be more critical for the determination of the optimal energy massing, due to its greater fluctuation depending on the typology. As far as the absolute energy demand values are concerned, location seems to be the most influential parameter, followed by glass percentage. Orientation and programmatic function seem to have much less influence on the absolute value of the energy demand of the typologies. As far as the ratio between the typologies is concerned, the switch of the assessment value from Energy per GFA to Energy per NFA strongly influences the energy demand per area ratio between the typologies, as spaces with less rentable space often seem to be good energy solutions. 
Minimizing the expected Energy/NFA gives different results than Energy/GFA, also taking space efficiency into account. Since NFA is usually a primary goal for construction and real-estate companies, it is a realistic aim to try to minimize energy costs to cover a specific programmatic NFA demand. Location also largely influences the ratio between typologies, which seems to be similar in locations with a similar ratio between the needs in heating and cooling. The typology study showed that the energy per NFA can be reduced on the order of 40% by selecting an optimal typology for the climate of the Netherlands. For the climate of Amsterdam, and for the facade and structure characteristics employed in the analysis, embodied energy seems to correspond, roughly, to 10 years of operational energy for all of the typologies. The fact that this was the result for all the typologies studied suggests that it could potentially be used as a rule of thumb when assessing the importance of embodied energy for a specific project, depending on its expected functional lifetime.Uparametric; optimization; energy performance; built environment; building engineeringBuilding Technology & Physics)uuid:1e5ca95ddf7344aab856855c142d84efDhttp://resolver.tudelft.nl/uuid:1e5ca95ddf7344aab856855c142d84ef;Improving the economic performance of AkzoNobel's EVB plantBorren, A.J.]Herder, P.M. (mentor); Lukszo, Z. (mentor); Stougie, L. (mentor); De Bruijne, M.L.C. (mentor)AkzoNobel Energie Voorzienings Bedrijf (EVB, Energy Supply Company) is located at the Botlek business park in Rotterdam. EVB produces and supplies energy and utilities to other production plants at the Botlek site. Due to the intertwined value chain and the reuse of each other's residual and waste products, the situation at the Botlek is complicated. This report describes the development of a decision support model that contributes to an improved economic efficiency of AkzoNobel's EVB plant. 
The decision support model calculates the optimal production settings that minimize the variable costs and assure that at all times the critical operational conditions are met. By comparing the results of the optimization model with the base case model, it can be concluded that there is a savings potential of more than 6% of the variable costs of the EVB plant. From the optimized production settings a pattern is distinguished. This pattern is translated into a set of operational rules that can be applied to the EVB plant to realize the savings potential. The next step is to carefully analyze the consequences of the operational rules for the operations and the implications for customers. Only then can the new operational rules be incorporated and the savings potential be realized.Loptimization; decision support model; chemical industry; economic efficiency
20140301Energy & IndustrySEPAM)uuid:597b318ca1af4fde865f4422f548336bDhttp://resolver.tudelft.nl/uuid:597b318ca1af4fde865f4422f548336b8ORM Optimization through Automatic Prefetching in WebDSLGersen, C.M./Groenewegen, D.M. (mentor); Visser, E. (mentor)Object-Relational Mapping (ORM) frameworks can be used to fetch entities from a relational database. The entities that are referenced through properties are normally not fetched initially; instead, they are fetched automatically by the ORM framework when they are used by the application. This is called lazy fetching and can result in many queries, causing overhead. The number of queries can be reduced by prefetching multiple entities at once. There are two types of prefetching techniques, static and dynamic. Static techniques perform optimization during compilation and dynamic techniques collect information during runtime in order to perform prefetching. Multiple static prefetching techniques are implemented in WebDSL that all use the same static code analysis; however, they generate different queries. The static analysis determines the entities that are going to be used and should be prefetched. These static techniques are compared to the dynamic techniques already present inside the Hibernate ORM framework. The evaluation is performed using the OO7 benchmark and complete WebDSL applications. The results of the OO7 benchmark show a response time improvement of up to 69% over lazy fetching. On complete web applications some of the static techniques implemented in WebDSL improve the performance on average; however, the performance may be improved further using a more fine-grained method of choosing an optimization technique.optimization; prefetching; ORM; DSL; database)uuid:7867b8d128df49feb0027e6b8367bda4Dhttp://resolver.tudelft.nl/uuid:7867b8d128df49feb0027e6b8367bda4]A predictive sourcing model for multi Export Credit Agency financed large industrial projectsJansen, P.R.KCunningham, S.W. (mentor); Storm, S.T.H. 
(mentor); Thissen, W.A.H. (mentor)CB&I is experiencing an issue in a new project to be executed in Russia, named NKNK. Despite the rich experience CB&I has with projects, there is a continuous struggle with the sourcing process in projects that involve financing by multiple export credit agencies. The issue at stake is that CB&I does not know beforehand in which countries it is most likely to source its equipment to achieve the lowest possible sourcing costs. However, the budgets available in countries will be set in the inception phase of a project. A preliminary estimation method is needed to determine the amount of budget needed in multiple countries, in order to increase the probability of minimizing total sourcing costs. In order to accomplish this, a new cost estimation methodology is needed. This combines strategic sourcing theory, descriptive statistics on suppliers, cost differentials among countries of manufacturing, macroeconomic theory, the role of export credit agencies in trade finance, conventional cost estimation methods, linear optimization, and Monte Carlo simulations. The importance of strategic sourcing is underpinned in this thesis. Theoretically optimal sourcing strategies are suggested on the basis of the level of perceived competition. The perceived level of competition within different industries is acquired through questionnaires with industry experts. The suggested sourcing strategies are tested on their practical applicability in large industrial projects. It turns out that there are serious limitations in applying multiple sourcing strategies, due to the nature of the highly customized equipment needed in these projects. Predominantly, single sourcing strategies are used, in which a number of suppliers are asked to bid. It is shown, through a linear regression analysis, that there is a significant positive correlation between the perceived level of competition and the number of suppliers asked to bid. 
Descriptive statistics on suppliers involve, per equipment type (more formally known as purchase order category), the number of suppliers selected and their most likely country of manufacturing. It is discussed that there are multiple restrictions in selecting potential suppliers for a project. Firstly, suppliers can only be selected and invited to bid if they are listed in an Approved Vendor List. Secondly, ECA-involved financing limits the budget available in each country to a certain extent; therefore, selecting suppliers in a country where no budget is likely to be available is a waste of effort. Thirdly, the increasing administrative burden of selecting larger numbers of suppliers poses limitations. Through a comparison of descriptive statistics on suppliers in two very similar projects, but with different project contexts, the effects of these limitations are determined. It is hypothesized that there are sourcing cost differences among countries for particular purchase order categories. Through a literature review, macroeconomic factors that could explain these cost differentials are determined. These are categorized into economic, infrastructural, labor, supply-based, and political factors. For each macroeconomic category, indicators are selected to represent it. A total of twelve indicators per country is reduced to two factor scores per country through a dimension reduction technique (principal component analysis). Based on quotations submitted by suppliers for a project completed in the recent past, significant cost differentials among countries are determined using categorical variables in a linear regression. A statistical refinement has been done to place countries in cost categories. Factor scores per country and descriptive statistics on suppliers are used to substantiate these cost rankings.
Combining cost differentials, macroeconomic indicators, and descriptive statistics proved to be a valuable tool to determine in which country one is most likely to receive the least expensive quotations. The role of export credit agencies (ECAs) in project finance is explored through a literature review. ECAs cover political and commercial risks for exporters and credit-providing entities. ECAs are heterogeneous and there is no definitive model for ECAs. For terms associated with project finance (medium to long-term), the most widely used mechanism by ECAs is buyer credit. ECAs are involved by issuing insurance for defaults directly to the exporter's bank. ECAs are also involved in buyer credit by offering a pre-completion risk facility. A recourse agreement is included, meaning defaults caused by the exporter can be reclaimed from the exporter and disbursed to the lending bank. To quantitatively compare differences in terms and conditions of ECAs, a new methodology is developed in this thesis. This methodology involves a discounted Interest Rate Coefficient, which incorporates ECA premiums rolled over into the loan in the financing period, and the terms and conditions involved in the repayment period. Through a questionnaire, the terms and conditions applicable to the NKNK project are acquired, which are mainly budgetary constraints, insurance premiums, and interest rates. Combining the results of the questionnaire and the interest rate coefficient, the necessary inputs are obtained for linear optimization and Monte Carlo simulations. The basis of the newly developed preliminary sourcing cost estimation methodology is a sourcing allocation table, which can be used as a direct input in a linear optimization model developed in line with this thesis. The methodology starts with listing all purchase orders for a project in the sourcing allocation table.
Next, it is evaluated which data is readily available with respect to suppliers, supplier countries, quotation values, and purchase order value estimates. Data on suppliers and supplier countries which is not readily available is estimated per purchase order category, based on the descriptive statistics on the number of potential suppliers and their distribution among countries. For purchase orders for which no quotations or estimates are available, conventional estimation techniques are used. The order-of-magnitude method is used on a reference project, which is indexed to accommodate the inflationary impact of time. Dummy quotations are generated to fill in the missing data on suppliers, their countries, and quotation values. These dummy quotations take significant cost differentials among countries per purchase order category into account. In these quotations, values are randomly generated according to the average spread of quotation values, using a uniform distribution. Trade finance estimates are also included in the sourcing allocation table. Now the sourcing allocation table contains, based on live data and dummy quotations, for each purchase order a number of suppliers, their country of manufacturing, and quotation values. As there are numerous randomly generated parameters, there is no definitive optimized value. Rather, there is a range of possible outcomes, determined by running a Monte Carlo simulation with the linear optimization model. The outputs of these simulations are a probability distribution of the total optimized value, a probability distribution of the expenditures within each country, and an average distribution of ECA budgetary flows towards sourcing countries. The new methodology for preliminary estimation of sourcing costs is seen by CB&I as a valuable tool to determine in an early phase of the project where budgets are most likely needed. This allows ECA budgets to be set properly, to increase the probability of minimizing sourcing costs.
The first results have already been presented to the client, who was impressed with the result. It gives a clear graphical representation of the estimated total costs, which budgets are needed in which countries, and where the budgets are spent. Equally important, it shows the uncertainty in all these estimates through probability distributions. In addition, this tool allows easy identification of the cost impact of different scenarios, such as exploring the cost effect of excluding budget from a certain ECA country. optimization; Export Credit Agency; sourcing strategy; sourcing cost
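The preliminary estimation method described in this record couples randomly generated dummy quotations with an optimization step inside a Monte Carlo loop. A stdlib-only sketch of that idea follows; the purchase orders, quotation values, country budgets, and spread are all invented for illustration and are not taken from the thesis, and the exhaustive search stands in for the linear optimization model.

```python
import itertools
import random

def optimal_allocation(quotes, budgets):
    """Exhaustively pick one supplier per purchase order so that
    per-country spending stays within the ECA budgets; return the
    minimum total cost (None if no feasible assignment exists)."""
    best = None
    for choice in itertools.product(*quotes):
        spend = {}
        for country, price in choice:
            spend[country] = spend.get(country, 0.0) + price
        if all(spend.get(c, 0.0) <= b for c, b in budgets.items()):
            total = sum(p for _, p in choice)
            if best is None or total < best:
                best = total
    return best

def monte_carlo(base_quotes, spread, budgets, trials, seed=0):
    """Perturb every quotation uniformly within +/- spread and record
    the optimized total of each trial, yielding a cost distribution."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        quotes = [[(c, p * rng.uniform(1 - spread, 1 + spread))
                   for c, p in po] for po in base_quotes]
        result = optimal_allocation(quotes, budgets)
        if result is not None:
            totals.append(result)
    return totals

# Hypothetical data: three purchase orders, candidate quotations per country.
base_quotes = [
    [("NL", 100.0), ("DE", 95.0)],
    [("NL", 80.0), ("RU", 70.0)],
    [("DE", 120.0), ("RU", 110.0)],
]
budgets = {"NL": 150.0, "DE": 250.0, "RU": 120.0}
totals = monte_carlo(base_quotes, 0.10, budgets, trials=500)
print(min(totals), sum(totals) / len(totals), max(totals))
```

The resulting list of optimized totals plays the role of the probability distribution of the total optimized value described above.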
2013-08-13 Management of Technology )uuid:da349f17a65c482e8d949ff8a41c66d6 http://resolver.tudelft.nl/uuid:da349f17a65c482e8d949ff8a41c66d6 Waveform Optimization for Compressive-Sensing Radar Systems Zegov, L.T. Leus, G. (mentor); Pribic, R. (mentor) Compressive sensing (CS) provides a new paradigm in data acquisition and signal processing in radar, based on the assumptions of sparsity of an unknown radar scene and the incoherence of the transmitted signal. The resolution of the conventional pulse-compression radar is foreseen to be improved by the implementation of CS. An unknown sparse radar scene can then be recovered through CS with a high probability, even in the case of an underdetermined linear system. However, the theoretical framework of CS radar has to be verified in an actual radar system, accounting for practical system aspects such as the signal bandwidth, ease of generation and acquisition, system complexity, etc. In this thesis, we investigate linear frequency modulated (LFM), Alltop and Björck waveforms, which show theoretically favorable properties in a CS-radar system, in the basic radar problem of range-only estimation. The aforementioned waveforms were investigated through a model of a digital radar system, from signal generation in the transmitter to sparse signal recovery in the receiver. The capabilities of the CS-radar versus the conventional pulse-compression radar were demonstrated, and the Alltop and Björck sequences are proven to outperform the commonly used LFM waveform in typical CS-radar scenarios. compressive sensing; radar; waveform; optimization Circuits and Systems )uuid:5baa10596a254bfc8328ae6fda18c598 http://resolver.tudelft.nl/uuid:5baa10596a254bfc8328ae6fda18c598 Efficiency analysis and design methodology of hybrid propulsion systems Kwasieckyj, B. Stapersma, D. (mentor) A hybrid propulsion system features both a diesel engine and an electric motor for propulsion.
The degrees of freedom in power generation raise the question of how the division of power can be optimised in such a way that the engines are running at their optimal fuel efficiency. A generalised method is developed to determine the power generation for all operating modes of a vessel, with a focus on the lowest fuel consumption of the diesel engines. hybrid propulsion; ship; Taguchi; orthogonal array; optimization
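The power-split question raised in this record can be illustrated with a small grid search. The specific fuel consumption curve, engine ratings, and the 10% electric-path conversion loss below are invented for illustration; they are not the thesis model.

```python
def sfc(load_fraction):
    """Toy specific fuel consumption curve (g/kWh): engines are assumed
    most efficient around 80% load, worse at part load."""
    return 190.0 + 120.0 * (load_fraction - 0.8) ** 2

def fuel_rate(power_kw, rated_kw):
    """Fuel mass flow (kg/h) of an engine delivering power_kw."""
    if power_kw <= 0:
        return 0.0
    return sfc(power_kw / rated_kw) * power_kw / 1000.0

def best_split(demand_kw, diesel_rated, electric_rated, steps=200):
    """Grid-search the share of demand carried by the diesel engine,
    minimizing total fuel rate over both power paths."""
    best_rate, best_diesel = float("inf"), None
    for i in range(steps + 1):
        p_diesel = demand_kw * i / steps
        p_elec = demand_kw - p_diesel
        if p_diesel > diesel_rated or p_elec > electric_rated:
            continue
        # electric path: generator fuel plus an assumed 10% conversion loss
        total = fuel_rate(p_diesel, diesel_rated) + fuel_rate(p_elec * 1.1, electric_rated)
        if total < best_rate:
            best_rate, best_diesel = total, p_diesel
    return best_rate, best_diesel

rate, p_diesel = best_split(2000.0, diesel_rated=3000.0, electric_rated=1500.0)
print(round(rate, 1), round(p_diesel))
```

With these invented curves the conversion loss makes the pure-diesel mode cheapest at this operating point; other curves or operating modes would shift the optimum.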
2013-03-23 Ship Design, Production and Operation (SDPO) )uuid:49a3bfaa012f4e3bac34b8d9a760c4fc http://resolver.tudelft.nl/uuid:49a3bfaa012f4e3bac34b8d9a760c4fc Porting GCC to a Clustered VLIW Processor Shankar, A. Turjan, A. (mentor); Molnos, A.M. (mentor) A clustered architecture is a viable design choice when aiming to increase the performance of a VLIW processor while avoiding the hardware complexity and increased access times associated with a centralized register file. However, this places additional responsibility on the compiler: the production of an efficient cluster assignment. In this thesis, we describe how we ported the GNU Compiler Collection (GCC), a popular free compiler, to a clustered version of the Embedded Vector Processor (EVP), a VLIW vector processor being developed at ST-Ericsson. The aim of this thesis project was to produce a prototype GCC backend for the clustered EVP, and to benchmark it. In this report we describe our implementation in detail, presenting an approach that tackles the problem of clustering, commenting upon existing algorithms, and choosing and improving upon one of them while designing a GCC RTL optimization pass for cluster assignment. We visually inspected our prototype for functional correctness, and benchmarked it against the original EVP design and the corresponding production compiler. Our measurements show a 27% speedup in compute-intensive components of the EVP's WCDMA workload. GCC; cluster; assignment; VLIW; processor; compiler; optimization
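The cluster-assignment problem described in this record can be sketched with a simple greedy heuristic: place each operation on the cluster that already holds most of its operands, and count the inter-cluster copies this induces. This is an illustrative sketch of the general technique, not the RTL pass implemented in the thesis, and the tiny dataflow graph is invented.

```python
def assign_clusters(ops, n_clusters=2):
    """Greedy cluster assignment.
    ops: list of (dest, [srcs]) in dataflow order.
    Returns (value -> cluster mapping, number of inter-cluster copies)."""
    home = {}                  # value -> cluster currently holding it
    load = [0] * n_clusters    # operations placed per cluster
    copies = 0
    for dest, srcs in ops:
        # score each cluster by how many source operands it already holds,
        # breaking ties toward the less loaded cluster
        scores = [sum(1 for s in srcs if home.get(s) == c)
                  for c in range(n_clusters)]
        best = max(range(n_clusters), key=lambda c: (scores[c], -load[c]))
        copies += sum(1 for s in srcs if s in home and home[s] != best)
        home[dest] = best
        load[best] += 1
    return home, copies

# Hypothetical dataflow: e = (a op b) op (a op nothing), etc.
ops = [("a", []), ("b", []), ("c", ["a", "b"]),
       ("d", ["a"]), ("e", ["c", "d"])]
assignment, copies = assign_clusters(ops)
print(assignment, copies)
```

Real cluster-assignment passes also account for issue-slot pressure and scheduling freedom, which this sketch ignores.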
2016-02-01 Computer Engineering )uuid:02468c775c644df89a241ed7ad9d1408 http://resolver.tudelft.nl/uuid:02468c775c644df89a241ed7ad9d1408 Optimization of Space Trajectories Including Multiple Gravity Assists and Deep Space Maneuvers Musegaas, P. The optimization of high-thrust interplanetary trajectories continues to draw attention. Especially when both Multiple Gravity Assists (MGAs) and Deep Space Maneuvers (DSMs) are included, the optimization is typically very difficult. The search space may be characterized by a large number of minima and is furthermore very sensitive to small deviations in the decision vector. Various options are available to model these high-thrust trajectories. The trajectory may be modeled using a simple MGA trajectory model as well as using models including DSMs. Both a position and a velocity formulation variant may be adopted, and either unpowered or powered swing-bys may be used. These trajectory models were implemented to study the effect of both DSMs and powered swing-bys. Especially the option to perform DSMs proved to be vital for obtaining good trajectories. Powered swing-bys may also improve the efficiency of the trajectory. The velocity formulation variant proved to be much easier to optimize than the position formulation model. By analyzing the sensitivity and dependency of the various parameters in both models, a proposal for an even better trajectory model is made. Also regarding the optimization of these trajectories, many options are available. Metaheuristics in particular have proven to be very successful in optimizing these trajectories. Various studies have shown the importance of proper tuning of the basic versions of these metaheuristics, which is however often overlooked. This study applied a very rigorous tuning scheme to find the optimal settings for DE, GA and PSO. The results clearly reveal the superiority of DE above the other methods.
The tuned variants of DE outperformed other settings by one or multiple orders of magnitude, revealing the importance of this tuning scheme. The tuned variants of DE helped to improve a large number of instances in the Global Trajectory Optimization Problem (GTOP) database of ESA. The efficiency of these DE variants was also shown to be competitive with, and sometimes better than, the best algorithms encountered in the literature. Deep Space Maneuver; optimization; GTOP; high-thrust; interplanetary; trajectory
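The DE algorithm whose tuning this record emphasizes can be written compactly. Below is a minimal DE/rand/1/bin implementation applied to a toy sphere objective; the population size and the F and CR values are common illustrative defaults, not the tuned settings from the study.

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.7, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin: for each target vector, mutate three
    distinct others (x_r1 + F*(x_r2 - x_r3)), apply binomial crossover,
    and keep the trial vector if it does not worsen the objective."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(np_):
            r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clip to bounds
            fc = f(trial)
            if fc <= cost[i]:
                pop[i], cost[i] = trial, fc
    best = min(range(np_), key=cost.__getitem__)
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
x, fx = differential_evolution(sphere, [(-5, 5)] * 3)
print(fx)
```

The sensitivity of convergence to F, CR, and the population size is exactly what makes the tuning effort described above pay off on hard trajectory landscapes.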
2013-02-26 )uuid:f928487ecf6e4658a4157d7290ea83f2 http://resolver.tudelft.nl/uuid:f928487ecf6e4658a4157d7290ea83f2 Strategy for multi-year road maintenance on highways (Strategie voor meerjarig wegonderhoud op autosnelwegen) Backx, J.J.A.M. Sanders, F.M. (mentor); Verlaan, J.G. (mentor); Zuurbier, F.S. (mentor) In this research a decision model is designed for the optimal planning of maintenance of road constructions on a particular section of a highway over a longer period. Optimal refers to the minimization of construction costs while maintaining the required quality of the product and performance conditions. The optimization problem consists of assigning maintenance actions (A) to segments (S) over a planning horizon (T) equal to the contract for the multi-year road maintenance. maintenance; optimization; planning
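The assignment of maintenance actions over a planning horizon described in this record can be illustrated with a brute-force toy version for a single segment. The actions, costs, condition dynamics, and quality bound below are all invented for illustration; the thesis model covers many segments and a contract-length horizon.

```python
import itertools

# (cost, effect on condition) per action; "none" lets the road degrade.
ACTIONS = {"none": (0, -1), "minor": (40, 1), "major": (100, 4)}
QUALITY_MIN = 5   # required minimum condition at all times
HORIZON = 6       # planning periods

def plan(start_condition=8, cond_max=10):
    """Enumerate every action sequence for one segment; keep the
    cheapest one that never violates the quality bound."""
    best_cost, best_plan = None, None
    for seq in itertools.product(ACTIONS, repeat=HORIZON):
        cond, cost, feasible = start_condition, 0, True
        for action in seq:
            c, dc = ACTIONS[action]
            cost += c
            cond = min(cond + dc, cond_max)
            if cond < QUALITY_MIN:
                feasible = False
                break
        if feasible and (best_cost is None or cost < best_cost):
            best_cost, best_plan = cost, seq
    return best_cost, best_plan

cost, actions = plan()
print(cost, actions)
```

For realistic instance sizes (many segments, long horizons) this enumeration explodes, which is why the thesis formulates the problem as an optimization model instead.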
2012-10-27 )uuid:c032128b6759443488ab67b9eeec4e0b http://resolver.tudelft.nl/uuid:c032128b6759443488ab67b9eeec4e0b Dynamic in situ calibration of an instrumented treadmill for systems identification and parameter estimation
Amirtha, T.R. Sloot, L. (mentor); De Groot, J. (mentor); De Vlugt, E. (mentor); Van Der Helm, F.C.T. (mentor) Existing recalibration methods for instrumented treadmills have mainly been performed when the instrumented treadmill has been in static operation, i.e. the belts are not running. The effect of recalibrating during experimental operation, i.e. while the belts are running, on the ground reaction force (GRF) and center of pressure (CoP) accuracy has not yet been studied, due to the difficulty of obtaining a range of test points across the treading area during experimental operation. Therefore, the effect of the dynamics of the treadmill's moving parts on the recalibration process is not known. In addition, the GRF and CoP accuracy requirements are not known for systems identification and parameter estimation (SIPE) experiments on instrumented treadmills. Here, a technique is described to comprehensively recalibrate a split-belt, instrumented treadmill, while it operates under experimental conditions, for SIPE of the lower extremity dynamics during gait. Recalibration matrices are created with datasets that were generated under static and experimental treadmill operation and are assessed on validation datasets. No relationship was determined between the treadmill's dynamics and the GRF and CoP errors. The dynamic recalibration resulted in lower root-mean-square GRF and CoP errors than the static recalibration did and was more rapid to calculate. The dynamic recalibration matrix was additionally validated by performing SIPE of a load on the treadmill, which resulted in a relative error of 2%. force platform; force plate; calibration; center of pressure; ground reaction force; gait; treadmill; accuracy; motion analysis; optimization; system identification; parameter estimation
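The recalibration idea in this record, mapping raw platform readings onto known reference loads, can be sketched for a single channel with ordinary least squares. The thesis builds full recalibration matrices over all force and moment channels; the gain, offset, and load values here are hypothetical.

```python
def fit_gain_offset(raw, reference):
    """Ordinary least squares fit of reference = gain * raw + offset,
    i.e. a one-channel recalibration of a force platform reading."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in raw)
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, reference))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# Hypothetical readings: the platform under-reads by ~2% with a 5 N offset.
reference = [100.0, 200.0, 300.0, 400.0, 500.0]
raw = [0.98 * y + 5.0 for y in reference]
gain, offset = fit_gain_offset(raw, reference)
corrected = [gain * x + offset for x in raw]
print(round(gain, 4), round(offset, 2))
```

Applying the fitted gain and offset to the raw readings recovers the reference loads; a full recalibration matrix generalizes this to coupled force and moment channels.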
2012-10-29 BioMechanical Engineering Biomedical Engineering )uuid:3b1c6432cfbf4fec894b9f6b870015f5 http://resolver.tudelft.nl/uuid:3b1c6432cfbf4fec894b9f6b870015f5 Wing Shape Multidisciplinary Design Optimization Mariens, J.
Multidisciplinary design optimizations have shown great benefits for aerospace applications in the past, especially in the last decades with the advent of high-speed computing. Still, computational time is limiting, so the desire for models with a high level of fidelity cannot always be fulfilled. As a consequence, fidelity is often sacrificed in order to keep the computing time of the optimization within limits. A compromise is always required to select proper tools for an optimization problem. In this final thesis work, the differences between existing weight modeling techniques are investigated. Secondly, the results of using different weight modeling techniques in multidisciplinary design optimization of aircraft wings are compared. The aircraft maximum take-off weight was selected as the objective function. The wing configurations of a generic turboprop and turbofan passenger aircraft were considered for these optimizations. This should aid future studies of wing shapes in early design stages to select a proper weight prediction technique for a given case. A quasi-three-dimensional aerodynamic solver was developed to calculate the wing aerodynamic characteristics. Various statistical prediction methods (low level of fidelity) and a quasi-analytical method (medium level of fidelity) are used to estimate the structural wing weight. Furthermore, the optimal wing shape was found using a local optimization algorithm and is compared to the results found using a novel optimization algorithm to find the global optimum. The quasi-three-dimensional aerodynamic solver was validated using experimental data and other available aerodynamic tools. Compared to the results generated by other tools, the developed solver has a wider range of validity. Most important of all, it is up to 10 times faster and the results show good agreement with other data. Several test cases were used to prove the robustness and effectiveness of the global optimization algorithm.
A comparison of the different weight estimation methods indicated that the lower-fidelity methods are insensitive to some wing parameters. The results of the optimizations showed that the optimum wing shape is affected by the weight modeling technique used. The use of different weight prediction methods strongly affects the computational times and the convergence history. The global optimization algorithm was able to find the global solution for the wing shape optimization. However, the search for the global optimum comes at a cost: the computational time is significantly larger. wing; shape; optimization; quasi-3D; multidisciplinary; MDO; locsmooth; wing weight prediction; EMWET
2012-08-31 Aircraft Design )uuid:9c81ea6437a4478192472d42cb439e19 http://resolver.tudelft.nl/uuid:9c81ea6437a4478192472d42cb439e19 A Multidisciplinary Optimization of Composite Space Enclosures Koerselman, J.R. Vos, R. (mentor); Brander, T. (mentor) A design methodology for composite space enclosures was generated. As a result, a panel of an electronics housing structure, as part of a general satellite traversing both GEO and LEO, was designed and optimized. A mass saving of 18% was achieved over a conventional aluminum panel, while assuring structural integrity under acceleration loads, avoiding vibrational resonance with other satellite components, allowing electrical conductance, providing sufficient radiation protection from the harsh space environment, and at the same time assuring manufacturability. The optimized structure was composed of layers of carbon fiber composite and tungsten foils. For radiation purposes the layers were placed asymmetrically around the geometric midplane, resulting in shape distortions due to residual thermal stresses from the curing process. These shape distortions were kept to a minimum. The validity of the theoretical models was assessed by means of testing for shape distortions, radiation attenuation, bonding strength and electrical resistivity. The bonding of the tungsten with the prepreg material was found to be problematic, but an improvement in lap shear strength was found with respect to methods proposed in the literature. A chemical etching surface treatment with a reduced etching time of one minute was proposed for the tungsten foils. multidisciplinary; optimization; composite; space; enclosure; tungsten; surface treatment; SIDER; induced shape distortions Design, Integration and Operations of Aircraft and Rotorcraft )uuid:856057f224d44078aa58e0941b5b21c1 http://resolver.tudelft.nl/uuid:856057f224d44078aa58e0941b5b21c1 Numerical Optimization of Hydraulic Fracture Stage Placement in a Gas Shale Reservoir Holt, S. Jansen, J.D.
(mentor); Leeuwenburgh, O. (mentor); Van Bergen, F. (mentor) The upstream oil and gas industry focuses increasingly on unconventional gas resources to maintain the level of its hydrocarbon reserves. To unlock the full potential of gas shale reservoirs, horizontal wells are drilled and active stimulation of the reservoirs, in the form of multi-stage hydraulic fracturing, is performed. This new technique has radically changed the energy future of the United States and is on the forefront of changing it in Europe as well. The hydraulic fracturing treatment is a costly, resource-intensive and potentially environmentally dangerous procedure. The objective of this thesis is to create a realistic and versatile gas shale reservoir model and optimize the placement and number of hydraulic fracture stages along a horizontal well bore, thereby maximizing the production of gas while minimizing the amount of money that is spent to do so. On the basis of the computationally efficient ensemble-based optimization of vertical well placement, an idea coined and investigated by Leeuwenburgh et al. (2010), it is postulated that numerical optimization can aid in finding the optimal placement of hydraulic fracture stages along a horizontal well bore in an equally computationally efficient manner. Three gradient-based optimization algorithms (ensemble-based optimization: EnOpt (Chen, 2008); Simultaneous Perturbation Stochastic Approximation: SPSA (Spall, 1998); and finite difference gradient estimation) that work with continuous variables are used to approximate the gradient. Because hydraulic fracture stage locations in a reservoir simulator are commonly treated as discrete variables (well grid block indices), standard implementations of gradient-based optimization are not applicable for optimal hydraulic fracture stage placement. We propose three distinct variable-parameterizing placement methods to overcome the inherent continuous-to-discrete variable conversion issues.
After the theoretical arguments about the strengths and weaknesses of the proposed optimization routines, both single-well and multiple-well scenario experiments are performed. Good results are obtained from the various experiments, which favor an optimization with the EnOpt algorithm in combination with the fracture stage interval placement method. optimization; hydraulic fracturing; multi-stage; hydraulic fracture; ensemble based optimization; horizontal well; well placement; stimulation; gas shale reservoir; shale gas
Geotechnology )uuid:06252984bcfd49a5941669b948bcc0ff http://resolver.tudelft.nl/uuid:06252984bcfd49a5941669b948bcc0ff An optimization model for Train-Free-Period planning for ProRail based on the maintenance needs of the Dutch railway infrastructure Jenema, A.R. The thesis reports on the Dutch railway infrastructure manager ProRail, the literature study, the determined top 10 of maintenance activities that determine the maintenance schedule, the development of the optimization model that finds such a maintenance schedule, and finally the results and conclusions. ProRail; optimization; maintenance; railway infrastructure )uuid:de8519a20f49471f9b0fe10f7df0132b http://resolver.tudelft.nl/uuid:de8519a20f49471f9b0fe10f7df0132b Project Risk Management Practices: How can the current Project Risk Management practices surrounding medium construction projects be optimized? Souffront, L.F.W.M. Van Beers, C. (mentor); Filippov, S. (mentor); Veeneman, W. (mentor) Multiple instruments and procedures are used within the different project management areas in order to execute projects in an efficient and controllable way. Project Risk Management (PRM) has become over the past years a crucial part of project management practice and is seen by many practitioners as a key factor towards more successful projects. More and more organizations are adopting this practice in an effort to achieve better strategic alignment, increase project success, and optimize the utilization of their resources. The research aims at generating new insights in the field of project risk management. This was done by investigating the main tenets of risk management theory and by comparing them to empirical data gathered through a case study. Several short-, medium-, and long-term recommendations were made based on a Risk Maturity Model. medium project; project risk management; construction sector; optimization; case study
2011-08-19 Technology, Strategy & Entrepreneurship )uuid:9da72fb025d746ffa58f8c490524f297 http://resolver.tudelft.nl/uuid:9da72fb025d746ffa58f8c490524f297 Statistical analysis of newspaper headlines with optimization Jacobs, A.G.M.M. Vallentin, F. (mentor) sparse; optimization; newspapers )uuid:bdda7a3310734a38aabb124367b3a3e1 http://resolver.tudelft.nl/uuid:bdda7a3310734a38aabb124367b3a3e1 Robust ensemble-based multi-objective production optimization: Application to smart wells.
Fonseca, R.M. Jansen, J.D. (mentor); Leeuwenburgh, O. (mentor) Recent improvements in dynamic reservoir modeling have led to an increase in the application of model-based optimization of hydrocarbon-bearing reservoirs. Numerous studies and articles have indicated the possibility of improving reservoir management using these dynamic models, coupled with methods to reduce uncertainties in the static models, to optimize reservoir performance. These studies have focused on maximizing the life-cycle performance of the project. Thus life-cycle optimization is essentially a single-objective optimization problem. In reality, short-term targets usually drive operational decisions. The impact of short-term targets should be included in the optimization to achieve a more realistic solution. The process of optimizing these short-term targets constrained to life-cycle targets is a form of multi-objective optimization. Several methods have been suggested to achieve multi-objective reservoir flooding optimization (Van Essen et al. 2011). These methods have been implemented with the adjoint formulation. This thesis proposes the use of an ensemble-based optimization technique (EnOpt) for multi-objective optimization. The optimization of smart wells or production schedules (inflow control valve (ICV) settings) is the objective of this work. We also propose variations to the existing multi-objective algorithms suggested by Van Essen et al. (2011). We propose the use of the BFGS algorithm to improve the computational efficiency. Undiscounted Net Present Value (NPV) and highly discounted NPV are the long-term and short-term objective functions used in this thesis. We also propose an extension of the optimization functionality to better cope with model uncertainties. This robust ensemble-based multi-objective production optimization framework has been applied and tested on a synthetic reservoir model.
In our test cases, the ensemble-based multi-objective optimization methods achieved a 14.2% increase in the secondary objective at the cost of only a minor decrease, between 0.2 and 0.5%, in the primary objective. optimization )uuid:1de4f520efc742d78266b388451f4d14 http://resolver.tudelft.nl/uuid:1de4f520efc742d78266b388451f4d14 Stochastic Open Pit Design with a Network Flow Algorithm: Application at Escondida Norte, Chile Van Eldert, J. De Ruiter, J.J. (mentor); Dimitrakopoulos, R.G. (mentor) In the optimization of open pit mine design, the Lerchs-Grossmann algorithm is the industry standard, although network flow algorithms are also well suited, efficient, and known. The stochastic version of the conventional (deterministic) network flow algorithm is based on the use of multiple simulated realizations of the ore deposit, thus accounting for geological uncertainty. In comparison, the conventional pit optimization methods use only one estimated or average-type model of the deposit and assume it represents the exact deposit in the ground. The use of multiple scenarios results in the ability to generate risk profiles in terms of both grade and material types for pit designs and production schedules. This thesis focuses on the application of the stochastic maximum flow algorithm for multiple ore processing destinations at the Escondida Norte copper mine, Chile. The case study shows the optimal pushback layout minimising geological risk during the life-of-mine. The limitation of this method is that it uses only a part of the local joint uncertainty of the block grades and material types.
However, it can be extended to account for simulated commodity price forecasts as well as discounting. optimization; open pit; mining; stochastic; maximum flow; mine design Department of Geotechnology Section Resources Engineering )uuid:25446e4d262649a08b0ed9889df343b5 http://resolver.tudelft.nl/uuid:25446e4d262649a08b0ed9889df343b5 Generation costs estimation in the Spanish Mainland Power System from 2011 to 2020 Crisostomo Ramirez, J.D. Ramos, A. (mentor) The electricity sector in Spain had been evolving steadily at an ascending rate since the liberalization in the late 1990s. Demand was expected to keep growing, but it suddenly dropped in 2009, creating an imbalance in the system in terms of demand and available capacity. In addition, the increasing share of renewable energy has also imposed additional pressure on the hydro and thermal technologies, leaving less residual demand for such technologies. The current and expected scenario in the Spanish mainland power system seems to be harder for the ordinary regime technologies in the coming years. A Royal Decree has just been issued to support the domestic coal mines, imposing quotas for coal units using such coal. This work has the purpose of gathering all the regulatory and economic constraints and applying them to estimate the generation costs for the following ten years. The approach to such an extensive task is to apply a regulated cost structure based on fixed and variable costs, already proven in a previous work, as a reference model to contrast the system costs in the mainland power system in Spain. The generation dispatch is done using a traditional approach of unit commitment based on least-cost dispatch, taking into consideration the different constraints to reflect the most plausible behavior of market players. The results are consistent with the costs associated with the different technologies.
Nuclear units are base load during the whole year and CCGT is the technology that balances the system because of demand-generation variations. The most stable technology in terms of cost and production is nuclear, while the technology with the lowest costs is hydro. Coal and CCGT technologies appear to be the most expensive and become the marginal technologies. Regarding the evolution of the generation mix, there are thermal units decommissioned because of aging and the new Industrial Emissions Directive issued by the EU. In addition, an assumption was made of what in reality would happen when the existing thermal units are not being dispatched and the owners decide on closure. New hydro power plants, either under construction or planned to be commissioned, were also included, as well as the additional MW needed as CCGT units in order to keep security of supply in the system. The latter was done mainly to keep the Coverage Index at the minimum level required by the system operator. regulated cost structure; optimization
2011-08-10 Modelling MSc. Engineering and Policy Analysis - EMIN )uuid:d586ee6e4815456187d96ae00bdb739e http://resolver.tudelft.nl/uuid:d586ee6e4815456187d96ae00bdb739e Multidisciplinary Design Optimization in the Conceptual Design Phase: Creating a Conceptual Design of the Blended Wing-Body with the BLISS Optimization Strategy Hendrich, T.J.M. Schroijen, M.J.T. (mentor); Bijl, H. (mentor); Visser, H.G. (mentor); La Rocca, G. (mentor) Traditionally, the aircraft design process is divided into three phases: conceptual, preliminary and detailed design. In each subsequent phase, the fidelity of the analysis tools increases and more and more details of the design geometry are frozen. In each phase a number of design variants is generated, fully analyzing them with the tools available, and then doing trade studies between important design variables to finally choose the best variant. In the past, this approach has shown good results for 'Kansas City' type aircraft, which could be decomposed into different airframe parts with distinct functions, such as wings, tail, engines and fuselage. Each part needs to fulfill its own set of requirements and could be designed and optimized relatively independently from the others. For the new generation of large transport aircraft, such as the Blended Wing-Body (BWB), the traditional design approach is less suited. The Blended Wing-Body, studied by Boeing and many others as a future long-haul transport aircraft concept, is characterized by an integrated airframe, in which the aforementioned parts can no longer be clearly distinguished. The Blended Wing-Body features many and strong interactions between the various design disciplines and airframe subparts. Using the traditional design doctrine, these interactions greatly increase the required design time. Over the past years, Multidisciplinary Design Optimization (MDO) has been considered as an alternative.
Nowadays, in industry the MDO approach is mainly used in the detail design phase and for isolated, well-defined design cases. The goal of this project is to create an MDO framework which can aid the designer in optimizing entire aircraft designs in the conceptual phase. This framework is shaped around the Bi-Level Integrated System Synthesis (BLISS) strategy. This strategy splits the optimization into two levels: a disciplinary level and a system level. Before optimization, BLISS performs a sensitivity analysis to obtain linearized global sensitivities of the design objective and constraints with respect to each of the design variables. Validation is done using three cases: two sample problems from literature with known solutions, and the optimization of a simplified Boeing 747 wing for maximum aerodynamic efficiency using an aerodynamic and a structural model. All three cases were optimized successfully. Finally, as a proof-of-concept for MDO, the framework is required to find a conceptual design of the Blended Wing-Body with minimum structural weight and minimum drag across a given mission, while structural, aerodynamic and performance constraints had to be satisfied. The problem features 5 disciplines, 93 constraints, 110 states and in total 92 design variables. Again, BLISS could converge to a solution, requiring 4 hours per cycle. By tuning the design variables, BLISS managed to converge to a final design in 22 cycles. The final design satisfies all constraints, except for the large local Mach number on the outboard wing. Similar problems were identified in several other Blended Wing-Body studies.
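A full BLISS implementation couples discipline-level optimizations with the system level; as a drastically simplified, hypothetical sketch, one system-level cycle can be pictured as a move-limited step on linearized (finite-difference) sensitivities. The quadratic stand-in objective, step sizes and cycle count below are all assumptions, not the thesis setup.

```python
import numpy as np

def system_cycle(f, x, move_limit=0.1, h=1e-6):
    """One simplified BLISS-style system cycle: estimate global sensitivities
    by central finite differences, then step against the linearized gradient
    within a move limit (a crude trust region)."""
    g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                  for e in np.eye(len(x))])
    step = -move_limit * g / (np.linalg.norm(g) + 1e-12)
    return x + step

# Illustrative smooth objective standing in for the aero/structural drivers;
# its minimum sits at (1.0, -0.5).
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
x = np.zeros(2)
for i in range(200):  # repeated cycles with a shrinking move limit
    x = system_cycle(f, x, move_limit=0.1 * 0.99 ** i)
```

The shrinking move limit plays the role BLISS move limits play in practice: keeping each step inside the region where the linearization is trustworthy.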
The results support BLISS as a viable candidate method for introducing MDO into conceptual design practice. MDO; multidisciplinary; BLISS; Blended Wing Body; design; optimization uuid:f97a2a79a3bf4bccb0a6819e70ccd62b http://resolver.tudelft.nl/uuid:f97a2a79a3bf4bccb0a6819e70ccd62b A new suit for the IJsselmeer: Possibilities for facing the future needs of the lake by means of an optimized dynamic target water level
Talsma, J. Van de Giesen, N.C. (mentor) Introduction and problem definition: The IJsselmeer is located in the center of the Netherlands. Because of its relevance for the Dutch economy and society, it is often addressed as the Wet Heart of the country. Looking into the future, the IJsselmeer is under climate threats. Wetter winters will bring more water into the system while, in combination with sea level rise, gravity discharge to the Waddenzee will decrease. This will generate safety issues. Summers, on the other hand, will be drier, endangering the satisfaction of water demand. Research approach and research question: The goal of the research is to define for the IJsselmeer a dynamic target water level, variable throughout the year, by means of an optimization approach. The optimization uses a single objective function considering dike safety and water demand. This approach was chosen because it follows a different path than the ones mainly used so far to tackle the issue. When management measures alone are not enough to define a climate-proof IJsselmeer, extra measures are taken into consideration: a pumping station at the Afsluitdijk and early storage in March. The main research question asks for an evaluation of the optimization methodology used to define efficient alternatives for the IJsselmeer. The subquestion requires the assessment of the flexibility of the IJsselmeer towards a climate-proof system, and the definition of extra measures when needed. Methodology: The definition of the optimum measures is achieved in several steps. First, the objective of the problem owner is defined. The Dienst IJsselmeergebied is the only problem owner; its interests are safety and water demand satisfaction. Then indicators are derived from the objectives and merged into the objective function. Classes of measures are selected, and a model of the system is designed for their evaluation.
Finally, the optimization problems are defined in order to design the optimum alternatives. Results: A different planning of the target water level alone is not able to satisfy the needs of safety and water demand in the long term. As it is now, the IJsselmeer is flexible in the short term, but not enough to accommodate the impacts of longer horizons: extra measures are needed to define a climate-proof system in 2050 and 2100. A pumping station at the Afsluitdijk is an effective measure to guarantee safety for all the scenarios. Early storage in March is effective in the medium horizon (2050) but needs high target water levels during the summer in the long term (2100), which might generate safety issues. Even though applied to a simplified case, the optimization methodology manages to define a realistic picture of the flexibility of the IJsselmeer, and retrieves efficient options for possible future strategies. For these reasons, the present research can be considered a successful implementation of an optimization approach for the IJsselmeer. Conclusions and recommendations: For the short term it is recommended to use the flexibility of the system, implementing the changes in summer target water levels which would allow deeper satisfaction of water demand. For the medium/long term, options for early storage need to be investigated together with the summer target water levels required; this would probably require reinforcement of the dikes. Options for safety can then be defined for the new reinforced system, considering combinations of a pumping station and raising of the dikes. A more extensive and detailed optimization tool should be realized for the IJsselmeer and applied for the definition of the measures above. In particular it is recommended to use a multi-objective analysis and to include costs in the definition of the indicators. optimization; dynamic target water level
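The single-objective trade-off described in this abstract, dike safety versus water demand combined in one weighted function of the target level, can be sketched with a toy grid search. The penalty shapes, weights and level range below are invented for illustration and are not the thesis model.

```python
import numpy as np

# Illustrative single-objective trade-off for a seasonal target level:
# higher levels raise dike loading, lower levels increase water shortage.
# Both penalty terms and all coefficients are assumptions.
levels = np.linspace(-0.4, 0.4, 81)  # candidate target levels, m NAP (assumed)

def objective(h, w_safety=1.0, w_demand=1.0):
    safety = w_safety * np.maximum(h + 0.4, 0.0) ** 2   # dike loading penalty
    demand = w_demand * np.maximum(0.2 - h, 0.0) ** 2   # shortage penalty
    return safety + demand

best = levels[np.argmin(objective(levels))]  # efficient compromise level
```

Changing the weights shifts the compromise toward safety (lower level) or demand satisfaction (higher level), which is the mechanism the single objective function exploits.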
20110504 Watermanagement uuid:1c5639e7f7ee4b9bb15a074039906860 http://resolver.tudelft.nl/uuid:1c5639e7f7ee4b9bb15a074039906860 The Simulation-based Multi-objective Evolutionary OptimizatioN (SIMEON) Framework Halim, R.A. Verbraeck, A. (mentor); Seck, M.D. (mentor); Cunningham, S. (mentor); Van Houten, S.P. (mentor) A powerful combination of simulation and optimization has been successfully applied to solve real-world decision making problems (Fu et al., 2000; Fu, Glover, & April, 2005). Unfortunately, there are scientific and application problems with this method. Firstly, there is no transparent and formal structure to define the integration between simulation and optimization. Secondly, there are challenges in ensuring a proper balance between the various desired features of the simulation-based optimization method (i.e. generality, efficiency, high-dimensionality and transparency) (Fu, 2002). This research makes two contributions to the problems above: 1) the design of a framework that addresses the knowledge gap above; 2) the implementation of the framework, fulfilling the aforementioned features, in Java. The proposed framework is developed based on Zeigler's modeling and simulation framework and the phases of an optimization study in operations research. The test and evaluation show that the desired features are successfully satisfied. framework; simulation; multiobjective; evolutionary; optimization Systems Engineering, Policy Analysis, and Management (SEPAM) Systems Engineering uuid:2dbcdb3d06064707b7ba6e7ceafd549b http://resolver.tudelft.nl/uuid:2dbcdb3d06064707b7ba6e7ceafd549b Aircraft Fuselage Design Study: Parametric Modeling, Structural Analysis, Material Evaluation and Optimization for Aircraft Fuselage Şen, I. Alderliesten, R.C. (mentor); Benedictus, R. (mentor); Rans, C.D. (mentor); Neelis, B.M. (mentor) The search for strong, lightweight materials has become a trend in the aerospace industry.
Aircraft manufacturers are responding to this trend, and new aerospace materials are introduced to build lighter aircraft. However, material manufacturers like Tata Steel are unfamiliar with the determination of running loads and the behavior of materials in fuselage structures. Therefore an evaluation tool is needed for determining the running loads and evaluating the performance of new materials. This will give material manufacturers better insight into what properties and performance are specifically needed for materials in aircraft structures. The goal of this project is to develop an analytic design, analysis and evaluation tool for both metal and composite fuselage configurations in Visual Basic for Applications, in order to gain insight into the structural performance of these material classes and to estimate the weight and required structural dimensions for both aluminum and composite fuselages. The fuselage geometry is set up parametrically and modeled as a simplified tube with variable cross-section without cut-outs and wing box, divided into bays and skin panels. By modeling the aerodynamic, gravity and ground reaction forces and the internal pressure, a free body diagram and force/moment distribution is created for several flight and ground load cases, such as 1G flight, lateral gust or landing. The critical load cases are used for analysis. The running loads, such as bending stress, longitudinal stress, circumferential stress and shear stress, are calculated for the entire aircraft fuselage. A clear load pattern is created in order to evaluate the materials. The materials are evaluated for strength, stability and several other failure modes, such as fatigue and crack growth. The skin panels are optimized against these evaluation criteria, after which a minimum fuselage weight is obtained for conventional aircraft configurations.
The Airbus A320 is taken as the reference aircraft, and the running loads and optimization results of the model are validated against this aircraft. The model proved to be valid and is therefore considered suitable for use as an analysis and evaluation tool. The final stage of the project involved an initial assessment of aluminum and composite as structural materials. aircraft design; fuselage design; parametric modeling; structural analysis; optimization; aluminum; Ilhan Sen Mechanics, Aerospace Structures & Materials uuid:810eb93fc55d4b28a8ba3b831987c5ff http://resolver.tudelft.nl/uuid:810eb93fc55d4b28a8ba3b831987c5ff Suburban 2.0: Differentiated houses for the masses Kramer, N.D.F. Biloria, N. (mentor); Bier, H.H. (mentor); Sobota, M. (mentor) Population growth and immigration increase the demand for mass housing developments all over the world. These developments are widely criticized for being monofunctional, monotypological, and monocultural. This project is a design method for a new kind of mass housing. All design rules are reformulated as algorithms that interact with each other. Important input for the design rules is the future dweller. This leads to a bottom-up, dynamic process, providing each dweller with a well-fitted house in a differentiated environment that provides public space and services. The geometry is optimized to use material as efficiently as possible, which would become feasible in the near future with the full-scale 3D printers currently under development. urban; architecture; social; userspecific; masscustomization; complexity; selforganization; hyperbody; complex geometry; optimization; additive manufacturing
20101111 Hyperbody uuid:bccecdebc382445ebf2f62b4fcff7a78 http://resolver.tudelft.nl/uuid:bccecdebc382445ebf2f62b4fcff7a78 Non-Invasive Electromagnetic Ablation of Female Breast Tumors Brink, W.M. Kooij, B.J. (mentor); Lager, I.E. (mentor) Breast cancer is the most common malignant tumor among women today. Available techniques for treating breast cancer often introduce strong side effects. The non-invasive electromagnetic ablation of breast tumors has great potential, because it can provide a quick treatment modality without introducing harmful side effects. In this project we assess the feasibility of non-invasive electromagnetic ablation of female breast tumors. The two main challenges in this project are: 1. the computation of electromagnetic fields inside the female breast; 2. the focusing of power such that the power dissipated in the tumor is maximized while the power dissipated in healthy tissue is minimized. In our investigation we simulate a two-dimensional configuration with a circular array of line sources operating at a single frequency within the range of 1 to 10 GHz. The electromagnetic fields are computed using a discretized EFIE method, after which we evaluate three algorithms that focus the dissipated power, in order to gain insight into the potential of this treatment modality. electromagnetics; breast; cancer; treatment; therapy; hyperthermia; thermal; ablation; antenna; array; optimization; scattering
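The focusing problem stated in this abstract, maximizing tumor dissipation relative to healthy-tissue dissipation over the source weights, has a classical formulation as a generalized Rayleigh quotient. The random matrices below merely stand in for the EFIE-derived dissipation operators, and this eigenvector route is one standard focusing strategy, not necessarily one of the three algorithms evaluated in the thesis.

```python
import numpy as np

# Maximize (power in tumor) / (power in healthy tissue) = w*Aw / w*Bw over
# complex excitation weights w, where A and B are Hermitian dissipation
# operators. Random stand-ins are used here; B gets a diagonal boost so it
# is positive definite, as a physical loss operator would be.
rng = np.random.default_rng(0)
n = 8                                   # number of line sources (assumed)
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M @ M.conj().T                      # "tumor" dissipation (Hermitian PSD)
M2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = M2 @ M2.conj().T + n * np.eye(n)    # "healthy tissue" dissipation (HPD)

# Whiten by B via Cholesky, then take the principal eigenvector.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
vals, vecs = np.linalg.eigh(Linv @ A @ Linv.conj().T)
w = Linv.conj().T @ vecs[:, -1]         # optimal excitation weights

ratio = (w.conj() @ A @ w).real / (w.conj() @ B @ w).real  # equals vals[-1]
```

The achieved ratio equals the largest generalized eigenvalue, so no other choice of weights can concentrate relatively more power in the tumor region under this quadratic model.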
20100928 Telecommunications Microwave Technology and Systems for Radar uuid:93d863ff63634f9f8ab95be23b9d96e9 http://resolver.tudelft.nl/uuid:93d863ff63634f9f8ab95be23b9d96e9 Optimization of the Al-Shaheen Field Performance using Smart Well Technology Gelderblom, D.O. Jansen, J.D. (mentor); Do, S.H. (mentor); Kapteijn, P.K.A. (mentor) This MSc thesis reports the results of optimizing the Al-Shaheen field performance using smart well technology. The field is currently being developed by Maersk Oil and Gas (MOG) offshore Qatar, using large-scale water injection on very long horizontal wells. The studied reservoir consists of a laterally uniform, tight matrix. However, undesired water short-circuiting between injectors and producers due to localized heterogeneity leads to reduced sweep efficiency and increased water production, thereby reducing the economic life of approximately 10% of the wells. Smart well technology combines monitoring and control capabilities with multi-segment completions in order to optimize flooding mechanisms. In this study two different optimization strategies were simulated on a sector model containing different levels of heterogeneity. The first method comprised a reactive, measurement-based approach, where injection segments were shut in when increased water production was observed in production segments. The second method comprised a proactive, model-based approach, where the optimal shut-in timing of injection segments was obtained from gradient information. The evaluated flooding mechanisms include water injection and Water-Alternating-Gas (WAG) injection. Results show that optimization with smart well technology can significantly improve recovery and reduce water and gas circulation under varying conditions of reservoir heterogeneity. The measurement-based optimization confirms that the technology can improve reservoir engineering through its increased downhole monitoring capabilities.
Results from the measurement-based optimization approach the optimum found by model-based optimization. optimization; optimisation; Al Shaheen; AlShaheen; smart; gradient Section Petroleum Engineering uuid:d9b524b0d2e14bde883acc6313a1d8c0 http://resolver.tudelft.nl/uuid:d9b524b0d2e14bde883acc6313a1d8c0 Automated Implant-Processor Design Dave, D. Gaydadjiev, G. (mentor); Strydis, C. (mentor) As we move towards an aging population, it is likely that an increasing number of people will require an increasing diversity of implants, but at a lower cost to society. Also, as computer technology progresses, smaller, more powerful, and less battery-intensive implants can be designed. However, present implant design methodology is highly inefficient at meeting these goals, as it suffers from non-reuse of existing knowledge by relying heavily on custom designs and ASICs. The SiMS project was started with the goal of creating a pre-designed, pre-tested, and pre-certified toolbox of components for biomedical implants that can be assembled in a modular fashion for various application scenarios. One of the most important components in such a toolbox is the processor. Designing such a processor is a non-trivial task, and previous work has concentrated on studying the effect of changing the processor input parameters (such as caches) one parameter at a time. The present work represents a shift in this methodology, as we now allow covariation in all possible input parameters in order to find optimal configurations in terms of the output objectives: power, performance, and area. Towards this end, we implement ImpEDE ("Implantable-processor Evolutionary Design-space Explorer"), a framework that performs multi-objective optimization of processor parameters and hence gives as output a Pareto-optimal set of processors.
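The Pareto-optimal set mentioned just above can be made concrete with a minimal non-dominated filter over (power, performance, area) objectives, all to be minimized; this is the kind of front a multi-objective optimizer such as NSGA-II maintains. The candidate configurations are made-up examples, not ImpEDE output.

```python
# Minimal non-dominated (Pareto) filter over objective tuples, all minimized.
def dominates(a, b):
    """True if a is at least as good as b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Invented (power, runtime, area) tuples for four processor configurations.
configs = [(1.0, 9.0, 4.0), (2.0, 7.0, 3.0), (3.0, 8.0, 5.0), (1.5, 7.5, 3.5)]
front = pareto_front(configs)
```

Here (3.0, 8.0, 5.0) is dominated by (2.0, 7.0, 3.0) and drops out; the remaining three configurations are mutually incomparable trade-offs, which is exactly what a Pareto-optimal set expresses.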
The framework consists of a cache simulator and a cycle-accurate processor simulator running benchmarks and workloads designed for medical implants, in order to simulate the optimization objectives. A popular, highly configurable, multi-objective genetic algorithm, NSGA-II, performs the actual optimization. Supporting scripts add modularity by acting as the interface between the genetic algorithm and the simulators, enabling easy replacement with new simulators. The whole framework is parallelized such that spare computation cycles of idle laboratory CPUs can be utilized, giving a considerable speedup without requiring any special hardware. We perform experiments on the non-dominated solution fronts evolved by the framework on a subset of benchmarks, in order to optimize parameters of the genetic algorithm with an aim towards speeding up convergence. We also examine the effects of changing the workload size run by the benchmarks. A Pareto-optimal solution front consisting of optimal processor configurations across all benchmarks is found. This front is used as a reference in order to characterize the benchmarks in the ImpBench suite. Finally, the objective space of the reference front is compared to existing implant designs, and a set of "generic processors" is chosen such that all the existing implant applications studied can be covered. implant; pareto; genetic algorithm; designspace exploration; optimization; power; area; energy; processor; simulation uuid:0e0bc750bd3e4b8a8c09019f22326fef http://resolver.tudelft.nl/uuid:0e0bc750bd3e4b8a8c09019f22326fef Increasing the energy efficiency of glass façades Van Kilsdonk, J.M.A. Van Timmeren, A. (mentor); Veer, F.A. (mentor); Klein, T. (mentor) Design of a sun shading system which is optimized to generate as much energy as possible. This has been done by calculating the optimal size and positioning of the slats. The user also played a central role in the design process.
Not only the design of the slats is innovative, but also the way they are connected to the glass façade: this is done with Fischer plugs, which make the façade easy to (dis)mount and give it a high-tech, lightweight look. PV cell; glass; facade; sustainable energy; Fischer system; innovative; integral design; optimization Research & Design uuid:932855fc0dd34a1aa2bb685aaa0c54c1 http://resolver.tudelft.nl/uuid:932855fc0dd34a1aa2bb685aaa0c54c1 Stadskantoor - Station - Spoorzone Delft - Grid Relaxation - Rain Analysis Haasnoot, M. Claessens, F. (mentor); Borgart, A. (mentor); Stouffs, R.M.F. (mentor); Wilms Floet, W.W.L.M. (mentor); Mihl, H. (mentor) With appendix: A0 poster. relaxation; gridshell; double curved; optimization; stadskantoor TU Delft, Architecture, Architecture uuid:a544e3eaa75d4bf0a7da3a38bf72e666 http://resolver.tudelft.nl/uuid:a544e3eaa75d4bf0a7da3a38bf72e666 The speed optimization of a printing press Wadman, W.S. Hooghiemstra, G. (mentor); Lopuhaa, H.P. (mentor) PCM Uitgevers is one of the largest publishers in the Netherlands. It produces de Volkskrant, NRC Handelsblad, Algemeen Dagblad, Trouw and several smaller newspapers. The newspapers are printed at PCM's facility in Amsterdam. Every morning a large number of newspapers is printed there, after which they are collected by trucks and must be distributed across the Netherlands on time. The predictability of the operating speed of the press is therefore very important, but unfortunately not very good: the press occasionally breaks down and stands still for an uncertain amount of time. As a result, newspapers may be ready too early or too late relative to the arriving and departing trucks. Both situations are undesirable. Margins have been specified within which failure of the press system is 'acceptable'. We define and motivate a measure that indicates how much the press 'fails' during the production process.
We then investigate whether there is an optimal press speed between two breakdowns, at which the expected degree of unacceptable system failure is smallest. optimization; mathematics; stochastic; probability TU Delft, Electrical Engineering, Mathematics and Computer Science, Applied Mathematics uuid:b9319341ae544b38a25c3972f3ac9062 http://resolver.tudelft.nl/uuid:b9319341ae544b38a25c3972f3ac9062 Trajectory optimization for a mission to Neptune and Triton Melman, J.C.P. Ambrosius, B.A.C. (mentor); Noomen, R. (mentor); Ortega, G. (mentor); Biesbroek, R. (mentor) interplanetary; optimization; gravity assist; swingby; low thrust TU Delft, Aerospace Engineering, Astrodynamics and Satellite Systems uuid:1b4c982d8d3940fc9184282f4116a585 http://resolver.tudelft.nl/uuid:1b4c982d8d3940fc9184282f4116a585 Regularization of Water Flooding Optimization Malekzadeh, R. The use of smart well technology to optimize water flooding introduces a large number of control parameters, both in space (well segments) and in time. The problem of finding the control parameters that maximize net present value as an objective function can be solved with the aid of a gradient-based optimization method. Using too many parameters may lead to a large number of local maxima in the objective function, so the gradient-based optimization method may result in suboptimal solutions. In this thesis, proper orthogonal decomposition is applied to regularize gradient-based control parameter optimization by projecting the original high-dimensional control space onto a low-dimensional subspace, thus reducing the number of control parameters. Since a low-dimensional subspace contains fewer local maxima, the solution is more likely to reach a local maximum in the close vicinity of the global solution. To evaluate the efficiency of our proposed method, ordinary multi-scale parameterization as developed by Lien et al.
(2005) is also applied to the optimization of the control parameters. A multi-scale approach starts from the optimization of a very coarse representative parameter; the number of parameters is then gradually increased until convergence is reached. Numerical examples indicate that a regularization approach based on proper orthogonal decomposition may speed up the convergence rate, and may also improve convergence to the global solution within a shorter optimization time compared to optimization without a regularization technique. The method effectively reduces the control effort by grouping multiple well settings in space and time and treating them as one control parameter. smart well; simulation; optimization; regularization uuid:ae1f3abace144de39e9c8b75072c7f48 http://resolver.tudelft.nl/uuid:ae1f3abace144de39e9c8b75072c7f48 Trailing for a better alternative: Logistic optimisation of dredging projects
Nieman, A. Holierhoek, C.K. (mentor); Ridder, H.A.J. (mentor); Horstmeier, T.H.W. (mentor); D'Angremond, K.G. (mentor) Determining how to execute a reclamation project with many excavation or borrow areas and several pieces of dredging equipment at minimum cost takes too much time to do by hand. A linear programming application was made to support the allocation of excavation areas and pieces of equipment to reclamation areas. Such an application was already available at HAM, but it could only be used in limited cases. The newly developed application can be used not just for allocation optimisation of soil in reclamation projects, but also for allocation optimisation in dredging projects, as well as a mix of the two. This application is a linear model of the cost items in a project. Preconditions are added to the model for limits to available sand, limits to project duration, limits imposed by working methods, and options for working in joint ventures. The optimisation application can be used to obtain the cheapest working method while making a tender or while executing a project. The model is implemented in an executable, and a reliable solver is included to calculate optimal solutions. A simple shell made in Microsoft Excel provides an interface familiar to the user. The program was tested for stability and speed, and also on a few projects to establish its practical value. By introducing an execution step, the new optimisation program can be used for complete projects while planning preconditions can be included. Once a project has been cast in the model, it can be used for rapid calculation of different scenarios. In most cases, working methods obtained by optimisation proved to be cheaper than working methods obtained by traditional methods. Some additions can be made in the future. Options to generate input with Monte Carlo simulation can be added to the shell or to the executable.
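The allocation model described above can be sketched as a tiny transportation LP: assign sand volumes from borrow areas to reclamation areas at minimum transport cost, subject to supply limits and demand requirements. All volumes and unit costs below are invented, and this toy instance stands in for, rather than reproduces, the thesis model with its many extra cost items and preconditions.

```python
# Toy transportation LP: x[k] is the volume sent along route k, where the
# four routes are (borrow 0 -> rec 0), (0 -> 1), (1 -> 0), (1 -> 1).
from scipy.optimize import linprog

cost = [4.0, 6.0, 5.0, 3.0]        # EUR/m3 per route (invented)
supply = [500.0, 400.0]            # m3 available per borrow area (invented)
demand = [300.0, 450.0]            # m3 required per reclamation area (invented)

A_ub = [[1, 1, 0, 0],              # borrow area 0 supply limit
        [0, 0, 1, 1]]              # borrow area 1 supply limit
A_eq = [[1, 0, 1, 0],              # reclamation area 0 demand must be met
        [0, 1, 0, 1]]              # reclamation area 1 demand must be met
res = linprog(cost, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * 4)
```

The solver serves reclamation area 0 entirely from the cheap borrow area 0 and area 1 mostly from borrow area 1, topping up from area 0 once the cheaper supply is exhausted, the same mechanism a full-size allocation model exploits.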
Time can be saved and mistakes can be avoided if the large amount of data is stored in a database system. A tool can be developed which represents the solution in some graphical form for easier interpretation and comparison. Chapter 2 deals with the problem and goal definition of this project. The optimisation model cannot be used in all types of projects; chapter 3 describes in which cases and how to use it. Chapter 4 explains why optimisation is chosen for achieving the objective of this thesis. The aspects of dredging processes that influence the costs are described in chapter 5. Chapter 6 describes how the new model is built with the old model as a starting point. Chapter 7 explains why it is advisable to resort to commercial software for solving the model. The experience obtained from three projects is described in chapter 8. Chapter 9 summarises the conclusions drawn from developing and testing the optimisation tool. The actions that have to be taken to finish the development and evolve the program "Optimise" into a tool with automated analysis are written in chapter 10. reclamation; dredging; optimization; programming uuid:35fbea8eecc34209b09fdddcd1d5e3e2 http://resolver.tudelft.nl/uuid:35fbea8eecc34209b09fdddcd1d5e3e2 Minimaliseren restlading: "Volvox Terranova" Bruinsma, J. D'Angremond, K. (mentor); Tutuarima, W.H. (mentor); Van Kesteren, W. (mentor); Peerlkamp, K. (mentor); Van Oord ACZ (contributor) The trailing suction hopper dredger "Volvox Terranova" is owned by Van Oord ACZ and has a loading capacity of 20,000 m3. After pumping the load ashore, a residual load of approx. m3 remains. The goal of this study is to present a proposal for modifying the "Volvox Terranova" so as to minimize the residual load without negatively affecting the emptying production.
Om dit te bereiken is een analyse uitgevoerd van het ontwerp van het schip, de leegzuigvolgorde, de processen in het beun, de verschijningsvorm van de restlading en het huidige jetsysteem. Tevens is in een schaalmodel de effectiviteit van een nieuwe leegzuigdeur beproefd. In een beunsectie is na het leegzuigen van het beun ca.  a  m3 zand (restlading) aanwezig. De hoeveelheid restlading in een beunsectie wordt met name beinvloed door de positie van de leegzuigdeur en het functioneren van het jetsysteem. Om de hoeveelheid restlading te verminderen zal in de opschoonslag, de laatste fase van het leegzuigen, geconcentreerd en zo laag mogelijk per beunsectie afgezogen moeten worden. Op dit moment wordt in het schip in twee beunsecties tegelijk op een hoogte van  meter boven het laagste gedeelte van het beun afgezogen. Door het aanbrengen van een nieuwe leegzuigdeur wordt het afzuigpunt verlaagd en zal het afzuigdebiet tot een sectie worden beperkt en daardoor verdubbelen. Het jetsysteem zal aangepast dienen te worden aan de situatie met een nieuwe leegzuigdeur. In een proevenserie is de effectiviteit van een nieuwe leegzuigdeur getest. De hoeveelheid restlading was bij de proeven met de nieuwe leegzuigdeur ongeveer % van de restlading die gemeten werd bij de proeven met de bestaande leegzuigdeuren. Tijdens het leegzuigen wordt door het jetwater uit het jetsysteem de korrelspanningen in het zandpakket gereduceerd, zodat het zand ten gevolge van de maartekracht gemakkelijker in de richting van de leegzuigdeur afschuift. Het jetdebiet dat nodig is om de korrelspanningen te verminderen is met name afhankelijk van de korreldiameter van het zand. Voordat de jets voldoende water spuiten zal de jet met behulp van een hogere jetdruk moeten opstarten. In de laatste fase van het leegzuigproces zal de evenwichtshelling van het zand bereikt worden en zal het zand moeten worden weggespoeld door de jets. 
The erosive action of a jet depends on its outflow velocity and its range of influence. The outflow velocity is determined mainly by the pressure difference across the jet. The range of influence and the discharge of the jet increase when the nozzle diameter is enlarged. The tests showed that increasing the jet power raises the suction concentration. It was also found that enlarging the nozzle diameter increases the effectiveness of a jet and reduces the residual load. Based on the outcomes of the theoretical study and the test results, it is recommended to install a new suction door in each hopper section of the "Volvox Terranova" and to enlarge the nozzle diameters in the hopper.
subject topic: dredging; trailing hopper dredge; optimization
faculty: TU Delft, Civil Engineering and Geosciences, Hydraulic Engineering

uuid: 9a4537a9935440b5a59e266ac18bb697
repository link: http://resolver.tudelft.nl/uuid:9a4537a9935440b5a59e266ac18bb697
title: Ontwerpmodel van een schutsluis (Design model for a navigation lock)
author: Rietdijk, J.
contributor: Bakker, K.J. (mentor); Horstmeier, T.H.W. (mentor); De Vries, J.T. (mentor); Vrijling, J.K. (mentor)
abstract: The design model aims to determine the 'optimal' design of a navigation lock from functional and operational requirements. The definition of the 'optimal' design follows the design philosophy of the Bouwdienst, which designs on the basis of meeting the stated functional requirements and of the economic optimum, from a position of social responsibility. The economic optimum is understood as follows: an object is economically optimally designed if it meets the stated requirements and if the sum of the initial construction costs and the discounted expected costs is minimal. The expected costs include, among other things, the inspection and repair costs needed to keep meeting the requirements over the total planned lifetime, and the cost of demolition after the end of the use phase. The 'optimal' lock design is therefore the design that meets the stated requirements and whose total discounted life-cycle costs are minimal. The design model was created in the following steps: 1. determine the desired outcome; 2. draw up the starting points and assumptions for the model; 3. determine the required input data; 4. draw up the relations in the model; 5. optimize the lock design.
subject topic: ship lock; optimization; inland navigation
department: Section Hydraulic Engineering

uuid: d25d26d232cc40eba73c06b31a360cb2
repository link: http://resolver.tudelft.nl/uuid:d25d26d232cc40eba73c06b31a360cb2
title: Optimalisatie van baggerwerkzaamheden op de MiddenWaal (Optimization of dredging operations on the Midden-Waal)
author: Van Berkel, T.
contributor: Havinga, H. (mentor); Van der Schrieck, G.L.M. (mentor); De Vriend, H.J. (mentor)
abstract: The Waal is the most heavily navigated river in Europe: about 150 million tonnes of freight are transported over it every year, and this volume is expected to grow by 40% over the next 10 years. To keep guaranteeing navigability and safety, Rijkswaterstaat set up the Waalproject. Its main objective is to enlarge the navigation channel at OLR (Agreed Low Rhine discharge). For the Waal, OLR equals 777 m3/s, and the current channel dimensions are 2.50 m deep and 150 m wide; the new requirements are a depth of 2.80 m and a width of 170 m. To meet these requirements it was decided to dredge away all bottlenecks on the Midden-Waal (between Nijmegen and Tiel), dumping the dredged sand in deeper parts of the river. Preliminary research showed that this dredging work can be optimized by exploiting the river's natural morphological response to interventions in the cross-section. In particular, levelling bends, in which sand from the shallow inner bend is dumped into the deep outer bend, and series of successive sand traps are promising. The aim of this study is to analyse the possibilities of minimizing the dredging volumes through such cross-sectional interventions. The morphological responses to the interventions were computed with SobekSedredge, a package that simulates sediment transport and morphology in rivers in a quasi two-dimensional environment. Because this package had not been used before, much time was spent setting up the model. The bend-levelling optimization was carried out by varying the degree of levelling, the length over which the intervention takes place, and the time interval between maintenance dredging campaigns. The results of these computations were compared with the works planned by Rijkswaterstaat, in which every bottleneck is dredged away and the released sand is dumped in deep parts of the river. This comparison showed that the annual maintenance dredging with bend levelling is larger than with the current bottleneck approach, and that, because the morphological response damps out quickly, the gains in width and depth are too small to solve many bottlenecks. In the sand-trap computations, limitations of the model meant that only the consequences of a single sand trap could be examined instead of a series of successive traps. The morphological response induced by a sand trap also turned out to damp out quickly, and the maintenance dredging is again larger than the planned works. The main conclusion of this study is that the investigated use of the morphological response to cross-sectional interventions to minimize dredging volumes offers no advantages.
subject topic: dredging; optimization; river maintenance
department: Section Hydraulic Engineering
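The economic optimum used in the lock-design abstract (minimal sum of initial construction costs and discounted expected costs over the planned lifetime) can be sketched numerically. The sketch below is illustrative only: the designs, cost figures, discount rate, and lifetime are invented, not taken from the thesis.

```python
# Hypothetical sketch of the economic optimum from the lock-design abstract:
# a design is economically optimal if it meets the requirements and minimizes
# construction cost plus the net present value of expected future costs
# (inspection, repair, demolition). All numbers are invented for illustration.

def discounted_expected_cost(yearly_expected_costs, rate):
    """Sum of E[C_t] / (1 + rate)^t for t = 1..lifetime."""
    return sum(c / (1 + rate) ** t
               for t, c in enumerate(yearly_expected_costs, start=1))

def total_cost(construction_cost, yearly_expected_costs, rate=0.04):
    """Initial construction cost plus discounted expected life-cycle costs."""
    return construction_cost + discounted_expected_cost(yearly_expected_costs, rate)

# Two hypothetical lock designs that both meet the functional requirements:
# a cheap design with high expected maintenance, and a robust one.
designs = {
    "cheap":  total_cost(10e6, [0.5e6] * 50),  # low upfront, costly upkeep
    "robust": total_cost(14e6, [0.1e6] * 50),  # high upfront, cheap upkeep
}
best = min(designs, key=designs.get)  # the economically optimal design
```

Under these invented figures the robust design wins: its higher construction cost is more than offset by the discounted savings on maintenance, which is exactly the trade-off the life-cycle criterion is meant to capture.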