"uuid","repository link","title","author","contributor","publication year","abstract","subject topic","language","publication type","publisher","isbn","issn","patent","patent status","bibliographic note","access restriction","embargo date","faculty","department","research group","programme","project","coordinates"
"uuid:f3183aaa-0f96-419a-ae6e-78f9f860e41d","http://resolver.tudelft.nl/uuid:f3183aaa-0f96-419a-ae6e-78f9f860e41d","Crowdshipping as a delivery solution for outlier parcels: A case study in The Hague","Tang, Keying (TU Delft Civil Engineering & Geosciences; Transport, Infrastructure and Logistics)","Correia, Gonçalo (graduation committee); de Bok, M.A. (mentor); Cebeci, M.S. (mentor); van Duin, Ron (graduation committee); Delft University of Technology (degree granting institution)","2024","The increasing parcel demand is resulting in increasing traffic congestion and emission problems in urban freight systems. Crowdshipping is an innovative logistics solution which envisions using the excess capacity in existing passenger transport to perform delivery tasks. However, it may bring extra vehicle kilometres travelled due to detours and new trips generated. To address this challenge, most studies focus on designing and evaluating crowdshipping services for all parcel demand, while one study points out the potentially larger benefits of a crowdshipping service for outlier parcels.
Therefore, this study aims to investigate the transport impacts of crowdshipping service for outlier parcels, which are defined as the parcels with high environmental impacts. A case study is conducted in The Hague. First, the parcel carbon footprint is calculated to segregate the outlier parcels. Then, a public transport-based crowdshipping delivery scenario is proposed, with parcel lockers at train stations as the transfer points and train travellers as the potential occasional couriers. The simulation results show that outsourcing the outlier parcels to crowdshipping service is beneficial to the transport system and prioritising outlier parcels of logistics service providers with low market shares can achieve more savings in transport and higher service efficiency.","Last-mile delivery; Crowdshipping; simulation; Case study; City Logistics; Parcel Locker","en","master thesis","","","","","","","","","","","","Transport, Infrastructure and Logistics","",""
"uuid:3337014d-ec2e-4d04-b48d-9a5684d6e488","http://resolver.tudelft.nl/uuid:3337014d-ec2e-4d04-b48d-9a5684d6e488","Reducing journey times for en-route charging using V2X communication","Kruit, Sebastiaan (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Embedded Systems)","Langendoen, K.G. (mentor); de Weerdt, M.M. (graduation committee); Boehmsdorff, Pasqual (graduation committee); Delft University of Technology (degree granting institution)","2024","This thesis project explores how V2X communication between electric vehicles (EVs) and charging stations can be used to reduce en-route charging times. This is done using standardized V2X messages for EV charging, as well as proposing an extension to these messages to include data on the intentions of other vehicles. As individual vehicles can have significant effects on the total waiting time at EV charging stations, knowing the intentions of other vehicles allows drivers to better avoid congested charging stations and achieve a lower total journey time. The performance of the system is evaluated using state-of-the-art traffic and communications simulators, showing that using V2X can reduce journey times by 70%. Aside from demonstrating the performance using simulation, the system has also been implemented on a real-life test vehicle using functional V2X hardware to show that the system is viable for implementation.","V2X; simulation; Connected Vehicles; Electric vehicles (EVs); Electric Vehicles Charging","en","master thesis","","","","","","","","","","","","Electrical Engineering | Embedded Systems","",""
"uuid:b76a158a-be59-47eb-80e6-f3b1498b0b6d","http://resolver.tudelft.nl/uuid:b76a158a-be59-47eb-80e6-f3b1498b0b6d","Short-term interactions between Staphylococcus aureus and Pseudomonas aeruginosa: BEP Report","van der Poel, Jenneke (TU Delft Electrical Engineering, Mathematics and Computer Science)","Idema, T. (mentor); Dubbeldam, J.L.A. (mentor); Zwanikken, J.W. (graduation committee); Gijswijt, Dion (graduation committee); Delft University of Technology (degree granting institution)","2024","Staphylococcus aureus and Pseudomonas aeruginosa are two species of bacteria that are involved in numerous conditions, including lung infections and chronic wound infections. The aim of this project was to study the short-term interactions that occur when P. aeruginosa first encounters an established S. aureus colony, which it then seeks to break apart whilst mixing with S. aureus. Limoli et al. have studied these interactions using experiments, and have thus identified several key aspects involved in these interactions, such as the mechanisms that P. aeruginosa employs to approach the S. aureus colony. The means by which we intended to study interactions between S. aureus and P. aeruginosa is a model that was made by previous members of the Idema group and that was based on the experiments by Limoli et al. In this report, we discuss this model and the biological background relevant to it. We also document the problems that we encountered while trying to run simulations using an existing implementation of this model.","bacteria; modelling; simulation","en","bachelor thesis","","","","","","","","","","","","Applied Mathematics","",""
"uuid:34d27f02-6221-4afd-8de4-f6bce629b20f","http://resolver.tudelft.nl/uuid:34d27f02-6221-4afd-8de4-f6bce629b20f","Mechanism Design for Optimal Contest","Gu, Yajuan (TU Delft Electrical Engineering, Mathematics and Computer Science)","Fokkink, R.J. (mentor); Delft University of Technology (degree granting institution)","2024","In this thesis report, we delve into a contest game within game theory, where agents’ risk-taking capacity, rather than effort, becomes the pivotal variable. High-quality performance in the game is associated with a higher probability of achieving superior scores through the use of risky strategies. The application of this contest game to labor markets prompts considerations for both employees and employers. From the perspective of employees, understanding the balance between risk-taking and payoff can be helpful in decision-making. On the employer’s side, the efficiency of selection mechanisms becomes a critical factor. We assess selection efficiency by examining the winning rates of high-type individuals. We use two parameters, market quality and market size, in our analysis. Surprisingly, our theoretical analysis reveals a non-monotonic relationship between these factors and selection efficiency. Contrary to expectations, we find that as market quality improves or the number of agents increases, the winning rates of high types may decrease, resulting in reduced selection efficiency for employers. Simulation experiments inspired by Fictitious Play and evolutionary game theory are conducted to delve deeper into these dynamics. Learning rules and replicator dynamics under four scenarios are designed to address the inherent volatility in agents’ strategic choices, test optimal strategies, and enable a comprehensive comparison of selection efficiency. A mechanism is proposed, derived from agents gaining experience from their usual behavior, which attempts to align outcomes more closely with Nash equilibrium, improving the optimal result.
The study’s unexpected findings about single-round screening under certain conditions highlight the need for tailored selection processes in different markets. In summary, this research brings a fresh perspective to contest games. It encourages a rethink of traditional ideas and provides practical insights for decision-makers, especially in the labor market.","Optimal Contest; Selection Efficiency; simulation; risk-taking","en","master thesis","","","","","","","","2024-02-29","","","","Applied Mathematics","",""
"uuid:65aa5e5c-1c3e-4b1c-9a7d-72734a17015d","http://resolver.tudelft.nl/uuid:65aa5e5c-1c3e-4b1c-9a7d-72734a17015d","The Hierarchical Subspace Iteration Method for Computing Vibration Modes of Elastic Objects","van Dijk, Julian (TU Delft Electrical Engineering, Mathematics and Computer Science)","Hildebrandt, K.A. (mentor); Eisemann, E. (graduation committee); Isufi, E. (graduation committee); Delft University of Technology (degree granting institution)","2023","The Hierarchical Subspace Iteration Method is a novel method used to compute eigenpairs of the Laplace-Beltrami problem. It reduces the number of iterations required for convergence by restricting the problem to a smaller space and prolonging the solution as a starting point. This method has shown great performance improvements for Laplace-Beltrami eigenproblems.
We propose an adaptation of the Hierarchical Subspace Iteration Method that allows for computing vibration modes of elastic objects. We evaluate potential optimizations that can be made, as well as the performance characteristics of the method. Our method was shown to be faster than SIM in most cases, while even beating MATLAB's Lanczos solver in some cases.","Modal analysis; Linear elasticity; subspace iteration; vibration modes; hierarchical subspace iteration method; cuda; simulation","en","master thesis","","","","","","","","2023-11-24","","","","Computer Science","",""
"uuid:ab98e53a-5931-4b78-98ec-f0cb6df20986","http://resolver.tudelft.nl/uuid:ab98e53a-5931-4b78-98ec-f0cb6df20986","GSL-Bench: High Fidelity Gas Source Localization Benchmarking","Erwich, Hajo (TU Delft Aerospace Engineering)","de Croon, G.C.H.E. (mentor); Duisterhof, B.P. (graduation committee); Delft University of Technology (degree granting institution)","2023","Gas Source Localization (GSL) is a challenging field of research within the robotics community. Existing methods vary widely and each has its own strengths and weaknesses. Existing GSL evaluations vary in environment size, wind conditions, and gas simulation fidelity, thereby complicating objective comparison between algorithms. They also lack photo-realistic rendering for the integration of obstacle avoidance. In this paper, we propose GSL-Bench, a benchmarking suite to evaluate the performance of GSL algorithms. GSL-Bench features high-fidelity graphics and gas simulation. Realism is further increased by simulating relevant gas and wind sensors. Scene generation is simplified with the introduction of AutoGDM+, capable of procedural environment generation, CFD and particle-based gas dispersion simulation. To illustrate GSL-Bench's capabilities, three algorithms are compared in six warehouse settings of increasing complexity: E. coli, dung beetle and a random walker. Our results demonstrate GSL-Bench's ability to provide valuable insights into algorithm performance.","gas sensing; benchmarking; simulation; source localization; gas dispersion; odour source localization","en","master thesis","","","","","","https://sites.google.com/view/gslbench/ Website providing additional results and instructions","","","","","","Aerospace Engineering","",""
"uuid:211fbc67-cd20-4615-9706-30ef088534a3","http://resolver.tudelft.nl/uuid:211fbc67-cd20-4615-9706-30ef088534a3","The Impact of Load Carrier Types and Staging-Level Designs on Cross-Docking Performance under Uncertainty: A Discrete Event Simulation Study","Hofstee, Toon (TU Delft Civil Engineering & Geosciences)","Duinkerken, M.B. (mentor); Fazi, S. (mentor); Negenborn, R.R. (mentor); Delft University of Technology (degree granting institution)","2023","This research paper focuses on improving the performance of cross-docking operations under uncertainty in the context of e-commerce logistics. The growth of e-commerce sales has increased product returns and added complexity to supply chains. To address this issue, this study investigates how cross-docking operations can be improved under external and internal uncertainty factors. The research begins with a literature review to understand cross-docking facilities (CDFs) and measures to mitigate the effects of uncertainty. The current state of a CDF in a case study for a Fourth Party Logistics (4PL) provider is examined, and by reflecting on the literature overview, two potential means for decreasing the effects of uncertainty are identified: staging-level design and load carrier-type design.
A Discrete Event Simulation (DES) model is developed to test the effects of staging-level design and load carrier types on the performance of the CDF. The simulation model captures input factors such as truck arrivals, freight levels, and the purity level of cross-docking. The simulation model’s performance is tested for different scenarios, and the effects of different design alternatives are analyzed.
The results demonstrate that two-stage cross-docking with pallets can significantly reduce the total makespan and improve operational efficiency compared to single-stage cross-docking with pallets. The results also show that using roll containers significantly decreases the chance of intra-terminal congestion but also results in longer unloading and reloading times. The research contributes to the understanding of cross-docking operations under uncertainty, stresses the importance of staging-level and load carrier type design on CDF performance, and provides insights for logistics companies seeking to optimize their e-commerce supply chains.","cross-docking; logistics; 4PL; simulation; modeling; staging-level design; load carrier type; cross-docking purity","en","master thesis","","","","","","","","2023-08-29","","","","Transport, Infrastructure and Logistics","",""
"uuid:a2b132e9-8d38-4553-8587-0c9e3341b202","http://resolver.tudelft.nl/uuid:a2b132e9-8d38-4553-8587-0c9e3341b202","BRiM: A Modular Bicycle-Rider Modeling Framework","Stienstra, Timo (TU Delft Mechanical, Maritime and Materials Engineering)","Brockie, S.G. (mentor); Moore, J.K. (mentor); Happee, R. (graduation committee); Delft University of Technology (degree granting institution)","2023","Bicycles have been studied extensively over the past 200 years, with mathematical models providing valuable insights into various aspects of bicycle dynamics and rider control. However, the lack of a common framework for creating and sharing bicycle-rider models hinders the development of advanced models, research reproducibility, and dissemination. This thesis addresses this gap by introducing BRiM: an open-source modular and extensible framework for creating Bicycle-Rider Models.
The modular setup of BRiM relies on a systematic approach to define a model and form the analytical equations of motion. For the analytical computations involved, BRiM utilizes SymPy, a Computer Algebra System. The systematic approach consists of four stages. The first stage defines the objects in the system, such as symbols and bodies. Secondly, the kinematic relationships between the objects, such as angular velocities between reference frames, are established. The third and fourth stages, which are order-independent, specify the loads and constraints acting upon the system. This systematic approach enables the decoupling that BRiM requires to achieve modularity, because computations within a stage are mostly order-independent.
The core of BRiM employs the systematic approach within a unified framework for modeling mechanical systems in general. It describes a model using a tree representation, in which a model is defined as an aggregation of smaller submodels. The relationships between submodels are established by parent models, using interchangeable connections to accommodate complex relations, such as tyre models between the ground and a wheel. This application of submodels enables swapping and adding submodels, making the overarching model both modular and extensible. Actuation within BRiM can either be specified by attaching prespecified groups of loads to models and connections, or by utilizing the interface provided by the mechanics module in SymPy, which offers the flexibility to even manipulate equations in detail.
BRiM applies this generalized framework to create modular bicycle-rider models. Both a stationary bicycle and a modular bicycle based on Moore's convention of the Carvallo-Whipple bicycle have been constructed. These bicycle models are extensible to bicycle-rider models by including an upper and/or lower body. Within the rider models each joint can be actuated by a linear torsional spring-damper. BRiM integrates parametrization of models, which provides mappings between symbolic quantities used in equations and experimentally determined values, using the existing open-source BicycleParameters library. Additionally, SymMePlot, a visualization package for symbolically defined mechanical systems, has been developed and integrated within BRiM to visualize the created bicycle-rider models.
The effectiveness of BRiM is demonstrated through optimization and simulation tasks. Firstly, a real-time forward simulation of a torque-driven upper body bicycle-rider is performed. Secondly, an optimization problem is solved, involving the tracking of a rolling disc along a sinusoidal trajectory while minimizing the control torques. These demonstrations highlight the seamless integration of BRiM with other scientific tools and BRiM's potential for practical applications.
In conclusion, BRiM fills the gap in bicycle dynamics research by providing a modular and extensible framework for creating and sharing bicycle-rider models. Its systematic approach, unified framework, and integration capabilities enable efficient model development, research reproducibility, and further advancement in bicycle research.
still require a demarcation of the possible routes that a criminal fugitive will take to be used effectively. Therefore, this study explored the possibility of making likelihood estimations of possible escape routes.
Because of a lack of reliable data, alternative methods to determine the likelihood of escape routes are needed. One such method is simulation. Simulating human behaviour is, however, complex, and careful consideration of the assumptions in such a model is needed to have a high level of confidence in the resulting outcome. To do this, it is important that the theoretical background on which behavioural factors influence criminal fugitive route-choice behaviour is complete and that it is known how these factors affect the resulting routes. This is the knowledge gap addressed in this study.
To address this knowledge gap, the question of what effect behavioural factors from criminal route-choice behaviour have on escape routes will be answered. This is done by determining which main factors influence criminal fugitive route-choice behaviour and how these factors influence the resulting escape routes. The method used to answer these questions is a combination of the development of a theoretical background, based on a literature review of existing research and expert opinion, and a quantitative sensitivity analysis on a simulation model.
Because of a lack of research on criminal fugitive route-choice behaviour, it was necessary to use literature from the following research fields to find relevant topics: criminal decision-making, rationality in decision-making and route-choice decision-making. From the literature in these fields, it was found that many different personal and crime characteristics exist, but it is unknown how these affect route-choice behaviour. In addition, it was found that rational decision-making cannot be assumed for the criminal situation and that bounded rationality needs to be considered. Lastly, from the route-choice decision-making literature, it was found that many different route-choice factors are relevant. The following list of route-choice behavioural factors was found: obstacle avoidance, risky behaviour, traffic avoidance, route distance and maximum speed, and preference for main or residential roads. For the route-choice decision-making modelling methods, the following relevant topics were found: cost-benefit calculations, short- or long-term goals, emotional state, choice prioritisation and timing. These two lists of factors should be considered when conceptualising criminal fugitive route-choice behaviour.
In the conceptualisation phase of this study, it was found that while many different suspect and crime characteristics might affect suspect behaviour, no specific behavioural profiles could be used to conceptualise route-choice behaviour. Therefore, it was chosen to conceptualise the behaviour by creating
dynamic strategy profiles based on behavioural route-choice factors. From the list of behavioural route-choice factors to include in these strategy profiles, it was found that they can be described as either a preference for or avoidance of road characteristics. The road characteristics seen to be avoided are cameras, obstacles, one-way roads and high traffic. The preferred road characteristics are a high number of lanes, residential roads, a high maximum speed and short roads. Next, it was found that there is a distinction in decisions based on long- or short-term goals, which require either low or full network familiarity. For general route-choice behaviour, the conceptualisation of a route choice as a whole route between an origin and destination location was found to be most appropriate. When considering the rationality of the decisions made for the route choices, it was found that there is too much uncertainty and ambiguity in the considered bounded rationality conceptualisations to use them for a concrete route-choice conceptualisation. Therefore, alterations to the assumptions of rationality are used to conceptualise this. Finally, the emotional state of a fugitive is included in the conceptualisation through the possibility of changing route-choice strategies. This conceptualisation is further used to describe the general criminal fugitive route-choice behaviour in this study.
To measure the influence of the behavioural route-choice factors in the conceptualisation, a route cost model was developed. In this model, the cost of a route is calculated using the characteristics of the edges in a road network. Based on this model, an experimental design is defined including a case study and sensitivity analysis to find the quantitative influence of route-choice behaviour on route metrics describing differences in escape routes through route length and overlap.
When evaluating the results of the case studies and sensitivity analysis, it was found that the influence of behavioural route-choice factors on routes depends on the origin and destination locations and the distribution of edge characteristics over a road network. Next to this, it was found that there were no
behavioural profiles leading to routes with specific characteristics, and that in practical application, a broad set of strategies should be included when finding important locations in a road network to use for positioning police units. To do this, a method of using heat maps to find these locations was proposed.
This method combined with the route cost model described in this study was found to have high applicability but more research needs to be done on the usability of this method.
From the findings of this study, it can be concluded that criminal fugitive route-choice behaviour is complex and that different possible conceptualisations exist to be used for different purposes of studying general route-choice behaviour or specific behavioural factors. This affects the ability to measure the influence of behavioural factors on the resulting routes. Limitations were found on the
measurement techniques used in the quantitative method to measure differences in routes, which reduced the ability to interpret the resulting influence of behavioural factors on the routes. As a result, this study can show that the route-choice factors defined in the conceptualisation affect the routes, but more qualitative research is needed to determine how these factors influence the resulting routes.
To conclude, the findings of this study add to current research by showcasing the complexity of modelling route-choice decision-making and human behaviour in general, and the many considerations that need to be taken into account when doing so. Next, it shows the difficulty of using quantitative and qualitative methods on route data to determine relations between factors influencing route-choice behaviour and resulting routes. Lastly, it adds to the current literature by developing an overview of the factors influencing criminal fugitive route-choice behaviour that need to be considered in the simulation of fugitive escape routes.","criminal decision-making; simulation; human behaviour; route-choices","en","master thesis","","","","","","","","","","","","Engineering and Policy Analysis","",""
"uuid:ec15c377-36d0-412b-8ace-736db2492a5a","http://resolver.tudelft.nl/uuid:ec15c377-36d0-412b-8ace-736db2492a5a","Integrating urban context in daylighting simulation: The design consequences in Dutch urban areas, regarding visual & non-visual levels of daylight","Koster, Daniël (TU Delft Architecture and the Built Environment)","Brembilla, E. (mentor); Rafiee, A. (graduation committee); Straub, A. (graduation committee); Delft University of Technology (degree granting institution)","2023","The Netherlands is facing a housing demand of 1 million homes before 2030. Most of these residences are planned to be built in and around existing cities, causing an increase in urban densities with sub-optimal indoor daylighting conditions as a result. Simultaneously, the daylight assessment methodology for buildings in the Netherlands is set to change from the Dutch NEN 2057 to the European EN 17037. The European norm uses more accurate metrics to express daylighting performance but does not consider urban context (i.e. external buildings) in the simulation models. As a result, a concern is that indoor daylighting in dense urban areas is inadequately protected. Moreover, it is unknown to what extent the urban context affects the well-being of humans, regarding visual and non-visual levels of daylight.
A multitude of daylight simulations is run and analysed in the thesis to better understand the impact of the urban context on indoor daylighting performance. Visual daylighting is assessed following the EN 17037 methodology with urban context integrated. Non-visual daylight performance is assessed using two novel metrics: melanopic autonomy and melanopic isotropy. The results have revealed that the discrepancy between simulations with and without the integration of urban context is up to 90% for realistic residences throughout the Netherlands, depending on urban characteristics and density. On average, indoor daylighting is decreased by 36% when the urban context is integrated with the EN 17037. The non-visual stimulus was found to be sufficient in residences that are compliant with EUmin levels but insufficient for residences that only comply with the Dutch building code. Sky view factor (SVF) and Building Floor were found to be useful indicators of daylighting performance in early design stages. Urban density indicators such as the FSI and OSR seem to be negatively correlated with daylighting performance.
The thesis concludes with the advice to include urban context in daylighting simulations so that bad daylighting can be properly mitigated. Effective mitigation strategies are increasing glass transmission values, interior reflectance values, and exterior building reflectance values. Another effective strategy is to avoid bad daylighting conditions in the first place by not positioning residences on the first 5 building floors in high-density urban areas. The results from this thesis can be used by daylighting designers and architects who are interested in ensuring adequate and healthy daylighting conditions in the residences they design: not only in digital environments but in the real world.","daylight; simulation; non-visual daylight; urban context; EN 17037; urban density; Sky view factor","en","master thesis","","","","","","","","","","","","Architecture, Urbanism and Building Sciences | Building Technology","",""
"uuid:ef354713-924e-4907-a44f-95b67efa638e","http://resolver.tudelft.nl/uuid:ef354713-924e-4907-a44f-95b67efa638e","Improving DRL Of Vision-Based Navigation By Stereo Image Prediction","den Ridder, Luc (TU Delft Aerospace Engineering)","de Croon, G.C.H.E. (mentor); Wu, Y. (mentor); Delft University of Technology (degree granting institution)","2023","Although deep reinforcement learning (DRL) is a highly promising approach to learning robotic vision-based control, it is plagued by long training times. This report introduces a DRL setup that relies on self-supervised learning for extracting depth information valuable for navigation. Specifically, a literature study is conducted to investigate the effects of learning how to synthesize one view from the other in a stereo-vision setup without relying on any preliminary knowledge of the camera extrinsics, and how it can be integrated for its downstream use in an obstacle avoidance task. The literature study concludes that competitive geometry-free monocular-to-stereo image view synthesis is feasible due to recent developments in computer vision. The scientific paper further develops concepts proposed in the literature study and benchmarks the proposed architectures on depth estimation benchmarks for KITTI. Competitive results are achieved for view synthesis and, despite sub-optimal performance compared to state-of-the-art monocular depth estimation, an ability to encode depth and detect shapes is present and therefore satisfactory for the application to DRL. Additionally, the research examines the benefits of using the latent space of a view synthesis architecture, compared to other feature extractor methods, as an input to the PPO agent implemented as auxiliary tasks. This method achieves quicker convergence and better performance for an obstacle avoidance task in a simulated indoor environment than the autoencoding feature extractor and end-to-end DRL methods.
It is only outperformed by the monocular depth estimation feature extractor method. Overall, this research provides valuable insights for developing more efficient and effective DRL methods for monocular camera-based drones. Finally, the complementary code for this research can be found at https://github.com/ldenridder/drl-obstacle-avoidance-view-synthesis.","Autonomous Navigation; UAV; Deep Reinforcement Learning; Self-supervised learning; Auxiliary tasks; Monocular Vision; Depth Estimation; Feature Extraction; simulation","en","master thesis","","","","","","","","","","","","Aerospace Engineering","",""
"uuid:38603eac-e9c1-4814-81e2-73aa2df9c25a","http://resolver.tudelft.nl/uuid:38603eac-e9c1-4814-81e2-73aa2df9c25a","Weathering and debris simulation on high-resolution voxel scenes","Kuijpers, Mika (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Computer Graphics and Visualisation)","Eisemann, E. (mentor); Dijkstra, Y.M. (graduation committee); Billeter, M.J. (graduation committee); Delft University of Technology (degree granting institution)","2023","Voxels are cells on a 3D regular grid. Voxel-based scenes have many applications, including frequent use in simulations and games. Over the years, the field of high-resolution voxel scenes has progressed significantly, allowing for compression and real-time editing of high-resolution scenes. This possibility of editing high-resolution scenes has led to various editing tools. This thesis aims to expand this set of tools with a focus on more complex simulation-based solutions. We explored the field of terrain weathering and decided on a spheroidal weathering tool to perform weathering on voxel-based scenes. One of the existing limitations of this particular method was that it did not account for debris simulation. We ultimately decided to add a granular simulation using a layer-based approach that integrates with the weathering tool but is also able to function in a standalone manner.
This thesis presents three highly customizable editing tools for voxel-based scenes: spheroidal weathering, granular simulation, and a combined tool that enables weathering with debris simulation. These tools are integrated into the HashDAG framework, a high-resolution voxel-grid method building upon a directed acyclic graph representation of a sparse voxel octree. Our solutions enable close to real-time editing for changes with a smaller regional support. There is room for improvement in terms of performance at scale, with various potential ideas presented in the future work section.","voxel; weathering; spheroidal weathering; granular simulation; debris simulation; simulation; CUDA","en","master thesis","","","","","","","","","","","","","",""
"uuid:3ca0fdd3-6743-472c-9cdc-6c08a92d4ed2","http://resolver.tudelft.nl/uuid:3ca0fdd3-6743-472c-9cdc-6c08a92d4ed2","Is human-in-the-loop reinforcement learning enhanced if the robot emotes its learning progress?: An experimental study","Lijcklama à Nijeholt, Floortje (TU Delft Mechanical, Maritime and Materials Engineering)","Broekens, Joost (mentor); de Winter, J.C.F. (mentor); Dodou, D. (graduation committee); Delft University of Technology (degree granting institution)","2023","As technology continues to evolve at a rapid pace, robots are becoming an increasingly common sight in our daily lives.
Robots that work with humans need to adapt to a variety of users and tasks, and learn to optimise their behaviour. For non-specialist users to interact with such robots, the robot's learning process needs to be transparent through its behaviour. Reinforcement Learning (RL) is a promising learning method to achieve this adaptability. However, the behaviour generated by RL is not inherently transparent because of the exploration/exploitation trade-off that is needed to optimise a policy for a specific task.
One RL algorithm is Temporal Difference (TD) learning. In TD learning, the algorithm updates a Q-table to keep track of Q-values. Q-values represent the expected future rewards that the agent (the actor that decides which action to take) can receive by taking a specific action in a certain state. Calculating the Q-values involves a value called the Temporal Difference, which is the difference between the received reward plus the Q-value for the future state and chosen action, and the current Q-value.
Emotions are a natural way of communicating intent and situational appraisal for humans. In this study, emotional expressions based on Temporal Differences were implemented as a means to increase the transparency of a robot's learning progress. The effects on the robot's learning progress, learning result, and user experience were analysed.
A between-subjects experiment with 61 participants was performed on the following three robot modes: no emotions, simulated emotions, and simulated emotions with matching attribution (see Table \ref{table:robotModes}). The simulated emotions are hope, fear, joy, and distress, which were expressed by a humanoid robot. The robot mode with simulated emotions and matching attributions would explain for which task it was feeling hope or fear. The task was a simple one in which a human teacher had to help a humanoid robot learn to express three different colours based on human commands.
The results demonstrate minimal differences between these three conditions. This means that for simple tasks, emotional expressions grounded in RL do not have a significant effect, and thus neither help nor hurt. The findings are discussed, and it is proposed that emotion simulation is beneficial for tasks that are more complex, afford some robot autonomy, and for which the emotion is informative about how the user should influence the robot's actions to the benefit of the robot's policy.","reinforcement learning; emotion; simulation; Temporal difference","en","master thesis","","","","","","","","","","","","Mechanical Engineering","",""
"uuid:e2de25a7-971e-4575-b2d9-e38392ff25ee","http://resolver.tudelft.nl/uuid:e2de25a7-971e-4575-b2d9-e38392ff25ee","A Stochastic Discrete Event Simulation of Airline Network and Maintenance Operations","Varenna, Sara (TU Delft Aerospace Engineering)","Santos, Bruno F. (mentor); Delft University of Technology (degree granting institution)","2023","The complexities associated with airline operations require operations planning to be divided into multiple problems solved sequentially by the respective departments: (1) network planning, and (2) maintenance planning.
Furthermore, airline operations take place in an intrinsically uncertain environment, which requires the development of robust plans and the use of effective recovery policies. Despite the close interaction of network and maintenance plans in this dynamic environment, it is current airline practice to evaluate plans from the two domains separately, thus not representing airline operations from an integrated perspective. To this end, a modular, stochastic, discrete event simulation model of airline operations named ANEMOS (Airline Network and Maintenance Operations Simulation) is presented in this paper. The model integrates network and maintenance operations dynamics, allowing the evaluation of plans, policies, and scenarios from both domains. The model is validated using data provided by a major European airline, and it is shown that the simulated results closely resemble the airline's historical operational performance. Finally, the model's capabilities are demonstrated with a case study investigating the effects of adding a second reserve aircraft to a fleet of fifty wide-body aircraft. Results show that the second reserve is capable of reducing cancellations by 55%, but the lost revenue associated with keeping an aircraft non-operational makes it a very costly solution, with the avoided costs of disruptions quantified at 6.2% of the lost profit.","airline; simulation; discrete event simulation; operations optimization","en","master thesis","","","","","","","","","","","","Aerospace Engineering","",""
"uuid:b063d33d-2032-4aaf-a93f-b87bd01f8108","http://resolver.tudelft.nl/uuid:b063d33d-2032-4aaf-a93f-b87bd01f8108","Evaluating the influence of different sit-to- stand strategies on the biomechanics of the upper extremity dependent on age and sex","Dielissen, Tim (TU Delft Mechanical, Maritime and Materials Engineering)","van der Kruk, E. (mentor); Veeger, H.E.J. (graduation committee); Delft University of Technology (degree granting institution)","2023","Introduction: During the sit-to-stand (STS) motion, thigh push-of (TP) is frequently used, yet the biomechanical advantage for the upper extremity, is relatively unknown. In this thesis, the STS motion is analyzed for three different techniques; TP, armrest push-off (AP), and no arm aid (NA). The aim of this study is to determine the biomechanical advantage of the TP strategy through examining the joint moments (JM), and muscle forces (MF). Furthermore, the study aims to find whether age or gender affects the JM and MF generated in the TP, AP, and NA strategies. Method: Time to stand (TTS), JM and MF exerted on the upper extremity were examined for TP, AP and NA strategies for 34 participants across 3 groups: EM, elderly female (EF), and young males (YM). The metrics were obtained through inverse kinematic (IK), inverse dynamic (ID), and static optimization (SO) simulations in a 3D musculoskeletal model. Results: The time-to-stand (TTS) in elderly participants is significantly longer in the TP strategy than in the AP and NA strategies. For elderly people, the TP strategy results in upper extremity JM lower than during AP and equal as in NA. Similarly, the TP strategy results in significantly lower MF than the AP strategy, and equal MF as in the NA strategy. Conclusion: The TP strategy takes longer than AP and reduces the JM and MF for elderly participants. Moreover, the TP strategy does not yield higher JM and MF than the NA strategy for any participant group. 
Thus, the biomechanical advantage of the TP strategy for elderly people is lowered JM and MF in the upper extremity.","Musculoskeletal Modeling; simulation; Sit-to-stand; upper extremity; musculoskeletal model","en","master thesis","","","","","","","","","","","","Biomedical Engineering","",""
"uuid:dbb05309-7487-40b5-95ea-8ee1a5762514","http://resolver.tudelft.nl/uuid:dbb05309-7487-40b5-95ea-8ee1a5762514","Hypervelocity Impact Simulation using Smoothed-Particle Hydrodynamics","Harazim, Mateusz (TU Delft Aerospace Engineering)","Bisagni, C. (mentor); Cardone, T. (graduation committee); Delft University of Technology (degree granting institution)","2023","The issue of space debris in Low Earth Orbit is a growing concern that requires comprehensive solutions. This includes preventing further pollution, removing existing debris, and designing resilient spacecraft that can withstand impacts. This thesis focuses on the improvement of spacecraft shielding structures to provide effective protection against hypervelocity impacts. The Smoothed-Particle Hydrodynamics method was used for the simulation of hypervelocity impacts using LS-DYNA. The method is preferred for the simulation of impacts at high velocity and was used to reproduce two studies from the past. The simulation was then performed on a double plate system, with results compared to data from experiments provided by Airbus Defence and Space GmbH, under co-supervison from ESA. The simulation accurately predicted the hole diameter in the 1st plate with less than 2% error, while the damage zone in the second plate showed considerable variance. The simulations consistently overpredict the damage zone, leading to conservative results.","SPH; Smoothed-Particle Hydrodynamics; Hypervelocity impact; Space Debris; High-speed Impact; Whipple Shield; ESA; simulation; LS-DYNA; Aluminium","en","master thesis","","","","","","","","2023-02-22","","","","Aerospace Engineering","",""
"uuid:c6add3df-5dce-4322-9e5e-ffa7a876591d","http://resolver.tudelft.nl/uuid:c6add3df-5dce-4322-9e5e-ffa7a876591d","Controlling grip force by maintaining a constant frictional safety margin to improve robotic grasping","Langens, Coco (TU Delft Mechanical, Maritime and Materials Engineering; TU Delft Cognitive Robotics)","Wiertlewski, M. (mentor); Willemet, L. (mentor); Plooij, M. (mentor); Kober, J. (graduation committee); van der Kruk, E. (graduation committee); Delft University of Technology (degree granting institution)","2022","Manipulating soft and fragile objects is a challenging task in robotic grasping. The key challenge for robotic grasping is to exert enough grip force to prevent slipping while being gentle enough to prevent damage to an object. Existing grippers used for processes like automatic harvesting of fruits, either apply excessive grip force leading to object damage or react to slip resulting in object release from the gripper. The aim of this study is to develop a grip force controller that uses tactile feedback to maintain a constant frictional safety margin over the minimum required grip force, called Safety Margin Control. Tactile sensors can provide information on friction, which is used to predict slip. An optical tactile sensor is modeled and used in simulations where Safety Margin Control regulates the grip force during interaction with various virtual objects. The deformation of the sensor’s soft viscoelastic membrane is described by local frictional behavior and used to estimate the safety margin. The desired safety margin is set to 30%, based on comparison to the way humans control grip force in their fingertips. The desired value can be tuned to favor release over damage and vice versa. Safety Margin Control is compared to two baseline controllers: React To Slip and Conservative Control. The performance is evaluated based on maximum pressure and total lateral displacement of the object relative to the sensor. 
Safety Margin Control results in a pressure decrease of 44% on average compared to Conservative Control, and no significant pressure change was observed compared to React To Slip. The total lateral displacement for Safety Margin Control is 0 mm, as opposed to 1.3 mm for React To Slip. Safety Margin Control provides a way forward for automated harvesting as the pressure exerted on an object can be reduced while no slip occurs.","automated harvesting; control; friction; robotic grasping; safety margin; simulation; tactile sensing","en","master thesis","","","","","","","","","","","","Mechanical Engineering | Vehicle Engineering | Cognitive Robotics","",""
"uuid:f37486e9-57ae-4426-895b-4da5cb46a84d","http://resolver.tudelft.nl/uuid:f37486e9-57ae-4426-895b-4da5cb46a84d","Improving the PX4 software-in-the-loop multirotor UAV simulator accuracy","te Braake, Michiel (TU Delft Mechanical, Maritime and Materials Engineering)","Benders, D. (mentor); Ferranti, L. (mentor); Wisse, M. (graduation committee); Delft University of Technology (degree granting institution)","2022","Recent developments have enabled the mass production of cheap, high-performance multirotors. As a result of this, the multirotor has found a large variety of different uses. Testing new algorithms using real-life flights is costly in terms of time and potentially in terms of materials in the case of a crash. Simulators have quickly gained prominence as a tool in algorithm development. The accuracy of these simulators is vital to ensure the step from simulated flights to real-life testing is as small as possible. Current simulator models are held back due to a lack of properly identified multirotor coefficients. Often, the coefficients from a different multirotor are used, even if the new multirotor is vastly different in shape and size. Additionally, most researchers do not have a sufficient overview of which parts of the simulator are most important in providing an accurate simulator environment.
This research will give an overview of the different parts that influence the simulator results. The simulation of the motor controllers and rotor aerodynamics is identified as a major contributor to inaccuracies. The coefficients used in these two components will be found by performing a system identification process of the NXP HoverGames quadcopter. This identification process entails determining the mass, dimensions, inertia, motor response and thrust, torque, rotor drag and rolling moment coefficients. Experiments will be performed to find the correct coefficient values and to evaluate the impact that each effect has on the simulator's performance. A focus is put on ease of repeatability, such that other researchers can use the same process for their specific multirotor.
The theoretical background behind each effect in the motor controller and rotor aerodynamics models is discussed. This theoretical knowledge is used to perform experiments to find the coefficients for each effect and validate them. As a result, new coefficients for the motor controller simulator, thrust, torque and rotor drag were found. Additionally, this process showed that the rolling moment is much smaller than the errors currently existing in the model and this effect has, therefore, been ignored. The motor controller simulation is identified as the largest cause of inaccuracies in the current model. This simulation maps the motor commands to the output rotational velocity of the rotor and simulates the time response of this system. The formulas currently used for this do not provide a sufficient model of real-life behaviour. An analysis is done to gain a better understanding of what variables influence motor behaviour.
Finally, the new coefficients are put to the test in a variety of different trajectories to compare the trajectory tracking performance with the old model. This comparison consists of two parts. In the first part, a qualitative analysis is used to understand the difference in simulator behaviour. In the second part, a set of metrics are defined to quantify the difference in accuracy. This comparison process is used to prove that the new model provides significant improvements compared to the old coefficients, especially in the case of faster, more aggressive manoeuvres.","quadcopter; SITL; simulation; PX4","en","master thesis","","","","","","","","","","","","","",""
"uuid:a5764b08-9dca-4398-9309-fb99fe3a271d","http://resolver.tudelft.nl/uuid:a5764b08-9dca-4398-9309-fb99fe3a271d","Efficiency of an analytical propagator with collision detection for Keplerian systems","Aliberti, Dylan (TU Delft Applied Sciences; TU Delft Electrical Engineering, Mathematics and Computer Science)","Visser, P.M. (mentor); Thijssen, J.M. (graduation committee); Bouwman, W.G. (graduation committee); van Gijzen, M.B. (graduation committee); Delft University of Technology (degree granting institution)","2022","The aim of this thesis was to test the efficiency in practice of an analytical propagator with collision detection for N-body Keplerian systems. This can be used to simulate the evolution of a protoplanetary disk, which gives insight into how planetary systems form. The analytic propagator calculates collisions one by one, while a numerical propagator would compute each time step. The idea of using the analytic propagator is that collisions are rare in astronomical scales, such that jumping from collision to collision and calculating it, is more efficient than calculating all the time steps that are between collisions. Simplifying the orbits of the planetesimals into perfect Keplerian orbits, analytical solutions exist which are used by the analytic propagator.
In this thesis, the runtimes of simulations were measured as well as other properties directly related to the runtime. The overall efficiency of the algorithm with respect to N seemed to be O(N^3), which is one power less than previously predicted. The prediction was that the runtime of the full simulation is O(N^2 ε + N^4 s^3/(I a^3)). Here ε is the maximum eccentricity, s/a is the ratio of a planetesimal's radius to the semi-major axis of its orbit, and I is the maximum inclination. This was calculated by estimating the total number of collisions to be O(N^2 s^2/(I a^2)) and the runtime for each collision to be O(N^2 s/a). But the number of collisions turns from quadratic to linear in N, implying that above a certain N almost all planetesimals collide, which reduces the power of N by one. For comparison, the octree code has an algorithmic efficiency of O(N log N) per time step, and the number of steps for a fixed integration time grows as O(N^(4/3) log N).","numerical efficiency; simulation; gravitation; analytical propagator; collision detection; celestial mechanics; protoplanetary disks","en","bachelor thesis","","","","","","","","","","","","Applied Mathematics | Applied Physics","",""
"uuid:d90b840c-24d5-4056-b790-da848906d092","http://resolver.tudelft.nl/uuid:d90b840c-24d5-4056-b790-da848906d092","Simulation of a Hybrid Short-Term & Long-Term Energy Storage System in Energy Communities","Betere Marcos, Guille (TU Delft Technology, Policy and Management; TU Delft Energy & Industry; TU Delft System Engineering)","Okur, Ö. (mentor); Lukszo, Z. (graduation committee); van der Weijden, Joep (graduation committee); Delft University of Technology (degree granting institution)","2022","The Paris agreement has set European countries on a path towards decarbonization of the energy market. Due to the high dependence of natural gas in the Netherlands, various challenges will be faced when facing out fossil fuels. The major drawback of RES is that they are non-dispatchable, meaning their output generation fluctuates over time with respect to weather conditions, resulting in a temporal mismatch of supply and demand. In order to allow shifting of non-dispatchable loads, short-term and long-term energy storage is required. The objective of this research was the implementation of a self-sufficient hybrid storage system in energy communities, including renewable energy generation, short-term and long-term energy storage. A simulation was conducted to form a techno-economic analysis of the system. The results from this simulation showed that the minimization of total costs is obtained by minimizing the capacities of the hydrogen system, as these represent the most expensive components of the system, and maximizing the PV generation, as it is the cheapest component throughout the lifetime. However, the results showed very high costs due to the high costs associated with the hydrogen system, which makes these systems with hydrogen storage impossible to compete with traditional fossil fuel sources. 
The household electricity price of the energy coming from the fuel cell can be more than three times the electricity price of the national grid.","Storage System; simulation; system; Renewable energy","en","master thesis","","","","","","","","","","","","Complex Systems Engineering and Management (CoSEM)","",""
"uuid:a7befb8f-3206-432b-b41f-c10870730f29","http://resolver.tudelft.nl/uuid:a7befb8f-3206-432b-b41f-c10870730f29","Modeling and scheduling an autonomous sorting system using a switching max-plus linear model","Smeets, Lucy (TU Delft Mechanical, Maritime and Materials Engineering; TU Delft Delft Center for Systems and Control)","van den Boom, A.J.J. (mentor); Ruijs, Mart (mentor); Laurenti, L. (graduation committee); Delft University of Technology (degree granting institution)","2022","Sorting systems form an example of event driven systems. These types of systems are referred to as discrete event systems (DES), and they consist of jobs that need to be performed at available resources. In an autonomous sorting system, jobs consist of robots receiving and delivering parcels at the correct locations. With scheduling, optimal allocation of the jobs to those resources over time is computed, where the decisions that need to be made are routing, ordering and synchronization. The behaviour of DES is often described by non-linear models, but max-plus linear (MPL) systems are a class of DES that can be described by a model that is linear in the max-plus algebra. This algebra uses two operators maximization and addition. Allowing different routes and switching between orders of jobs extends an MPL system to a switching max-plus linear (SMPL) system. Robots in a sorting system often have many routes to choose from, and need to make order choices with respect to other robots in the system.
In this thesis, a general SMPL model is made for the autonomous sorting system at software company Prime Vision, which can be applied to any sorting area design. The solution to the scheduling problem for the model results in a time schedule for the active robots at the correct locations in the sorting area, as well as the optimal decisions on routing, ordering and synchronization. The optimization problem is solved with a model predictive scheduling (MPS) approach and recast as a mixed integer linear programming (MILP) problem. The model is created in Python and the optimization problem is solved with Gurobi. The resulting schedule is visualized with a simulation, in which the decisions of the robots are clearly shown. An idea for implementation of the optimization into the sorting system is given as well.","max-plus algebra; switching max-plus linear system; model predictive scheduling; autonomous sorting system; mixed integer linear programming; Gurobi; Python; discrete event systems; optimization; simulation","en","master thesis","","","","","","","","2024-06-28","","","","Mechanical Engineering | Systems and Control","",""
"uuid:e10bc18a-5b28-4862-8084-e6f41db1d215","http://resolver.tudelft.nl/uuid:e10bc18a-5b28-4862-8084-e6f41db1d215","Efficiently coupling QM and MD for the study of electrode-electrolyte interfaces","Hermans, Sebastiaan (TU Delft Applied Sciences)","Hartkamp, R.M. (mentor); Steeneken, P.G. (graduation committee); Idema, T. (graduation committee); Delft University of Technology (degree granting institution)","2022","In this thesis, a proof of concept was established for the use of a novel coupled QM-MD approach to modelling metallic (copper) electrode-electrolyte interfaces. SCC-DFTB calculations of the instantaneous electronic structure of a copper electrode were coupled to a classical MD simulation of an electrode-electrolyte interface. The applied QM-MD method was described rigorously, and used to investigate the compound distribution and dynamics at the interface, relative to a fully classical MD simulation. Polarisation effects were observed to bring about a significant increase in the attraction between cations and the cathode. Moreover, local polarisation of the cathode was found to immobilise adsorbed cations, and induce an increased orientational preference of the nearby water dipoles. The secondary goal of this thesis was to explore to what extent neural networks are able to replicate SCC-DFTB calculations of the electronic charge density on a metallic electrode. Using a computer vision approach, qualitative evidence was obtained indicating that neural networks can be used to replicate SCC-DFTB predictions on periodic metallic surfaces.","DFT; DFTB; MD-simulation; simulation; Computational; Polarisation; Deep Learning; Neural Networks; electrode-electrolyte interface","en","master thesis","","","","","","","","","","","","Applied Physics","MSc Thesis",""
"uuid:06baf67a-5725-4e4a-bebb-a3bf60dcc25b","http://resolver.tudelft.nl/uuid:06baf67a-5725-4e4a-bebb-a3bf60dcc25b","Target-oriented predator and prey swarm control in obstacle-filled environments","Lupău, Cătălin (TU Delft Electrical Engineering, Mathematics and Computer Science)","Simha, A. (mentor); Sharma, S. (mentor); Venkatesha Prasad, R.R. (mentor); Chen, Lydia Y. (graduation committee); Delft University of Technology (degree granting institution)","2022","In this paper, we explore the creation of control algorithms for swarms of robots playing the role of either predator or prey in an environment filled with static obstacles. The paper devel- ops on a famous flock simulation model proposed by Craig Reynolds called boids. The paper analyzes a zero-sum game situation, in which one swarm of robots, the prey, is trying to reach a certain pre-determined target, while another swarm of robots, the predator, is trying to prevent it from reaching its objective. Swarm control algorithms for both the predator and the prey scenarios are analyzed in an arms race manner. The robots are modelled as point-mass holonomic entities, that can move in arbitrary directions. The proposed algorithms are tested on characteristics such as success rate and time in a simulated environment. As a result, a set of algorithms for both the predator and prey are proposed and their strength and weaknesses are discussed.","predator-prey; obstacle-avoidance; swarm control; boids; UAVs; robot swarms; war robots; military robots; Self-driving cars; collision avoidance; simulation; control algorithm","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:cc9b2441-1bb0-49c7-aafa-ffa2d62bb808","http://resolver.tudelft.nl/uuid:cc9b2441-1bb0-49c7-aafa-ffa2d62bb808","Tiny Chaotic Swimmers Achieving Great Collective Order: A study on the dynamics of self-propelling agents in a bacterial colony","Kersbergen, Mees (TU Delft Applied Sciences)","Idema, T. (mentor); Delft University of Technology (degree granting institution)","2022","“From chaos, order can emerge”, a counterintuitive statement but also one laying the foundation for complex living systems. Individually acting agents can collectively produce organised structures at a larger scale. Possibly the most ubiquitous example of this phenomenon are bacterial colonies.
Everywhere around us, a pandemonium of pushing and pulling produces complex structures. The most researched bacterium is E. coli and partially due to its excellent swimming capabilities, the internal structure of its colony is predominantly shaped through mechanical repulsion coupled with
individual motility. In this thesis, we are interested in the emergent dynamics within a bacterial colony. Our focus will lie on how the colony's density ρ affects the internal structure and motility.
To study the colony's interior, we built a three-dimensional individual-based model (IBM) with self-propelling sphero-cylindrical agents representing E. coli bacteria and governed by mechanical interactions. A downside of IBMs is their computational cost, posing an optimisation challenge which is also covered in this thesis.
A phase transition spontaneously occurs over time from an isotropic to an aligned nematic phase. This transition takes longer for higher-density systems. We found a linear relation between the density and the local order for a colony in a quasi-infinite domain. Furthermore, after equilibration, the particles initially behave ballistically. However, this changes to diffusive behaviour at a later stage. The Reynolds number Re ~ 10^-3 is two orders of magnitude larger than expected for E. coli, possibly due to underestimating the viscosity.
On a final note, the method to determine the moment of equilibrium tequi gives an underestimation in the case of a two-step phase transition; an improved method is proposed.
We propose Radice, an instrument for data-driven analysis of IT-related operational risks in sustainable cloud datacenters. Unlike most state-of-the-art approaches used by the industry, Radice automates the process of risk analysis in datacenters and utilizes the large and diverse volume of data reported by the monitoring systems in datacenters, including environmental data. Underpinning this system is the trace-based, discrete-event simulator OpenDC, which enables the exploration of many risk scenarios through its support for diverse workloads, datacenter topologies, and operational phenomena. Radice’s interactive and explorative user interface assists datacenter operators in addressing complex decisions involving risks, providing them with actionable insights, automated visualizations, and suggestions to reduce risk.
We implement Radice and conduct a comprehensive evaluation of the system to demonstrate how it can aid datacenter operators when confronted with fundamental risk trade-offs. Although Radice is designed to work across many kinds of datacenters, in this work, we focus on private-cloud, business-critical workloads, and on public-cloud operations, representing the majority of workloads in Dutch datacenters. Our experiments show many interesting findings, supporting our claim that data-driven risk analysis is needed in datacenters. We highlight the increasing risk faced by datacenter operators due to price surges in the electricity and CO2 bond markets, and demonstrate how Radice can be used to control such risks. We further show that Radice can automatically optimize topology and operational settings in datacenters for risk, revealing configurations that reduce the overall risk by 10%–30%. Following extensive performance engineering, Radice is able to evaluate risk scenarios 70x–330x faster than alternatives, opening possibilities for interactive risk exploration. We release Radice as free and open-source software for the community to inspect and re-use.","Risk analysis; cloud; datacenter; sustainability; simulation","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:a019afe5-f153-4767-a54f-aedf781c7968","http://resolver.tudelft.nl/uuid:a019afe5-f153-4767-a54f-aedf781c7968","Identification of near-Earth asteroids using multi-spacecraft systems","Vermeulen, Arjan (TU Delft Aerospace Engineering)","Guo, J. (mentor); Heiligers, M.J. (graduation committee); Delft University of Technology (degree granting institution)","2022","For several decades, humanity has worked on cataloguing the population of near-Earth asteroids (NEA's). Knowledge of these small members of the Solar system not only helps in defending Earth from asteroid impacts; study of the population might also provide valuable scientific insights, and open economic possibilities through developments such as asteroid mining. Surveys from Earth currently face challenges in detecting the smallest NEA's: day- and night cycles, weather patterns, and atmospheric distortion have encouraged several recent studies into the possibility of NEA surveys from deep space.
In this thesis, an extension to these proposals is studied: using multiple spacecraft co-operating on the survey. Next to the immediate increase in data-gathering capabilities, such systems offer synergistic benefits. Firstly, spacecraft can be placed in such a way as to cover each other's blind spots caused by e.g. Solar glare. Secondly, imaging from multiple directions allows for triangulation to more quickly determine the orbit. Lastly, such a system could feature implementation of advanced search strategies, for example utilizing a part of the system as follow-up telescopes.
As one of the first works on predicting and optimizing the performance of such a multi-spacecraft NEA survey system, the work aims to provide a foundation on how to compose such a system, and what orbital configurations to select for its operation. A simulation tool was developed which explicitly models a NEA survey, and this model was validated against other research works and surveys. Using this tool, the behavior of the system is studied as various parameters are varied, such as the number of spacecraft, thermal infrared or visual light telescopes, and various orbital elements. Following this, numerical optimization was performed to obtain conclusions with regard to the optimal composition and position of the system. The findings are supported by an investigation into the underlying principles driving the performance of the survey. Ultimately, results are obtained which can be used for more detailed studies into the design of future NEA survey missions, or trade-offs against other concepts.","asteroid; near-earth asteroid; identification; asteroid survey; space system; distributed space system; simulation; optimization","en","master thesis","","","","","","","","","","","","Aerospace Engineering","",""
"uuid:7fd04eec-41d4-4967-b246-89fdfac2446e","http://resolver.tudelft.nl/uuid:7fd04eec-41d4-4967-b246-89fdfac2446e","Modelling, Control, and Handling Quality Analysis of the Flying-V","van Overeem, Simon (TU Delft Aerospace Engineering)","van Kampen, E. (mentor); Wang, Xuerui (graduation committee); Delft University of Technology (degree granting institution)","2022","Over the last five decades, the majority of commercial aircraft consisted of the traditional tube-and-wing configuration. This traditional configuration is approaching a fuel efficiency asymptote. Besides that, with the increasing number of passengers and cargo transported by air every year, and environmental impact as an important factor in aircraft design, there is a necessity for a solution that is able to boost aircraft efficiency. Currently, the faculty of Aerospace Engineering at TU Delft is working on a promising aircraft configuration, namely the Flying-V. This is a specific type of flying wing that is tailless, V-shaped, and consists of two cylindrical pressurised cabins located in the leading edge of the wing. Wind tunnel experiments show that the aircraft is longitudinally statically stable up to an angle of attack of 20¶, after that pitch break occurs. Besides that, research performed on the aerodynamic coefficients obtained using the Vortex Lattice Method and results from the maiden flight test of a scale model of the aircraft conclude that the Dutch roll mode is unstable. Therefore, this research defines a set of key stability and handling quality requirements based on civil aviation authorities combined with military standards for cruise- and approach conditions. These key requirements are consequently assessed with a simulation model of the aircraft using aerodynamic coefficients obtained from the Vortex Lattice Method and wind tunnel experiments. 
In an attempt to make the key stability and handling quality characteristics of the TU Delft Flying-V adhere to the defined requirements, this thesis aims to contribute to this research field by designing an Incremental Nonlinear Dynamic Inversion (INDI) flight control system that is applied to the simulation model of the aircraft. Finally, the performance of the aircraft using this flight control system is assessed and proposals for aerodynamic design changes and control layout design changes are given.","Flying-V; Handling Qualities; flight control; simulation","en","master thesis","","","","","","","","","","","","Aerospace Engineering","",""
"uuid:194f8f0c-516b-4b49-b4e4-2fcb1e45c6f2","http://resolver.tudelft.nl/uuid:194f8f0c-516b-4b49-b4e4-2fcb1e45c6f2","Reverse supply chain improvement strategies for returnable packaging material: A case study at Prysmian Netherlands","Andrian Wicaksono Supriyanto, Andrian (TU Delft Civil Engineering and Geosciences)","Beelaerts van Blokland, W.W.A. (mentor); Vleugel, J (mentor); Middel, Frank (mentor); Delft University of Technology (degree granting institution)","2021","Prysmian Netherlands as a cable producer uses a cable drum to deliver products to their customers. However, these customers do not always participate in returning this returnable packaging material (RPM). Customer willingness and reverse supply chain (RSC) visibility are mentioned as the main contributors. The study aims to develop improvement strategies and assess their impacts through a discrete event simulation (DES). The strategies consider the approach from logistics system design, technological implementation, and compliance policy. The findings suggest that an RPM with a high product residual value such as drums promises a high financial return if recovered. However, maintaining the returned drum condition is of the utmost importance to ensure the recovery operations are fruitful. Moreover, a shorter RL cycle time does have a substantial effect on the RSC efficiency.","reverse supply chain; returnable packaging material; simulation; reverse logistics","en","master thesis","","","","","","","","","","","","","",""
"uuid:f5746161-23fe-4f73-afdf-2e12c69836be","http://resolver.tudelft.nl/uuid:f5746161-23fe-4f73-afdf-2e12c69836be","Simulation of polyolefins waste gasification for chemical recycling applications in Aspen Plus","Zamora Roman, Conrado (TU Delft Mechanical, Maritime and Materials Engineering)","de Jong, W. (mentor); Cutz, L. (graduation committee); Gilvari, H. (graduation committee); Eral, H.B. (graduation committee); Stikkelman, R.M. (graduation committee); Delft University of Technology (degree granting institution)","2021","The Dutch government program “ A circular economy in the Netherlands by 2050“ prioritizes the 100% recycling of plastics used in the country by 2050 to reduce the consumption of fossil resources and increase the value of the plastic waste, which is currently incinerated or exported in its majority [1]. This objective can be facilitated by including chemical recycling techniques to recover valuable chemicals such as syngas (H2/CO) and monomers (ethylene/propylene) from plastic waste [2]. Among the chemical recycling techniques, gasification is a mature technology with the highest flexibility on the feedstock composition, allowing to treat complex mixtures as plastic waste [3]. In this framework, the project “Towards improved the circularity of polyolefin-based packaging” evaluates the technology readiness level of gasification for recycling a plastic waste mixture representative of the packaging sector (39.6% of the European plastics demand in 2019 [4]), to increase the knowledge of polyolefins waste (PW) gasification to contribute in closing the plastics loop [5]. The Process and Energy Department of TU Delft is part of this project and is responsible for gasifying a polyolefins waste mixture representative of the packaging sector (PW-DKR350) in a novel Indirectly Heated Bubbling Fluidized Bed Steam Reformer (IHBFBSR) [6]. 
This thesis focuses on developing a kinetic model of the IHBFBSR, which describes the bed hydrodynamics according to the two-phase theory (TPM) in Aspen Plus, as a complementary tool for the validation of the experimental work and to narrow down the number of laboratory tests by identifying the gasification parameters (temperature, ER and SF ratios) that optimize the following key performance indicators: carbon conversion efficiency (CCE), cold gas efficiency (CGE), product gas yield (GY) and tar yield (TY). This document describes the development of the TPM-IHBFBSR model. It starts with a literature review of the most-used modelling approaches for carbonaceous materials. Next, it describes the upgrading strategies applied, according to the equilibrium and kinetic approaches, emphasizing the hydrodynamic models and simulation settings. In this part, the optimal gasification parameters were identified: 680°C<T<800°C, ER=0.15 and SF=2. Finally, the comparison of the TPM-IHBFBSR model and its previous versions against two validation cases found in the literature highlights the advantage of having developed a model adaptable to a particular PW mixture, making it possible to continue improving it.","Aspen Plus; Gasification; polyolefins; simulation","en","master thesis","","","","","","","","2022-10-15","","","","Mechanical Engineering | Process and Energy Technology","Towards improved the circularity of polyolefin-based packaging",""
"uuid:7ba53425-353f-4415-ba9e-f6e1c6965a4f","http://resolver.tudelft.nl/uuid:7ba53425-353f-4415-ba9e-f6e1c6965a4f","Criticality in the Abelian sandpile model","van Tol, Berend (TU Delft Applied Sciences; TU Delft Electrical Engineering, Mathematics and Computer Science)","Redig, F.H.J. (mentor); Thijssen, J.M. (mentor); Delft University of Technology (degree granting institution)","2021","In this thesis we study criticality in the context of the dissipative Abelian sandpile model. The model is linked to a simple trapped random walk, giving a practical method to determine criticality for certain landscapes of dissipative sites. The main results concern the lifetime of the random walk, especially the divergence of its first moment for traps placed on spherical shells. For the one dimensional case the point of divergence is determined with reasonable precision. In higher dimensions the divergence is shown to be possible for an infite amount of shells. The connection between the sandpile model and a random walk is shown mathematically and further researched via simulation.","Markov Process; Probability Theory; random walk; self-organized systems; simulation","en","bachelor thesis","","","","","","","","","","","","Applied Mathematics | Applied Physics","",""
"uuid:7d67baa1-6e28-407a-9cab-9cd67e592d8e","http://resolver.tudelft.nl/uuid:7d67baa1-6e28-407a-9cab-9cd67e592d8e","Applying Constraint Programming To Enterprise Modelling","Andringa, Sytze (TU Delft Electrical Engineering, Mathematics and Computer Science)","Yorke-Smith, N. (mentor); van Essen, J.T. (graduation committee); van der Wal, C.N. (graduation committee); Delft University of Technology (degree granting institution)","2021","Enterprise Modelling (EM) is the process of producing models, which in turn can be used to support understanding, analysis, (re)design, reasoning, control and learning about various aspects of an enterprise. Various EM techniques and languages exist, and are often supported by computational tools, in particular simulation. The goal of this thesis is to study the effects and advantages of applying constraint programming (CP) to EM. To the best of my knowledge, no previous study has explicitly combined EM and CP. On the topic of applying CP to EM, this thesis explains where it can be applied, as well as its requirements and advantages. Furthermore, it explains a possible approach where a neural network, trained on a simulation model that represents an enterprise model, is embedded into a constraint program. This approach is supported with experiments, that show typical business objectives can be embedded in a constraint program and find solutions to it in a multi-objective context. The main conclusion is that due to CP being a declarative programming technique, business constraints and goals can be effectively modelled into a constraint program, making the approach understandable and intuitive for business analysts to use. This thesis argues alternative approaches to apply CP to EM can also be realised. 
Some of these, as well as improvements over the proposed method, are also discussed.","Enterprise Modelling; Constraint Programming (CP); machine learning; simulation; Optimisation; socio-technical systems","en","master thesis","","","","","","https://github.com/SytzeAndr/EM_to_CP Repository link GitHub repository with supplementary code","","","","","","Computer Science","",""
"uuid:d01066a8-1c44-40bf-81b2-9416aaea7e98","http://resolver.tudelft.nl/uuid:d01066a8-1c44-40bf-81b2-9416aaea7e98","Analysing the Effect of Asymmetry on the Performance of Atomic Ensemble Based Repeater Protocols","Jirovská, Hana (TU Delft Electrical Engineering, Mathematics and Computer Science)","Maier, D.J. (mentor); Wehner, S.D.C. (graduation committee); Gerritsen, B.H.M. (graduation committee); Delft University of Technology (degree granting institution)","2021","There has been a lot of research focused on the next generation of the internet, the so-called quantum networks. This analysis has been so far limited to mostly symmetrical architectures, but any near-term realisations of quantum networks using existing fibre topologies will contain asymmetry. In this thesis, we investigate how midpoint asymmetry affects quantum repeater protocols implemented with atomic ensembles. We extend the existing simulation framework to allow for midpoint asymmetry. By simulating asymmetry in elementary links, we show that the performance of an elementary link executing quantum key distribution decreases with an increasing degree of asymmetry. This effect can be mitigated by individual optimisation of photon sources at both ends of the elementary link. We present a way how to reduce the search space of such optimisations by developing a heuristic. The contributions of this thesis provide a crucial starting point for investigations of asymmetry in quantum repeater chains.","quantum networks; simulation; atomic ensembles; repeater protocols","en","bachelor thesis","","","","","","","","","","","","","",""
"uuid:6b948e04-29eb-4040-bc6a-9d126663bd4c","http://resolver.tudelft.nl/uuid:6b948e04-29eb-4040-bc6a-9d126663bd4c","The influence of improving repair quality on the unpredictability of the demand of spare parts in aviation","Heijenrath, Lars (TU Delft Aerospace Engineering)","Verhagen, W.J.C. (mentor); Delft University of Technology (degree granting institution)","2021","This study provides new insights in the problem of lumpy aircraft spare parts demands by incorporating new drivers that have an impact on the failure patterns of aircraft components. The study introduces a model and presents corresponding results that obtains component failure characteristics based on data from an aircraft manufacturer. A Monte Carlo simulation technique is used to take dierent repair qualities, fleet sizes, environmental conditions and shared component pool strategies into account. The outcome is evaluated to capture the impact of these parameters. Based on the occurring patterns of the failures, the demand patterns can be inferred. The study confirms the conclusion from previous research that the fleet size is the main contributor to the unpredictability of the demands of spare parts, but notes that this conclusion is not always usable in practice, as practical limitations regarding the extension of fleets are in play. The study concludes that an improvement of the repair quality is beneficial for the variance of the demand and the total amount of failures over time.","spare parts; demand; Aviation; simulation","en","master thesis","","","","","","","","","","","","Aerospace Engineering | Air Transport and Operations","",""
"uuid:7f0438f2-129d-44ff-889e-f5f922c8ebb2","http://resolver.tudelft.nl/uuid:7f0438f2-129d-44ff-889e-f5f922c8ebb2","Finding the optimal set of parking locations for maintenance trains in the Dutch railway network: An optimisation approach using a combination of discrete-event simulation and simulated annealing","Olberts, Olof (TU Delft Mechanical, Maritime and Materials Engineering)","Atasoy, B. (mentor); Negenborn, R.R. (graduation committee); van Biert, L. (graduation committee); Boiten, Wubbo (mentor); Hofstra, Klaas (mentor); Delft University of Technology (degree granting institution)","2021","The Dutch railway system is subject to maintenance, which is carried out by a group of rail contractors who need parking space to park and prepare trains in between projects. The objective of this study is to minimise the total costs as a result of the distance travelled by maintenance trains and the preservation of the included parking locations in the Dutch railway network. According to the dynamic nature and the NP-hardness of the Facility Location Problem (FLP), a Simulation-Optimisation (Sim-Opt) approach is proposed. This Sim-Opt consists of a Discrete Event Simulation (DES) and Simulated Annealing (SA) optimisation. Two neighbourhood functions are evaluated: a Random Search Algorithm (RSA) and a Utility Level Search Algorithm (ULSA), which takes the utility level of parking locations into consideration.
This research shows that DES is a feasible evaluation method for finding possible solutions to the FLP in a Sim-Opt approach. The results of this evaluation show that the ULSA converges sooner than the RSA. Furthermore, the spread in best solutions across all instances is tighter when applying the ULSA. The findings indicate that the ULSA gives more robust solutions when compared to the RSA and makes the SA process more efficient.
This research found that the best solutions in terms of overall cost include on average 22 parking locations, which can reduce the maintenance cost by ~30%. The average increase in travel costs of ~9% does not justify the use of more capacity than the absolute minimum to house all trains.","Facility Location; Maintenance; Train; Railways; network; simulated annealing; Discrete Event Simulation; Optimisation; Parking; Infrastructure; Neighbourhood Function; Capacity; Dutch; Algorithm; simulation","en","master thesis","","","","","","","","","","","","Marine Technology | Transport Engineering and Logistics","",""
"uuid:e1a24e50-de78-44cb-8f65-e5963e16032e","http://resolver.tudelft.nl/uuid:e1a24e50-de78-44cb-8f65-e5963e16032e","Simulation of an underground cut and fill mine: A simulation approach using SimMine to determine the systems bottlenecks and the added value of additional miners in the production shift","van de Stadt, Michael (TU Delft Civil Engineering & Geosciences)","Soleymani Shishvan, M. (mentor); Keersemaker, M. (mentor); Lottermoser, Bernd (mentor); Guerrero, Dr. Rodrigo Serna (mentor); Delft University of Technology (degree granting institution); Aalto University (degree granting institution); Rheinisch-Westfälische Technische Hochschule (degree granting institution)","2021","The case study consist of a small underground mine with a small mining crew. The vehicle park is relatively large, and therefore it is necessary to establish the added value of additional miners or equipment for short-term production planning purposes, assuming that staff size currently limits production capacity to find out if staff size is indeed the bottleneck in the production capacity of the mine operation. When the bottlenecks of the mining system are known, it will be easier to focus on necessary areas and further implementations to improve the system.
The truck numbers used in the simulation study ranged from 4 to 7, and the operator pool size ranged from 10 to 15 people. A significant finding of this study is that with the current mine setup of 4 trucks, there would be no increase in production when adding operators. For the 24 scenarios, the production increase, the revenue change and the mining cost were determined. By adding trucks and operators, a production increase of 19.38% could be reached with 15 operators and 7 trucks.
process, that the PSCAD/EMTDC phase model can yield sufficiently accurate results with reference to the lab measurements. In the next stage, the questioned strengths and weaknesses of each original design setup are determined through a series of lab tests and computer-aided simulations, by utilizing which the best setup is recognized. Seeking approaches for further optimization of the setups is another aim of this research. This thesis proves that the PSCAD/EMTDC phase model is capable of presenting an acceptably accurate model of the circuits, despite having some limitations that will be explained. Operational versions of all setups, in which the fundamental problems are cleared up, will also be proposed in this thesis. Among all the setups, the Core Pulse Injection (CPI) and Table Pulse Injection (TPI) practices offer the best non-optimized (original design) and optimized results, respectively, in terms of the quality of the delivered voltage across the target dielectric, while the original designs of Double Side Pulse Injection (DSPI) and Single Side Pulse Injection (SSPI) seem to be unreliable due to severe oscillatory behaviours. Nevertheless, two modified versions of DSPI and SSPI show quality results.
In this research, experiments and modelling have been performed to study combustion using 100% PFI methanol. Measurements are realized with varying ignition timings, NOx emission settings, and manifold temperatures. Data collected during these measurements, such as in-cylinder pressures, emissions, and temperatures, provided a comparison between running the engine on methanol or natural gas. In this comparison, combustion stability is determined with the coefficient of variation (COV) of Pmax and of imep, optimum ignition timing is determined, and engine efficiency is calculated and compared to NG. Modelling is accomplished with a TU Delft model of the G3508A SI engine adjusted for the use of 100% methanol as a fuel. A modified sub-model for the PFI and vaporization of methanol has been developed. These engine data will be used to validate the methanol engine model, to optimise the engine performance for further experimental runs, and to better understand the use of methanol as a fuel.
This work shows the effect on the performance and the combustion when 100% methanol is used as fuel for a SI PFI engine, compared to premixed injection of natural gas. The engine operates stably on methanol at 50% and 75% load within ignition timings of 16-24 °CA BTDC, but less stably than with NG. Heat release indicates an almost similar, though slightly shorter, combustion duration for methanol. Also, with methanol the crank angle where 50% of the fuel is burnt (CA50) occurs earlier compared to NG. The faster premixed combustion, combined with a better fuel consumption operating point that was found, resulted in higher efficiencies for methanol compared to NG at the tested 50% and 75% load under comparable operating conditions.
noise, also from offset. The presence of this noise can significantly decrease the performance of a decoder using the Euclidean distance. To negate the effects of offset, a new distance, the modified Euclidean distance, was introduced, which offers immunity to offset. However, the modified Euclidean distance is less resistant to noise, which calls for methods to improve its resistance. The cosets of Hamming codes, constant weight codes and Berger codes are discussed and simulated to investigate their performance with both distances. These codes are compared to each other for the Euclidean distance and the modified Euclidean distance.","Coding Theory; offset; simulation; Optimization","en","bachelor thesis","","","","","","","","","","","","Applied Mathematics","",""
"uuid:5c0e1f49-bd74-4249-a179-691a5c2f775b","http://resolver.tudelft.nl/uuid:5c0e1f49-bd74-4249-a179-691a5c2f775b","Identifying and improving the internal and external information exchange for shipbuilding processes","Douma, E.T. (TU Delft Mechanical, Maritime and Materials Engineering)","Pruijn, J.F.J. (mentor); de Vos, P. (graduation committee); Jiang, X. (graduation committee); Veraart, Hans (graduation committee); de Groot, Jos (graduation committee); Delft University of Technology (degree granting institution)","2020","To identify and improve the communication or information flow for a ship production process, this thesis provides an approach to use a Discrete Event Simulation (DES) model, based on Petri Net technique. By establishing a theoretical framework, the different simulation modelling options are analysed and based on practical requirements, a choice is made. Subsequently, a stakeholder analysis is performed and the building of the conceptual and mathematical model is discussed. Also, the verification and validation strategies are discussed and the results of experiments for a case-study are provided.","information flow; ship production process; shipbuilding; simulation; discrete event simulation; DES; communication; petri-net","en","master thesis","","","","","","","","","","","","Marine Technology | Ship Design","",""
"uuid:e16a9fdc-5f7f-4588-8f69-a64f07c24879","http://resolver.tudelft.nl/uuid:e16a9fdc-5f7f-4588-8f69-a64f07c24879","Performance evaluation of the CWI BRDF-fitting method under cloud-contaminated conditions: A numerical experiment using PROSAIL","Klein, Jigme (TU Delft Civil Engineering & Geosciences; TU Delft Geoscience and Remote Sensing)","Menenti, M. (mentor); Lhermitte, S.L.M. (graduation committee); Lindenbergh, R.C. (graduation committee); LIU, Qinhuo (graduation committee); Delft University of Technology (degree granting institution)","2020","Remote retrieval of Normalized Difference Vegetation Index (NDVI) over the Earth’s surface is a critical component of monitoring the surface processes of our planet. NDVI is a widely used and useful indicator of vegetation health and quantity however its retrieval using satellite data is hindered by the frequent presence of clouds in the Earth’s atmosphere. Zeng et al. (2016) developed a novel technique that estimates a surface's Bidirectional Reflectance Distribution Function (BRDF) with a RossLiMaignan (RLM) BRDF model from a set of observations. This method, the ChangingWeight Iterative (CWI) method, uses iterative a posteriori estimation of observation errors to reduce the impact of cloud-contaminated measurements in the sample. Its performance was compared to two conventional methods, ordinaryleast squares (OLS) and LiGao BRDF-fitting. The three different BRDFfitting methods were compared in a numerical experiment. 6,000 surface types covering a broad range of surface types were modeled using the canopy radiative transfer model PROSAIL. For each surface, sets of pseudo-observations of the surface’s red and NIR band reflectance were generated using realistic suntarget view geometries from the MODIS and MERSI satellite sensors. The effects of cloudcontamination were simulated by adding different numbers of cloudcontaminated observation to the sample, with varying degrees of contamination. 
The RLM BRDF model was fitted to these samples using the three different methods to estimate the BRDF model parameters. These were subsequently used to calculate a NDVI composite value. Each method’s estimate was compared to a reference value generated by PROSAIL. Results for the 6,000 surfaces confirmed that the CWI method is more noise-resistant than OLS and Li-Gao in situations with many observations (i.e. a large sample), and resulted in estimates that more closely matched the reference value from PROSAIL, compared to the conventional Li-Gao and OLS methods. In scenarios of low cloud contamination, all three methods failed to detect and significantly suppress the impact of noisy observations, which was expected from existing literature. For a large-sized sample of 13 pseudo-observations studied for the validation site Mongu, Zambia, the CWI method was observed to have a very accurate performance, for up to 5 contaminated observations in the sample. With smaller-sized samples of 8 and 10 for two other validation sites, it was found that the RMSE of the CWI method would suddenly increase approximately tenfold when the number of contaminated observations increased beyond 2 and 3, respectively. After these ’tipping points’, the Li-Gao method was more accurate and outperformed CWI. The CWI method therefore performed promisingly when given a large enough sample size, and in these cases it was more accurate than the conventional Li-Gao and OLS methods. However, when it fails to correctly identify noisy observations, its accuracy could decrease suddenly, which should be taken into consideration for operational use.
Since the results of the experiment were averaged over 6,000 different sampling points of the PROSAIL model's parameter space, it is suggested that the conclusions apply to a wide range of surface types found all over the Earth.","Remote Sensing; BRDF; NDVI; simulation; PROSAIL; reflectance","en","master thesis","","","","","","","","","","","","Geoscience and Remote Sensing","","40.005139, 116.383628"
"uuid:5adcdd5d-5e6d-4128-88a8-f21ac475476b","http://resolver.tudelft.nl/uuid:5adcdd5d-5e6d-4128-88a8-f21ac475476b","Evaluation of Time-window Trajectories with respect to Fuel Consumption and Arrival Time","Mesfum, Johannes (TU Delft Aerospace Engineering)","Mitici, Mihaela (mentor); Verbeek, René (graduation committee); Delft University of Technology (degree granting institution)","2020","The flight planning process is an extensive and long process to direct and maintain a high level of operations within the airspace. As air traffic demand grows year after year, it's worthwhile to optimise the European air traffic system further. One way of optimising the system, is by creating optimal flight schedules that solve for demand-capacity imbalances. These schedules require an evaluation with respect to punctuality and fuel consumption in the tactical phase. For this article, an air traffic model has been developed in BlueSky to simulate air traffic while taking the variance of wind and delay into account. Subsequently, the performance of several optimised schedules will be assessed and compared with respect to wind, speed changes, punctuality and fuel consumption.","scheduling; time-window; punctuality; fuel consumption; ensemble weather forecast; BlueSky; contract of objectives; demand-capacity imbalances; trajectory analysis; cruise altitude; Fuel efficiency investments; BADA; simulation; FMS; TIGGE; ECMWF; delayed flights; Delay reduction; air traffic simulation","en","master thesis","","","","","","","","","","","","Aerospace Engineering","",""
"uuid:635d3484-fcda-44f7-83bf-46525ddf6069","http://resolver.tudelft.nl/uuid:635d3484-fcda-44f7-83bf-46525ddf6069","Comparison of Power-to-X-to-Power technologies for energy storage in 2030","Boom, Senja (TU Delft Electrical Engineering, Mathematics and Computer Science)","Ramirez Ramirez, Andrea (mentor); Cvetkovic, Milos (graduation committee); Moncada Botero, Jonathan (graduation committee); Delft University of Technology (degree granting institution)","2020","The energy transition is advancing rapidly, and the Dutch electricity grid is changing with it. Increasing shares of variable renewable energy sources create mismatches between electricity supply and demand. These mismatches create a need for large-scale energy storage. Existing large-scale energy storage technologies are pumped hydro energy storage and compressed air energy storage, but their storage potential in the Netherlands is limited. The alternative is to store energy in chemical bonds, for example by producing hydrogen, and regenerate the electricity later. This type of storage is called a Power-to-X-to-Power (PtXtP) system. Power-to-X technologies have existed for a long time. This
report evaluates their potential in an energy storage application. First, PtXtP systems are compared to compressed air energy storage (CAES) and pumped hydro energy storage (PHES). The geographic potential of CAES and PHES in the Netherlands is limited for both technologies, proving that large-scale energy storage is a challenge to which PtXtP systems could provide a solution. Next, all PtXtP technologies are investigated and compared based on available literature. The three technologies with the most potential (hydrogen, ammonia and methane) are further investigated. This report gives a current and comprehensive overview of data on PtXtP system components, including amongst others their OPEX, CAPEX, efficiency and energy use. This data was used as input for several models of hydrogen, methane and ammonia storage systems, to determine system cost and performance in a dynamic system. Simulations are run with these PtXtP systems as energy storage technologies for a 1GW wind park. The simulations are used to identify main system bottlenecks, investigate the impact of intermittent use on system performance, and evaluate the potential of a PtXtP storage system. The first important bottleneck is the size of the hydrogen buffer required for operation of the Haber-Bosch reactor and Sabatier reactor. It is as large as or larger than the storage capacity in a hydrogen storage system. The second bottleneck is the size of ammonia and hydrogen fuel cells. The required fuel cell power is 425 MW, which is larger than any current or expected fuel cells. Next the simulations were used to investigate the performance of a PtXtP system as energy storage medium in a VRES system. The first important finding is the tradeoff between system flexibility and system sizing. An ammonia system with 33%-100% flexibility can be 60% smaller than a 0%-100% system, while still processing the same annual amount of hydrogen.
Intermittent system use increases the levelized cost of storage significantly, in these models by a factor of 2.2 to 4, due to the unchanged CAPEX, which must be paid for a reduced system output. An important finding related to the PtXtP system itself is that the cost and energy consumption of hydrogen transport and storage are relatively small compared to the energy conversion steps. The electrolyser proved to be the system component with the highest cost and energy loss. Finally, the added value of the storage system is a significant wind park size reduction. The 1 GW wind park could be reduced in size by 35% when connected to a hydrogen storage system, while still meeting the same demand. In addition, zero grid exchange can only be achieved when implementing a storage system. With lower shares of grid exchange, storage becomes increasingly valuable. The overall conclusion drawn is that hydrogen and methane systems seem to have the most potential for energy storage purposes. The report also shows that energy storage is necessary, and that no alternatives to PtXtP are available in the Netherlands. PtXtP will therefore have to play a large role in the future Dutch electricity grid. However, use of PtXtP storage will increase the price of electricity, and several technological developments, mostly scale-ups, are necessary before a PtXtP
system is feasible.","Power-to-X-to-Power; Energy Storage; energy storage system; simulation; 2030; PtXtP; Hydrogen; ammonia; methane; Haber-Bosch Process; Sabatier; Electrolyser","en","master thesis","","","","","","","","","","","","Electrical Engineering | Sustainable Energy Technology","",""
"uuid:0f33eba1-ab14-4712-aee0-2471849cc034","http://resolver.tudelft.nl/uuid:0f33eba1-ab14-4712-aee0-2471849cc034","Enhancing the productivity of a one-way bottling and packaging production line: A case study at Heineken Zoeterwoude","de Vries, Ismay (TU Delft Civil Engineering and Geosciences)","Negenborn, Rudy (mentor); Ludema, Marcel (graduation committee); Duinkerken, Mark (mentor); Kogeler, Eric (mentor); Delft University of Technology (degree granting institution)","2020","This thesis investigates ways in which the productivity of a one-way bottling and packaging production line can be increased, by comparing different configurations of machine capacities, machine reliability and buffer capacities. This is done using both a generic model and a case study, in which the different configurations are compared and combined with the use of a simulation model. It can be concluded that which shape of protective capacity is the most productive depends on the reliability and the machine capacities. Likewise, the place in the system where the largest amount of buffer capacity should be located depends on the reliability of the machines that are part of the production line.","production; production line; productivity; bottling; simulation; heineken; brewery; one-way","en","master thesis","","","","","","","","","","","","Civil Engineering | Construction Management and Engineering","",""
"uuid:32a90402-a577-4383-afe3-f8a865a287dc","http://resolver.tudelft.nl/uuid:32a90402-a577-4383-afe3-f8a865a287dc","Never Landing Drone","de Jong, C.P.L. (TU Delft Aerospace Engineering)","Remes, B.D.W. (mentor); de Croon, G.C.H.E. (graduation committee); Delft University of Technology (degree granting institution)","2020","Increasing endurance is a major challenge for battery-powered aerial vehicles. A method is presented which makes use of an updraft around obstacles to decrease the power consumption of a fixed-wing, unmanned aerial vehicle. Simulation results have shown the conditions in which the flight controller can fly.
The effect of a change in wind velocity, wind direction and updraft has been analysed. The simulations showed that an increase in either updraft or absolute wind direction decreases the throttle consumption.
A change in wind velocity results in a shift of the flight controller’s boundaries. The simulations achieved sustained flight at 0 per cent throttle. The practical, autonomous tests reduced the average throttle to 4.5 per cent in front of the boat. The unfavourable wind conditions and inaccuracies explain this residual
throttle requirement during the final experiment.","soaring; updraft; orographic lift; fixed-wing; flight control; simulation; practical tests","en","master thesis","","","","","","","","2021-01-07","","","","Aerospace Engineering","",""
"uuid:9c8dd410-fed7-402a-97a4-b58ee5bf21e6","http://resolver.tudelft.nl/uuid:9c8dd410-fed7-402a-97a4-b58ee5bf21e6","Inclusion of the lesion of chronic stroke patients into a volume conduction model: Simulating the influence of the lesion on the electric field distribution generated by tDCS","Jeukens, Floor (TU Delft Mechanical, Maritime and Materials Engineering)","Schouten, A.C. (mentor); Manoochehri, M. (mentor); Geelen, J.E. (graduation committee); Hunyadi, Borbala (graduation committee); Piastra, Maria Carla (mentor); van der Cruijsen, Joris (mentor); Delft University of Technology (degree granting institution)","2019","Stroke is a cerebrovascular disorder with 15 million cases every year worldwide. The most common symptom is motor deficits. In order to overcome such symptoms, the brain either repairs the damaged tissue or reorganises to compensate for the injured brain region. To stimulate this reorganisation, transcranial Direct Current Stimulation (tDCS) is considered to be a promising therapeutic intervention. Simulations of electric field distributions generated by tDCS currently entail individualised volume conduction models to improve tDCS. A volume conduction model includes geometry and conductivity properties of tissue types in healthy subjects. When applying existing models to chronic stroke subjects, electric field distribution patterns differ substantially compared to healthy subject distribution patterns. In current models, the lesion is not identified and acknowledged as a distinctive tissue type, as it is yet unclear what the influence of the lesion is. However, the lesion is a potential source of variability in the desired electric field distribution, which could result in different motor recovery. A volume conduction model is designed by combining the software SimNIBS, which can segment the head of healthy subjects, and LINDA, which can distinguish lesion tissue of chronic stroke subjects. 
The location and the conductivity value of the lesion seem to influence the electric field distribution of tDCS, making this individualised model preferable. Including the lesion is an important advance towards the use of volume conduction models for chronic stroke subjects to prospectively find optimal electrode configurations, maintain safety margins and analyse the results of tDCS.","tDCS; Lesion; simulation; Volume conduction model","en","master thesis","","","","","","","","","","","","","",""
"uuid:fc2f383e-a692-4479-94ac-a4054daa138e","http://resolver.tudelft.nl/uuid:fc2f383e-a692-4479-94ac-a4054daa138e","Dynamic electrical behaviour of a solar powered methanol micro-plant","Blankert, Olivier (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Photovoltaic Materials and Devices; ZEF B.V.)","Smets, A.H.M. (mentor); Delft University of Technology (degree granting institution)","2019","Liquid fuels are projected to account for 88% of total energy use in the transportation sector in 2040. The low cost of renewable energy poses a new opportunity for decarbonisation of the transport sector. As a liquid energy carrier, methanol offers a promising solution for alternative fuels in aviation, shipping and long-distance trucking. Zero Emission Fuels (ZEF) is developing a state-of-the-art autonomous solar PV powered methanol micro-plant. By using air, sunlight, alkaline electrolysis and a synthesis reaction, renewable liquid methanol is produced. The electrical system of the micro-plant is complex and comprises various components and actuators. In order to understand the dynamic short-term electrical behaviour of the system, a simulation tool is developed in MATLAB and Simulink. Mathematical models of a 300 W PV panel, an alkaline electrolyser cell stack, a DC-DC (buck) converter, a cartridge heater, a Peltier element, a compressor, a brushless DC fan and a solenoid valve are built or adopted. Joined together in a model of the micro-plant system, they predict the electrical interplay on a 20 μs timescale. Simulations using various scenarios and different irradiation levels show several bottlenecks prohibiting desired operation. The main problem uncovered is the discontinuous switching behaviour of the buck converter, causing distortions in the main circuit and adversely affecting the power generation of the PV panel. 
The addition of a buck converter input filter showed significantly improved micro-plant performance and more stability during operation. The observations regarding plant performance and response to disturbances show the importance of the dynamic simulation tool that was developed. It gives realistic insight into the dynamics of the electrical interplay of components, and it shows the shortcomings of, and possible improvements to, the original system design.","Dynamic Analysis; solar-to-fuel; Alkaline electrolysis; Methanol synthesis; micro-plant; electrical; simulation; system; autonomous; Direct air capture; direct coupling; Renewable Energy","en","master thesis","","","","","","","","2021-10-23","","","","","",""
"uuid:9a5789ee-17d6-49a2-929b-bef87ed31ca3","http://resolver.tudelft.nl/uuid:9a5789ee-17d6-49a2-929b-bef87ed31ca3","Molecular Dynamics Simulations of Non-Photochemical Laser-Induced Nucleation: Electrolyte Clustering by Nanoparticle Heating","van Waas, Tom (TU Delft Applied Sciences)","Hartkamp, Remco (graduation committee); Thijssen, Jos (mentor); Delft University of Technology (degree granting institution)","2019","Non-photochemical laser-induced nucleation (NPLIN) is a process where a crystalline phase is formed out of solution by exposure to a laser beam. In NPLIN, the nucleation probability is strongly dependent on the beam intensity and weakly dependent on the wavelength. NPLIN offers a feasible alternative to energy-intensive industrial crystallisation methods. Although several mechanisms have been proposed, little is known about NPLIN at the molecular level. Some theories suggest that nucleation rates are enhanced through the heating of nanoparticles by absorption of electromagnetic energy. In this work, molecular dynamics simulations are performed on the clustering of ions in the vicinity of a heated nanoparticle in an aqueous supersaturated KCl solution. The spherical symmetry of a spherical nanoparticle in solution is exploited by modelling a laterally periodic water column of an initial length of 500 Angstrom enclosed between a planar iron(III) oxide nanoparticle surface and a graphene piston. A cavitation bubble is formed after nanoparticle heating, leading to an increase of clustered ions according to a bond order criterion. The clustering should correspond to nucleation in experimental systems because the clusters satisfy Delta G < 0 for locally valid NPT ensembles. The results corroborate concentration based NPLIN mechanisms as the clustering is visibly induced by local solute evaporation. Pressure based mechanisms appear ineffective because no effects of pressure waves are observed. 
Thermostatting the graphene sheet does not yield observable dissipation of the thermal energy generated through the nanoparticle heating and at the ion clustering sites, obstructing completion of the cavitation cycle within a feasible simulation duration. It is suggested to repeat the simulations using a conically shaped system and a piston of higher transverse thermal conductivity.","non-photochemical; laser; nucleation; crystallisation; molecular; dynamics; simulation; electrolytes; clustering; nanoparticle; solution; potassium; chloride","en","bachelor thesis","","","","","","","","","","","","Applied Physics","",""
"uuid:cbaf7f4f-ae81-4f54-ab75-c830d8c5cf6e","http://resolver.tudelft.nl/uuid:cbaf7f4f-ae81-4f54-ab75-c830d8c5cf6e","Composite Cylindrical Shell Buckling: Simulation & Experimental Correlation","Eberlein, David (TU Delft Aerospace Engineering; TU Delft Aerospace Structures & Materials; TU Delft Aerospace Structures & Computational Mechanics)","Bisagni, Chiara (mentor); Bergsma, Otto (graduation committee); Giovani Pereira Castro, Saullo (graduation committee); Delft University of Technology (degree granting institution)","2019","Guidelines dating back 50 years, NASA SP-8007, are employed today in the design of thin-walled launch vehicle structures. Due to advances in materials, structural designs, and manufacturing techniques since the publication of SP-8007, the development of new knockdown factors for contemporary launch vehicle structures is an ongoing subject of research. The work presented herein was performed in collaboration with the NASA Engineering and Safety Center on the Shell Buckling Knockdown Factor Project. A laboratory-scale composite cylindrical shell test article, which had previously been designed according to a novel scaling methodology, was the subject of simulation and testing. Its inner, outer, and boundary surface imperfection signatures were measured and implemented in finite element models for buckling test simulations. These were then used to provide prediction data for an experiment conducted at NASA Langley Research Center. Buckling loads from the two pre-test analyses were within 0.08% and 3.7% of the experimental buckling load. The concurrence of axial shell stiffness, localized strains, and buckling shape evolution was also established between the experiment and simulations. A slight loading imperfection was found during the test; however, it was demonstrated through post-test analyses that this did not affect the buckling load substantially. 
The test article's 0.91 normalized buckling load was much higher than the 0.59 knockdown factor specified by SP-8007. The correlation between the experimental and simulation results, as well as their contrast with SP-8007's prescription, suggests that directly measured imperfections are capable of playing a role in the development of modern and potentially less conservative knockdown factors for future launch vehicle structures.","Finite Element Analysis; buckling; cylindrical; shells; composites; carbon fiber; NASA; shell; simulation; nonlinear; dynamic; experiment; experimental; Abaqus; finite element method; Finite Element Modeling","en","master thesis","","","","","","","","","","","","Aerospace Engineering","",""
"uuid:519a3e8a-b04a-4986-8b90-fb294dd750f5","http://resolver.tudelft.nl/uuid:519a3e8a-b04a-4986-8b90-fb294dd750f5","Feasibility Study for PV Park in TU Delft Campus Zuid","Vasileiadis-Sakatias, Thanasis (TU Delft Electrical Engineering, Mathematics and Computer Science)","Ziar, H. (mentor); Isabella, O. (mentor); Smets, A.H.M. (graduation committee); Cvetkovic, M. (graduation committee); Meerburg, D.A.N. (graduation committee); Delft University of Technology (degree granting institution)","2019","TU Delft, being at the frontier of research and progress in Europe and worldwide, is always interested in exploring the possibilities of renewable energy production. In that respect, the Facility Management Department (FMD) of the university approached the Photovoltaic Materials and Devices (PVMD) group, whose research covers a wide area in the renewable energy sector, in order for the solar potential of an extensive available area in the south of the TU Delft campus to be investigated. This thesis project aims at shining a light on the latter, while also proposing a best business case scenario for realization. In order to deal with the issue at hand, a complete MATLAB-based modelling tool has been developed for the simulation and evaluation of PV module and PV system performance. Additionally, a location survey was conducted, which resulted in the recreation of the skyline profile and the study of the reflectivity of the ground. In case of using a bifacial PV module, the PVMD Toolbox can be integrated in the approach. Using the developed modelling approach, different PV technologies were investigated. The bifacial mono-Si PV modules by LG were found to outperform the competition on yield and cost criteria. The results indicated that the best performance is achieved for a tilt of 40° and an azimuth of 165°. 
A sensitivity analysis was also carried out, based on which a ground clearance height of 1.5 [m] was selected. Furthermore, the results extracted using the modelling tool were cross-validated using the PVMD Toolbox and the System Advisor Model (SAM), showing a satisfactory performance with the maximum deviations being 1.5% and 3%, respectively. Moreover, two different potential loads were studied in conjunction with the solar modules. The first was the 1.25 [GWh] annual demand of the EXACT building, closely located to the investigated area. In this case a grid-connected PV system was designed, with the total amount of modules being calculated as 2,520. The second was an electrolyzer of a nominal capacity of 1.25 [MW], scheduled to be installed at the Process and Energy (P&E) Department of the university. The electrolyzer was assumed to supply hydrogen to two fuel cell buses, and its respective PV system was designed both as grid-independent as well as grid-connected. In the independent approach the total number of required PV modules was found to be 4,000, while in the grid-connected approach the size of the PV system was chosen similar to the one designed for the EXACT building. Moreover, two hydrogen production strategies, a minimum and a maximum, were investigated. The respective produced hydrogen was found to be 9,100 and 20,619 [kg]. The choice of the best business case was based on a performance and cost analysis that was conducted. The conclusion drawn from this analysis was that decentralized PV systems have a better performance due to higher inverter efficiency and lower cable losses, while the centralized approach has a better behaviour cost-wise due to the initial investment being smaller by 2%. Additionally, systems with a higher lifetime yield better results. All in all, the grid-connected, decentralized, electrolyzer-coupled PV system with the maximum hydrogen production strategy was deemed the best business case. 
The initial investment of 1.28 [M€] is won back over a period of 5.1 [yr], with an LCoE of 8.2 [cts €/kWh]. The area required to fit the PV system comprises Zones B1, B2 and B3, which show the greatest potential.","feasibility study; modelling tool; solar park; simulation; electrolyzer; business case; performance analysis; cost analysis","en","master thesis","","","","","","","","2021-10-01","","","","Electrical Engineering | Sustainable Energy Technology","",""
"uuid:feac6e55-43a7-4829-9aea-a9626a01eb63","http://resolver.tudelft.nl/uuid:feac6e55-43a7-4829-9aea-a9626a01eb63","Synchronized quantum network emulator using discrete event simulation","Wubben, Leon (TU Delft Electrical Engineering, Mathematics and Computer Science)","Wehner, Stephanie (mentor); Dahlberg, Axel (mentor); Delft University of Technology (degree granting institution)","2019","","quantum; QNetSquid; simulation; simulaqron; emulation; Quantum information; network; quantum internet; NetSquid; discrete event simulation; synchronization","en","master thesis","","","","","","","","","","","","Computer Science | Software Technology","",""
"uuid:af041e54-7660-4fb1-b68c-0af3aaf27c52","http://resolver.tudelft.nl/uuid:af041e54-7660-4fb1-b68c-0af3aaf27c52","Evaluating SLAM in an urban dynamic environment","van Schouwenburg, Sietse (TU Delft Mechanical, Maritime and Materials Engineering)","Kooij, J.F.P. (mentor); Hehn, T.M. (mentor); Gavrila, D. (graduation committee); Hernández, Carlos (graduation committee); Katsifodimos, A (graduation committee); Epema, D.H.J. (graduation committee); Delft University of Technology (degree granting institution)","2019","Simultaneous Localization And Mapping (SLAM) algorithms provide accurate localization for autonomous vehicles and provide essential information for the path planning module. However, SLAM algorithms assume a static environment in order to estimate a location. This assumption influences the pose estimation in dynamic urban environments. The impact of this assumption on day-to-day scenarios of an intelligent vehicle is unknown. A deeper understanding of the effect of dynamic scenarios in an urban environment could lead to simple and robust solutions for SLAM algorithms in intelligent vehicles. The objective of this research is to develop a methodology that isolates the effect of an urban dynamic environment on the performance of a SLAM algorithm. This requires constant environment conditions, including constant weather conditions, lighting conditions and identical trajectories over time. The methodology is tested with a stereo feature-based V-SLAM algorithm called ORB SLAM [19], which illustrates the in-depth analysis that is possible with this experiment. The main research question is: How does a dynamic urban environment influence the pose estimation accuracy of stereo ORB SLAM? Two specific dynamic scenarios are designed to represent a dynamic urban environment: driving behind another vehicle and vehicles approaching on the other side of the road. 
On these scenarios, an in-depth analysis of ORB SLAM is performed to observe how the algorithm’s design influences the robustness to a dynamic environment. Functions within the algorithm are bypassed to analyze the effect on the performance. Specifically, the place recognition function and map point filtering function are bypassed. The analysis shows which functions assist in the overall robustness to a dynamic environment. Moreover, an analysis is performed of the algorithm in localization mode to research the effect of utilizing maps that were created under different conditions. The knowledge gained from the full analysis can be utilized to improve other V-SLAM algorithms. The experiment is performed in CARLA [6], an open source simulator. CARLA provides an elaborate sensor suite which supports multiple camera setups and LIDAR sensors. Furthermore, the simulator provides free maps which represent realistic urban environments and allows for easy and accurate access to the ground truth position. A setup is designed with the simulator that allows complete isolation of the effect of a dynamic environment. The setup allows full control of lighting conditions and weather conditions, and allows identical trajectories over time in different dynamic scenarios. Each scenario is simulated over several different trajectories, in which the camera images are converted to rosbags. Each variation of the ORB SLAM algorithm is tested on the produced rosbags. The resulting pose estimations in dynamic conditions are compared to the pose estimations made during static conditions to analyze the effect of dynamic scenarios on the performance of the algorithm. The method successfully isolated the effect of a dynamic environment on the performance of stereo ORB SLAM. It allows for a detailed analysis which aids in finding the source of performance differences. In general, stereo ORB SLAM displays robust behavior in a dynamic environment. 
The experiment shows that the algorithm is sensitive to false relocalization when the stereo camera setup is driving 10 meters behind another vehicle for a long period of time. During these conditions, ORB SLAM cannot provide accurate pose estimations even when the place recognition module is deactivated. Furthermore, the map point filtering does increase the robustness in certain dynamic scenarios. Finally, the data suggests that utilizing maps created in different conditions does influence the pose estimation in localization mode. However, more data is needed to confirm these results. The methodology has proven its value for in-depth analysis of a SLAM algorithm's robustness to an urban dynamic environment. This experiment is not limited to ORB SLAM but could be utilized for other monocular and stereo V-SLAM methods, as well as LIDAR-based methods. New solutions can be developed to increase robustness to a dynamic environment and tested on the same rosbags. This methodology could be an important tool for the development of SLAM algorithms for intelligent vehicles.","SLAM; simulation; computer vision; simultaneous localization and mapping; localization; mapping; visual SLAM; ORB SLAM; CARLA","en","master thesis","","","","","","","","","","","","Mechanical Engineering","",""
"uuid:7a556b44-9a4d-4573-b403-62f3a552845f","http://resolver.tudelft.nl/uuid:7a556b44-9a4d-4573-b403-62f3a552845f","Geography based bi-facial cell design for low LCoE","Ramesh, Santhosh (TU Delft Electrical Engineering, Mathematics and Computer Science)","Weeber, Arthur (mentor); Janssen, Gaby (graduation committee); Delft University of Technology (degree granting institution)","2019","The Levelized Cost of Electricity (LCoE) produced by PV systems is determined by yield (kWh) and cost of the system. Reducing the LCoE of the solar power can be achieved either by increasing the yield or by reducing the cost. The yield of bi-facial PV systems is promoted by high efficiency and a high bi-faciality factor.
Yield due to the front efficiency depends on the parameters ($V_{oc}$, $I_{sc}$ and $FF$) that contribute to that efficiency. It was found that a different parameter helped maximize the yield in each climatic condition. For low-irradiation and low-operating-temperature zones, yield improved when the product $I_{sc} \times V_{oc}$ was increased at the cost of the $FF$. At equatorial tropical climates with fairly high temperatures, yield improved when $V_{oc}$ was increased at the cost of $I_{sc}$ and $FF$. For high-irradiance, high-temperature desert climates, yield improved when the product $V_{oc} \times FF$ increased at the cost of $I_{sc}$. Designing cells to suit the operating conditions of the region improved yield per $W_{p}$, thereby reducing the LCoE.
A large part of the cell processing cost is in the metal (silver) used on the cell. The amount of silver is usually optimized for the cell efficiency (i.e. the power in W) delivered under standard test conditions, i.e. a solar irradiance of 1000 W/m2. When the metal patterns were optimized for the yield at a climatic zone, results showed that up to 50\% of the silver per cell could be saved (relative to the reference cell considered). Up to 5\% LCoE improvement was theorized.
The irradiance on bi-facial modules varies with different system orientations (equator-facing, east-west tilted, east-west vertical). The metal patterns were also optimized for the different system orientations at a climatic condition. The results showed that metal patterns can be made thinner when designed for vertical systems.
Advanced c-Si cell concepts try to reach the theoretical efficiency by employing different passivation technologies, grid patterns, etc. Each cell technology has advantages over the others. This makes it interesting to study whether a cell concept can be attributed to the climatic condition where it will outperform the other cell concepts.","Bifacial; yield maximization; simulation; metal pattern optimization; PV Systems","en","master thesis","","","","","","","","2019-12-31","","","","","",""
"uuid:20478016-cc7d-4c87-aa12-25b46f511277","http://resolver.tudelft.nl/uuid:20478016-cc7d-4c87-aa12-25b46f511277","A Systematic Design Space Exploration of Datacenter Schedulers","Mastenbroek, Fabian (TU Delft Electrical Engineering, Mathematics and Computer Science); Andreadis, Georgios (TU Delft Electrical Engineering, Mathematics and Computer Science)","Iosup, A. (mentor); Delft University of Technology (degree granting institution)","2019","Datacenter infrastructure has become vital for stakeholders across industry, academia and government. To operate efficiently, datacenter operators rely on a variety of complex scheduling techniques, to distribute user workloads across resources. In this work, we leverage a reference architecture for datacenter scheduling to design and implement an instrument for systematic design space exploration of datacenter schedulers. We construct a formal representation of the design space for datacenter schedulers, using scheduling policies collected from real-world schedulers. We then use a genetic algorithm in combination with trace-based simulation to explore the space, optimizing for workload metrics. Through several experiments, we assess the viability of the instrument. We find that our instrument is able to identify patterns in the workloads and adapt the scheduling policies appropriately. Overall, our work leads to numerous findings, which can become valuable for future comprehension and development of schedulers.","cloud computing; design space exploration; reference architecture; genetic algorithm; simulation","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:2fe8fa3f-b4f9-45ea-a8c3-912aa433ba87","http://resolver.tudelft.nl/uuid:2fe8fa3f-b4f9-45ea-a8c3-912aa433ba87","Radar Micro-Doppler Patterns for Drone's Characterisation","Cai, Yefeng (TU Delft Electrical Engineering, Mathematics and Computer Science)","Yarovyi, Olexander (mentor); Krasnov, Oleg (graduation committee); Delft University of Technology (degree granting institution)","2019","Micro-Doppler patterns of multi-propeller drones measured by radar systems are widely used in the classification of different drones, since the micro-Doppler patterns illustrate the velocity and motion properties of the drones. However, on this topic there are a few issues that current research has not yet tackled, and these are discussed in this thesis. The first is the lack of a mathematical description of the micro-Doppler patterns, as most research so far is based on real measured data. In this thesis, an EM backscattering model in the HH plane of a drone propeller is developed, simplifying the propeller’s geometry to a few thin cylindrical wires. A radar signal model and a micro-Doppler model are subsequently developed for the rotating thin-wire propeller model. Second, most current research focuses on micro-Doppler patterns obtained in short-CPI cases, which are valid for radar systems with a PRI much shorter than the rotation period of the drone’s propellers. In this thesis, drone micro-Doppler patterns in long-CPI circumstances are investigated. Features are proposed to characterise the amplitude and frequency distribution of the simulated micro-Doppler spectrum. Applying these features to an SVM gives good classification accuracy for the simulated micro-Doppler data. Third, most research at present is carried out at short range for static or stably hovering drones, while from a practical point of view it is also of great interest to investigate drone micro-Doppler patterns in long-range and dynamic scenarios. 
In this thesis, the micro-Doppler patterns of different drones at a distance of 9 kilometres are obtained by an S-band radar in long-CPI circumstances. Applying the previously proposed features, extracted from the real measured micro-Doppler spectra, to an SVM gives good classification accuracy for drones in hovering and manoeuvring flight modes.","radar; micro-Doppler; drone; simulation; long CPI","en","master thesis","","","","","","","","","","","","Electrical Engineering | Signals and Systems","",""
"uuid:32dff3b9-36d5-470b-a4db-f0691b916df7","http://resolver.tudelft.nl/uuid:32dff3b9-36d5-470b-a4db-f0691b916df7","Autonomous Mobility on-Demand in urban areas: A Rotterdam-Zuid case study","Stevens, Martijn (TU Delft Civil Engineering & Geosciences)","van Arem, B. (mentor); Correia, Gonçalo (mentor); Annema, J.A. (graduation committee); Scheltes, Arthur (mentor); Delft University of Technology (degree granting institution)","2019","Due to connectivity problems, the attractiveness of public transport is limited. Policymakers aim to increase the modal share of public transport to protect the accessibility, livability, safety, sustainability and efficiency of the cities of the future. Applying Autonomous Mobility on-Demand (AMoD) systems as a feeder service for public transport hubs can improve the first- and last-mile trip legs, increasing the attractiveness of public transport. It is essential for the implementation of AMoD systems to predict the impacts of varying operational strategies beforehand. From an operator’s perspective, especially the financial viability of AMoD operations is vital and yet unclear. An existing gravity-based travel demand estimation model built in OmniTRANS is used to predict the AMoD passenger demand. In addition, an agent-based simulation model is developed using the software Anylogic, which is connected to the demand model as an add-on module to simulate the behavior of passengers and AMoD vehicles within an urban environment. The agent-based simulation model is applied to the Rotterdam-Zuid case study, where Station Zuidplein and Station Lombardijen function as AMoD hubs. The simulation outputs show that activating dynamic ridesharing using wireless fast chargers at the stations results in the most financially viable operation. Activating automatic relocation results in the most costly operation. 
Compared to existing public transport services, carsharing systems and taxi systems, the AMoD system shows to save a large amount of expenses due to the absence of drivers.","Agent-based modeling; simulation; Transport Demand Modelling; Autonomous vehicles; Shared mobility; Public Transport","en","master thesis","","","","","","","","","","","","Civil Engineering | Transport and Planning","STAD",""
"uuid:be8b955a-dfc8-4265-abef-d230e378a607","http://resolver.tudelft.nl/uuid:be8b955a-dfc8-4265-abef-d230e378a607","Working towards a fast prompt gamma emission simulator based on the Boltzmann Equation","Pols, Willemijn (TU Delft Mechanical, Maritime and Materials Engineering)","Lathouwers, D. (mentor); Lens, E. (mentor); Delft University of Technology (degree granting institution)","2018","Range verification based on prompt gamma detection is an important step to improve dose control for proton therapy. To deduce the proton range from the detected prompt gamma emission, a prediction of the measured profile is required. This study introduces the Boltzmann solver as a faster alternative to the Monte Carlo simulations to produce dose distributions and prompt gamma source terms from proton therapy treatment plans.","proton therapy; boltzmann equation; radiotherapy; simulation","en","master thesis","","","","","","","","2023-10-23","","","","","",""
"uuid:71ec065f-cec0-4b67-a8cd-90882c3a86a7","http://resolver.tudelft.nl/uuid:71ec065f-cec0-4b67-a8cd-90882c3a86a7","Evaluating cooperation policies for rail utilization in the port to hinterland freight transport system: A combined method approach","Karampelas, Dimitrios (TU Delft Civil Engineering and Geosciences)","Tavasszy, Lóri (mentor); Maknoon, Yousef (mentor); Duinkerken, Mark (mentor); Kourounioti, Ioanna (mentor); Delft University of Technology (degree granting institution)","2018","As the margins for improvement in the current freight transport system become limited, researchers increasingly address the importance of collaboration between actors, which is crucial for the implementation of new, more efficient transport concepts such as synchromodality. In addition, rail is considered a sustainable mode of transport that can also achieve economies of scale due to its ability to haul large quantities of goods. This study investigates cooperation policies that affect actors’ behavior, to improve rail utilization and lead to a more efficient system. We propose an innovative approach that combines gaming, simulation and optimization as a mixed method to test and evaluate these policies. The port to hinterland freight transportation system in the range of the Port of Rotterdam is used as a case study. First, gaming sessions are organized in order to observe actors’ behavior and collect data. The game that is used was initiated by the Port of Rotterdam, specifically to identify the problems in this system. Subsequently, by assessing the observed data, a simulation model is developed and different policy scenarios are simulated to quantify their performance. In addition, an optimization model is developed, which sets the upper bound on performance and is used as a solid basis for comparison between the policy alternatives. 
Finally, explaining the difference between the policies’ performance and the optimized performance can give insight into the root causes of the inefficiency, the best allocation of resources and where solutions should be focused.","cooperation; Policies; Collaboration; Freight Transport; Port; hinterland; gaming; simulation; Optimization","en","master thesis","","","","","","","","","","","","Transport, Infrastructure and Logistics","",""
"uuid:ba70bc56-956e-4b8e-b532-d68842c1c830","http://resolver.tudelft.nl/uuid:ba70bc56-956e-4b8e-b532-d68842c1c830","POSUM: A Generic Portfolio Scheduler for MapReduce Workloads","Voinea, Maria A. (TU Delft Electrical Engineering, Mathematics and Computer Science)","Iosup, Alexandru (mentor); Uta, Alexandru (graduation committee); Delft University of Technology (degree granting institution)","2018","MapReduce ecosystems are (still) widely popular for big data processing in data centers. To address the diverse non-functional requirements arising from many and increasingly more sophisticated users, the community has developed many scheduling policies for MapReduce workloads. Although some individual policies can dynamically optimize for single and stable performance objectives, such as minimizing runtime or cost, or meeting deadlines for real-time jobs, it seems unlikely that individual policies will remain competitive for increasingly more dynamic workloads and objectives. In contrast, in this work we investigate the ability of a portfolio scheduler for MapReduce workloads to dynamically balance performance and cost. To this end, we design and implement a portfolio scheduling technique, that is, a system capable of adapting to the current workload characteristics and target objectives by periodically evaluating its set of potential policies, and of switching to ""the best"" policy that targets the current system state. We implement and evaluate our system with real-world experiments on a workload containing a mixture of real-time and batch jobs, with the purpose of minimizing deadline violations while keeping batch job slowdown in check. Our results show that POSUM is a promising alternative: it can predict map task runtimes accurately by calculating average input processing rates, while reduce tasks need a more complex model that accounts for an application-dependent component and variability. 
However, even without precise predictions, the proposed system can outperform the individual policies of its portfolio for the combined optimization goal.","MapReduce; portfolio scheduling; data center; prediction; simulation; provisioning and allocation; scheduling policies; Hadoop YARN","en","master thesis","","","","","","","","","","","","","",""
"uuid:84bb46d8-9ff7-42eb-9972-d7674a8c413f","http://resolver.tudelft.nl/uuid:84bb46d8-9ff7-42eb-9972-d7674a8c413f","Portable, Neonatal, Continuous Positive Airway Pressure Device for Low-Resource Settings: Evaluation of Feasibility through Simulation and Prototyping","Loe, Kate (TU Delft Mechanical, Maritime and Materials Engineering)","Dankelman, Jenny (mentor); de Visser, Coen (graduation committee); Oosting, Roos (graduation committee); Goos, Tom (graduation committee); Neighbour, Robert (graduation committee); Delft University of Technology (degree granting institution)","2018","An estimated 9 million infants are born prematurely each year in south Asia and sub-Saharan Africa, and the leading cause of death in preterms is respiratory distress syndrome (RDS). Continuous positive airway pressure (CPAP) is a popular treatment for RDS and has been proven to be safe, feasible, and effective for use in low- and middle-income countries (LMICs). The well-documented success of supportive CPAP in LMICs and prophylactic CPAP in developed countries indicates that delivery room CPAP has the potential to be implemented successfully in LMICs. The aim of this thesis is to explore the feasibility of a simple, low-cost, portable neonatal CPAP device for use in the delivery room in LMICs. A portable CPAP device was modelled in Simulink to predict the pressure and flowrate at any point in the CPAP circuit. A prototype composed of a centrifugal fan, silicone tubing, nasal cannula, and a PEEP valve was constructed. The prototype was tested using a Dräger Infant Test Lung to simulate a breathing neonate. The model predicted that neonates with higher peak inspiratory flows risked rebreathing exhaled gas. When compared to the experimental data, it was determined that the model underestimated resistance in the circuit and overestimated the mean pressure delivered to the patient. 
The prototype effectively delivered a positive pressure to the simulated patient; however, the pressure was not consistent across all experimental conditions. Cannula type, amount of leak, and breathing pattern all impacted the treatment delivered. The Simulink model can be used as a tool to aid in design decisions, but is not highly accurate, and thus does not eliminate the need for practical experimentation. The prototype was a good proof-of-concept and should be investigated further in consultation with clinicians.","continuous positive airway pressure; preterm; low resource settings; simulation; prototype","en","master thesis","","","","","","","","","","","","Biomedical Engineering","",""
"uuid:8734fec7-abdd-4740-ba19-a6763147f19d","http://resolver.tudelft.nl/uuid:8734fec7-abdd-4740-ba19-a6763147f19d","Pressure filtration of milk fat slurries: Development, validation and predictions of a mathematical model","Hazelhoff Heeres, Doedo (TU Delft Applied Sciences; TU Delft ChemE/Chemical Engineering)","van den Akker, H.E.A. (mentor); Kloek, W. (mentor); Delft University of Technology (degree granting institution)","2018","In this study, a pressure filtration model for a slurry of milk fat crystal aggregates is developed, validated and used to investigate the effect of pressure-time profiles. The model focuses on the expression step and describes oil flow locally. The filter cake is modelled as a double porous non-linear elastic medium with permeabilities described by the relation of Meyer & Smith. Conservation equations lead to a coupled system of differential equations, which are numerically solved exploiting a finite-difference scheme. Simulations with the model give insight through graphs of volume fractions versus filter chamber location at any given time step. Diagrams of oil outflow velocities and solid fat content of produced filter cakes show qualitatively good behaviour when compared to experiments. Studying the effect of pressure-time profiles, the model predicts that a low rate of pressure increase gives the driest filter cakes. Simulations also indicate that putting steps in pressure-time profiles is hardly effective.","Pressure filtration; milk fat; expression; mathematical model; simulation; pressure-time profile","en","master thesis","","","","","","","","","","","","Applied Physics","Transport Phenomena and Fluid Flow",""
"uuid:ee75fe8e-7265-4b40-87ed-113f148fa75c","http://resolver.tudelft.nl/uuid:ee75fe8e-7265-4b40-87ed-113f148fa75c","Design of an Aberration Correction System for a Deployable Space Telescope","van Marrewijk, Gijsbert (TU Delft Aerospace Engineering)","Kuiper, J.M. (mentor); Dolkens, D. (mentor); Delft University of Technology (degree granting institution)","2018","Launch costs for high-resolution space telescopes for Earth observation can be reduced when the telescope mirrors are made deployable. However, such a system is subject to optical aberrations that decrease image quality. To counter these aberrations, an Aberration Correction System (ACS) is proposed that uses a deformable mirror (DM) which is calibrated by applying image sharpness optimisation. A stochastic gradient descent algorithm is applied to the output of two image detectors, such that the DM deformation can even be optimised during in-orbit scanning operations, without the need for a dedicated wavefront sensor. The effects of different sharpness metrics and algorithm settings have been analysed. With this novel control method, an average Strehl ratio above 0.9 and 0.8 can be achieved on the central field and extreme field of the primary detector, respectively. Also, in-orbit drift effects can be actively compensated without interrupting nominal operations.","Deformable mirror; image sharpness; stochastic gradient descent; space telescope; Earth observation; machine learning; monomorph mirror; ray tracing; simulation","en","master thesis","","","","","","","","2019-05-07","","","","Aerospace Engineering | Space Systems Engineering","Deployable Space Telescope",""
"uuid:eb4659b4-1c37-4095-992b-e5942903d45d","http://resolver.tudelft.nl/uuid:eb4659b4-1c37-4095-992b-e5942903d45d","Space Modders: Architects, Game Developers and Gamers","Kypriotakis-Weijers, Alex (TU Delft Architecture and the Built Environment)","Kousoulas, Stavros (mentor); Delft University of Technology (degree granting institution)","2018","This paper will outline the connections between videogames and architecture as a form of representing and experiencing physical or digital space and their potential in participatory design. By analyzing communicative and expressive patterns in the videogame community I attempt to find links between architects, game developers and gamers. The ambition is to create an initial framework of how gamification elements can be implemented in the design process and promote commitment and engagement with the public. Keywords: gamification, videogames, participatory design, game developers, architecture, modders, simulation","gamification; videogames; participatory design; game design; architecture; modding; simulation","en","student report","","","","","","","","","","","","Architecture, Urbanism and Building Sciences","",""
"uuid:d20ab3d6-a63d-41b2-b74d-198a3f3f44c5","http://resolver.tudelft.nl/uuid:d20ab3d6-a63d-41b2-b74d-198a3f3f44c5","Uncertainty Quantification Based on Hierarchical Representation of Fractured Reservoirs","Sartori Suarez, Andrea (TU Delft Civil Engineering and Geosciences; TU Delft Geoscience and Engineering)","Voskov, D.V. (mentor); Delft University of Technology (degree granting institution)","2018","In the modeling of fractured reservoirs, the spatial representation plays an important role in capturing the heterogeneities present in the subsurface. The reservoir flow response obtained from simulation in time is determined by the type of method and by the scale of the representation (fine or coarse). As a deterministic model cannot capture the range of possible scenarios, a set of different realizations associated with an ensemble is required to evaluate the variability of flow responses. The challenge is to determine whether the coarse-scale simulations, practical in terms of performance, can capture the variability present in the set of realizations. This thesis attempts to quantify the uncertainty of different hierarchical levels for fractured reservoirs and determine the possibility of using coarse-scale simulations in uncertainty quantification.
The uncertainties associated with the flow response were estimated via clustering to a representative subgroup for each ensemble and using the Multi-Dimensional Scaling (MDS) distance technique for analysis. Uncertainty trajectories were built in order to analyze the effect of coarsening on the spread in flow response. An advanced framework was built which allows generating an ensemble of realizations with different fracture distributions and connectivity. This framework provides a stable numerical model with a conformal unstructured grid for the Discrete Fracture and Matrix (DFM) model at high resolution. Based on this high-fidelity model, we created a coarser DFM representation and numerically upscaled it to EDFM models to obtain flow responses at different scales.
It is shown that the representatives of the ensemble at coarser scale do not behave similarly to the finest-scale ensemble solution. While the intermediate coarse levels show an accurate flow response in some realizations, this does not hold for all of them. We demonstrate that MDS analysis can help in estimating the accuracy of flow responses at different scales. However, more practical applications of MDS still need to be developed.","Fractured reservoirs; simulation; ADGPRS; DFM; upscaling; EDFM; hierarchical ensemble; clustering; Multi-Dimensional Scaling; uncertainty quantification","en","master thesis","","","","","","","","","","","","Petroleum Engineering and Geo-sciences","",""
"uuid:2a625496-e85d-4207-8d6b-0bd06565fdf9","http://resolver.tudelft.nl/uuid:2a625496-e85d-4207-8d6b-0bd06565fdf9","Assessment of Benefits and Drawbacks of ICN for IoT Applications","Drijver, Floris (TU Delft Electrical Engineering, Mathematics and Computer Science)","Litjens, Remco (mentor); Kuipers, Fernando (graduation committee); d' Acunto, Lucia (mentor); Trichias, Kostas (mentor); Delft University of Technology (degree granting institution)","2018","According to its creators, ICN is designed to fit the way we use the internet better than IP currently does. The use of named data and distributed network-layer caching may provide more efficient utilization of network resources due to the stateful forwarding plane, which allows data to be retrieved from caches close to the requester, while also providing higher content delivery performance in terms of content retrieval delay. Since the IoT is expected to connect billions of devices to the internet, a resource-efficient network paradigm is needed to cope with the corresponding enormous traffic increase. IoT deployments also typically follow a distributed data generation and retrieval paradigm, which could benefit from ICN’s in-network caching approach and stateful forwarding logic. This thesis focuses on assessing whether ICN is advantageous for the IoT in these aspects, by comparing an ICN approach to an IP approach for IoT applications.","ICN; IoT; NS-3; simulation; comparison; 6LoWPAN; NDN; CoAP; IP; IPv6","en","master thesis","","","","","","","","","","","","Electrical Engineering | Network Architectures and Services","",""
"uuid:636bdd6f-90be-4163-bf12-3f935585cf4e","http://resolver.tudelft.nl/uuid:636bdd6f-90be-4163-bf12-3f935585cf4e","Probabilistic downtime analysis for complex marine projects: Development of a modular Markov model that generates binary workability sequences for sequential marine operations","Bruijn, Willem (TU Delft Civil Engineering and Geosciences; TU Delft Hydraulic Engineering)","Jonkman, Sebastiaan N. (mentor); van Gelder, Pieter (mentor); Morales Napoles, Oswaldo (mentor); Hendriks, A.J.H. (mentor); Delft University of Technology (degree granting institution)","2017","A complex marine project consists of a series of operations, with each operation subject to a predefined operational limit and duration, depending on the equipment being used. If actual weather conditions exceed the operational limit, then the operation cannot be executed and hence downtime occurs. It is up to contractors, such as Boskalis, to accurately estimate the expected downtime in order to determine the project costs. Recently, a new tool has been developed to make downtime assessments using Markov theory: the so-called `Downtime-Modular-Markov model' (DMM-model). It abstracts the actual metocean conditions by stochastically producing binary `workability sequences' for each operation, where a distinction is made between workable and non-workable states given an operational limit. The Markov statistics of the model are based on the characteristics of the observed metocean conditions. Complex marine project simulations are realizable based on these statistics. The purpose of this thesis is to develop the DMM-model, for which a software-testing process is applied. In the verification phase, the concept and the code of the model are checked for correctness, consistency and completeness. Subsequently, the validation phase addresses the quality of the model. 
Three different metocean datasets are used to test whether the model and its individual modules perform sufficiently accurately. The most important findings of both phases are tackled in the improvement & extension phase. Adjustments made during this last phase bring the DMM-model to a new state of the art. It is recommended for further study to conduct an uncertainty analysis (quantifying the model and parametric uncertainty).","Complex marine project; operation; operational limit; downtime; Markov theory; Downtime-Modular-Markov model; workability sequences; simulation; software-testing; verification; validation; improvement; extension","en","master thesis","","","","","","","","2018-10-20","","","","","",""
"uuid:60ac8b5d-789a-4108-9b3f-da109efc07ca","http://resolver.tudelft.nl/uuid:60ac8b5d-789a-4108-9b3f-da109efc07ca","Simulation model to assess the effective capacity of the wet infrastructure of a port","Macquart, Aubin (TU Delft Civil Engineering & Geosciences; TU Delft Hydraulic Engineering)","Vellinga, T. (mentor); van Dorsser, Cornelis (graduation committee); Daamen, W. (graduation committee); Bijlsma, Rienk (graduation committee); Delft University of Technology (degree granting institution)","2017","Port planning is a complex multidisciplinary subject. To fulfill its functions, it is essential that the different elements of a port work together. The full potential of a terminal can only be reached when the wet infrastructure of a port (access channel, inner basins, turning circles) can keep up with the traffic load. From the literature study, it has become apparent that simulation tools have become increasingly popular for assessing the capacity of ports and waterways. However, the application has often been aimed at a specific case study, and the existing models are not easily reusable for new applications. In this master thesis project, the assessment through a generic simulation tool of the effective capacity of the wet infrastructure of a port is investigated. The model considers the processes taking place from the point a vessel arrives at the entrance of the access channel until the start of the (un)loading procedures, and from departure until exiting the access channel. The analysis capabilities of the model are demonstrated by studying the Port of Hazira. The main processes of the model relate to vessels obtaining authorisation to sail towards a destination. To receive this authorisation, the vessels must find a moment when the correct weather conditions occur, the tidal elevation is adequate, the waterways are available and sufficient quay length is available. 
The authorisation is given in a dynamic way, depending on the dimensions of the vessels, waterways and quays. Based on their origin and destination, vessels determine their route from the shortest available path, using the waterways available to their vessel type. Once this is done, a vessel constructs a sailing plan by finding a suitable timeslot to pass through each section of the port. When doing so, a vessel takes into account the sailing plans of other vessels and the sailing rules that apply for each section. As a result, a vessel can construct a suitable sailing plan based on an origin and destination, which can be applied to any port layout. During this study, it has become apparent that many processes should be included to properly determine the capacity of a port. Simulation software offers the possibility of including all these processes and observing their interactions in order to locate bottlenecks more efficiently. Simio has proven able to incorporate all the required processes to properly model the wet infrastructure of a port. However, it does not offer a user-friendly interface to handle different scenarios and facilitate the handling of both the input and output of the model. To make this easier, an interface has been created with Scenario Navigator. This interface enables storing and comparing the input parameters and results of different scenarios.","port planning; simulation; Simio; marine operations; logistics","en","master thesis","","","","","","","","","","","","","",""
"uuid:0caecfdb-85ec-4f5f-b5a9-cc98dcb9722a","http://resolver.tudelft.nl/uuid:0caecfdb-85ec-4f5f-b5a9-cc98dcb9722a","Simulations of steady and oscillating flow in diffusers","Schoenmaker, Lars (TU Delft Mechanical, Maritime and Materials Engineering)","Boersma, B.J. (mentor); Pourquie, M.J.B.M. (graduation committee); Pecnik, R. (graduation committee); Delft University of Technology (degree granting institution)","2017","In this thesis, diffuser performance is simulated with computational fluid dynamics using the program Ansys Fluent. The expanding geometry creates an adverse pressure gradient, and under certain conditions there will also be separation.
First, a relatively simple simulation is performed of a one-directional incompressible turbulent flow. It shows how different turbulence models perform at a Reynolds number of 15,000 inside a conical diffuser with an angle of 2θ=8°; no separation is expected in this geometry. The turbulence models investigated are the k-ε, the k-ω and the RSM model. The mesh refinement is tested for its effect on the accuracy of the results. The final results of the simulations are compared with both DNS (direct numerical simulation) and experimental results. All models produce different behavior, due to the transport equations, algebraic models and empirical constants used. The deviations are dependent on geometry and flow conditions, where certain turbulence models are better for specific cases. The flow behavior of the k-ω model was the most realistic, due to the correct core velocity, although it showed flow reversal at the wall. Various parameters are reviewed, such as velocity profile, flow reversal, pressure coefficient, friction coefficient and turbulent statistics.
The oscillating flow represents a case which is closer to the flow seen inside a thermoacoustic engine. The geometry is a rectangular diffuser and the flow is compressible. It is a much more complex flow, with frequencies ranging from 6-21 Hz during the simulations. Laminar, transitional and turbulent cases are simulated at Reδ numbers of 380, 580 and 740 with varying displacement amplitudes. The transitional k-kl-ω model is used, because of its ability to also simulate laminar and transitional cases besides only turbulent cases. Additionally, the k-ω SST model is tested. The velocity profiles are not simulated well with the k-kl-ω model, which was caused by the underestimation of simulated turbulence near the wall. It was found that, for both models, separation is induced early and low in the diffuser and expands downstream as the cycle passes. As a result, the Reynolds shear stresses show higher values earlier in the cycle. It was also seen that the reattachment differs in pattern. The trend is found that separation begins earlier with increasing Reynolds number and increasing displacement amplitude. The minor losses, or irreversibilities, vary in accuracy; the effect of the displacement amplitude is not always seen for variables which are dependent on the magnitude of the pressure. In addition, turbulence does not show an increase at the point of transition compared to turbulent cases. Both models deliver deviating results, but the k-ω SST model captures the cases better.
Qi ~ 10^5 up to perpendicular fields of 35 mT. By application of these resonators, we have described the successful continuous-wave qubit spectroscopy of a graphene transmon qubit at B|| = 1 T with a minimal linewidth of 166 MHz, and demonstrated manipulation of the qubit frequency between 3.2 and 7 GHz with an electric field. This is the first measured superconducting qubit to show these properties at a magnetic field of 1 T.","topological; quantum computing; graphene; superconducting; majorana; measurement; qubit; superconducting qubit; qubit spectroscopy; microwave resonator; coplanar waveguide; magnetic field; cQED; circuit quantum electrodynamics; vortex pinning; Abrikosov; parity; readout; transmon; superconducting transmon; finite element analysis; simulation; CST; time domain","en","master thesis","","","","","","","","","","","","Applied Physics","",""
"uuid:30751226-0e05-442a-98d4-f3a9d4b4933b","http://resolver.tudelft.nl/uuid:30751226-0e05-442a-98d4-f3a9d4b4933b","A Hydraulics Simulation System: Using a Hardware In the Loop approach","Zaccà, Gabriele (TU Delft Electrical Engineering, Mathematics and Computer Science); van Rijn, Joey (TU Delft Electrical Engineering, Mathematics and Computer Science)","van der Meijs, Nick (mentor); Delft University of Technology (degree granting institution)","2017","","hydraulics; HIL; hardware in the loop; simulation","en","bachelor thesis","","","","","","","","2020-07-03","","","","","",""
"uuid:d5f7b0d1-1fd7-4fb1-a3b7-0756e7ae165e","http://resolver.tudelft.nl/uuid:d5f7b0d1-1fd7-4fb1-a3b7-0756e7ae165e","New Data Sources in Road Infrastructure Management: A game-based experiment into the effects of new data sources on condition assessment and decision-making within the operations and maintenance phase of asphalt paved road infrastructures","Düzgün, Baris (TU Delft Civil Engineering and Geosciences; TU Delft Engineering Structures)","Wolfert, Rogier (mentor); Schraven, Daan (mentor); Brous, Paul (mentor); van de Ruitenbeek, Martinus (mentor); Delft University of Technology (degree granting institution)","2017","Professionals in the Operations and Maintenance phase of national road infrastructure projects are making decisions with large consequences based on low-frequency measurements and subjectivity-prone expert observations. A solution is expected from sensor innovations, IoT and User Generated Data, which can generate more frequent measurements and less subjective observations. The use of these methods has been researched and proven; however, the main focus of these studies was often improvement of the technological capabilities or implementation in current practises, with limited research into their contribution and effects on professionals in the construction sector. This research describes an experiment that tested the effects of more data and more diverse data on the decision-making of professionals in the construction industry. An attempt to answer this question is made by modelling different data sources into a Serious Game and testing the assumptions. After analysis of the gaming data, the questionnaires and the debriefing, it can be concluded that in this experiment there was a correlation between better assessments and higher scores. However, experts did not assess damage differently when presented with extra information, nor did they make significantly different decisions. From the qualitative section of the experiment, the explanation was found that the extra information proved too much, and that experts were able to extrapolate from the marginal data that represents current industry practise. 
This suggests that new information requires training; as our built environment gets richer in terms of data, the assessment of this built environment becomes too much for humans to cope with, and solutions can be sought in the application of smart algorithms, Machine Learning and Artificial Intelligence.","User Generated Data; Crowdsourcing; Big Data; Smart Asset Management; Object Generated Data; Asset Maintenance; Road Infrastructures; Internet of things; Game-based simulation experiment; experiment; simulation; serious gaming","en","master thesis","","","","","","","","","","","","","",""
"uuid:b012a2e8-7ae0-4b26-aed8-8c32df1aff6e","http://resolver.tudelft.nl/uuid:b012a2e8-7ae0-4b26-aed8-8c32df1aff6e","Strain Rate Dependent 3-Point Bending Test and Simulation of a Unidirectional Carbon/Epoxy Composite","Righi, R.","Kassapoglou, C. (mentor)","2017","In order to use composite materials in automotive production while ensuring passenger safety, it is crucial to fully understand the strain rate effect on their mechanical properties. This study implements a strain rate dependent 3-point bending test setup for a high-rate servo-hydraulic testing machine. A finite element model to simulate strain rate dependent 3-point bending tests has been created and used for the validation of material cards that consider strain rate dependent material properties. The test setup developed allows the correct characterisation of the strain rate dependent 3-point bending tests. The study also demonstrates that strain rate effects can be successfully considered in finite element simulations.","Strain rate; 3-point bending; impact; FEM; material card; simulation; digital image correlation","en","master thesis","","","","","","","","2022-03-20","Aerospace Engineering","Aerospace Structures & Materials","","","",""
"uuid:9911251a-7f9d-4c45-bc4b-e7fe943e3019","http://resolver.tudelft.nl/uuid:9911251a-7f9d-4c45-bc4b-e7fe943e3019","Heave reduction of the crane tip of a J-class vessel by inducing a roll moment using active ballast systems","De Jonge, J.S.","Huijsmans, R.H.M. (mentor)","2017","To compensate the heaving motion of a payload during deep water offshore installations, hanging from an on-board crane on a monohull vessel, a new concept is being investigated. By actively rolling the vessel using ballasting systems, the coupled heave motion of the crane tip can be reduced significantly. This is investigated by building a computer simulation in Matlab/Simulink and testing for multiple wave patterns. Two ballast types are proposed for the motion control: transversely moving a solid mass on deck, or pumping ballast water between the ballast wing tanks. Furthermore, both systems were tested at two different ballasting speeds. The results from the simulations showed that, for the wave patterns used during the simulations, the addition of an active ballast system to counter the heave of the crane tip is beneficial, and results in reductions in the heave amplitude of the crane tip of up to 90% when using an active ballast tank system.","simulation; ship motion; motion control; Matlab; Simulink; ballast systems","en","master thesis","","","","","","","","2022-02-06","Mechanical, Maritime and Materials Engineering","Maritime and Transport Technology","","","",""
"uuid:5321a5d4-ab40-4403-b09c-70c617abfc77","http://resolver.tudelft.nl/uuid:5321a5d4-ab40-4403-b09c-70c617abfc77","Optimization of Island Electricity System: Transition to a sustainable electricity supply system on islands through the implementation of a hybrid system including ocean energy technologies","van Velzen, L.","Blok, K. (mentor)","2017","Climate change without adequate countermeasures has become one of humanity's greatest threats. Energy production by means of renewable energy sources is therefore one of the crucial measures that will play a paramount role in reducing the pollutant emissions of fossil fuel dependency. Small islands in particular are an exemplary case of extraordinary dependence on oil, their energy systems often being entirely dependent on diesel generators. The relatively high cost of sustaining this practice, in combination with the geoeconomic properties of islands, provides a unique incentive for the transition to renewable energy. By definition, islands are surrounded by water, making them highly vulnerable to the effects of climate change. Yet the surrounding water also provides a vast set of possibilities: harnessing energy from waves, tides and the difference in seawater temperatures (OTEC) are just some of the examples. In this thesis, the effect of ocean energy integration is investigated. A simulation and optimization model of the electricity supply system is developed, and a multi-objective genetic algorithm optimization regarding cost (LCOE) and renewable energy integration is performed. The model covers PV solar, wind, tidal, wave and OTEC energy, as well as battery storage, as components of a renewable energy system. The resulting model is applied to two case study islands (Shetland and Aruba), and the effect of the hybrid system including ocean energy technologies is determined. The cost optimal system was found to produce energy with an LCOE below the conventional fossil fuel energy cost.
This corresponds to a renewable energy share of approximately 65%, consisting solely of wind energy. The cost was determined to have a significant influence on the system configuration. Currently, due to the high cost of energy resulting from their pre-commercial stage, ocean energy sources are only added to the energy mix at high renewable energy shares (above 75% renewable coverage). The hybrid systems including the ocean energy sources displayed an evenly spread energy production. Based on this study, the integration of ocean energy provides an encouraging outlook, although costs will need to be reduced further for ocean energy to become economically viable. With the right investments in ocean energy, this process can be accelerated.","Ocean energy; renewable energy; electricity system; optimization; simulation","en","master thesis","","","","","","","","","Technology, Policy and Management","Engineering, Systems and Services","","","",""
"uuid:245b8941-08f3-4322-a442-d50e174e3c05","http://resolver.tudelft.nl/uuid:245b8941-08f3-4322-a442-d50e174e3c05","Determination of the attitude state of space debris objects based on Satellite Laser Ranging, using Envisat as a test case","Lagaune, B.F.","Doornbos, E.N. (mentor)","2016","The attitude state of the passive Envisat satellite (ESA) has been estimated before using various techniques such as Satellite Laser Ranging, radar and light-curves. This research focuses on the use of Satellite Laser Ranging. Due to the relatively large (meter scale) offset between the center-of-mass of the satellite and the reflector where the laser signal is reflected back to the transmitting and receiving ground station, large oscillations in the range residuals are visible. These oscillations show the rotating behaviour of Envisat, and can be translated to its rotational state by reproducing this signal using a corresponding attitude model and the offset between the reflector and the center-of-mass. First, a purely theoretical case was considered where a known simulated orbit and attitude were estimated for various cases in order to validate the use of the estimation scheme. Afterwards, the real Satellite Laser Ranging data of Envisat were used for the time period 2013-2015.","SLR; attitude; estimation; determination; Envisat; space debris; simulation; GEODYN; tumbling","en","master thesis","","","","","","","","2016-12-21","Aerospace Engineering","Astrodynamics and Space Missions","","","",""
"uuid:aaed2149-23ec-4930-9896-937608bf8eb2","http://resolver.tudelft.nl/uuid:aaed2149-23ec-4930-9896-937608bf8eb2","Computation of Landau Levels and Shubnikov-de Haas Oscillations in Quantum Heterostructures","Buijtendorp, B.","Wimmer, M.T. (mentor)","2016","We develop a computational method that can efficiently simulate Landau levels in quantum heterostructures by reducing the problem's dimensionality. We then apply this to a GaSb/InAs/AlSb broken gap quantum well in the trivial and inverted regimes. This heterostructure can be tuned into a 2DTI phase, and has possible applications in topological quantum computing. In the Landau fan of the inverted regime we observe the hole band shifting into the electron band, and an electron state crossing the gap to the hole band and vice versa. In the trivial regime we study the magnetic oscillations in the density of states near the Fermi energy, and observe a pronounced beating for a broadening of 0.3 meV.","computational physics; landau levels; landau quantization; heterostructures; simulation; shubnikov-de haas oscillations; density of states oscillations; kwant","en","bachelor thesis","","","","","","","","","Applied Sciences","Quantum Nanoscience","","Theoretical Physics Group","",""
"uuid:0da53df4-df77-4afd-9bc0-89ccca593719","http://resolver.tudelft.nl/uuid:0da53df4-df77-4afd-9bc0-89ccca593719","Thermal Simulation of Low Concentration PV/Thermal System using a Computational Fluid Dynamics Software","Stylianou, S.","Smets, A.H.M. (mentor)","2016","The Cogenra company has created a low concentration PV/Thermal system that produces both thermal energy and electricity simultaneously, mainly for commercial and industrial applications. The combination of photovoltaic modules, solar thermal collectors and concentrating mirrors makes it a complicated system to study. So far, only simple studies have been made on Cogenra’s LCPVT system, including a 2-dimensional model. In this project, the possibility of using Computational Fluid Dynamics software for analysing the low concentration PV/Thermal system of Cogenra has been studied. The CFD software Ansys Fluent has been used, in which a model was created in accordance with the Cogenra LCPVT system. After validating the results, the model has been used for analysing the system’s performance under various conditions in order to identify the system’s losses. Furthermore, due to the numerous components of the system, the analysis of the LCPVT system becomes a multi-variable problem. For this reason, three main parameters (mass flow rate, optical concentration, PV type) that affect the system’s performance have been chosen and studied in order to improve the system’s overall efficiency. Since the system has both electrical and thermal outputs, an equivalent efficiency was defined to express the two different efficiency terms. For the purpose of comparing the performance of the LCPVT system with traditional photovoltaic modules, a second, simpler model was created in Ansys Fluent. This model has also been simulated under the same conditions as the Cogenra system in order to observe the difference in output between the LCPVT system and the photovoltaic modules.
The low concentration PV/Thermal system has also been compared with other solar thermal systems such as a PV/Thermal system, a Concentrated Thermal system and a simple Solar Thermal System.","CPVT; CFD; thermal; electrical; efficiency; concentration; simulation","en","master thesis","","","","","","","","2016-08-30","Applied Sciences","Electrical Sustainable Energy","","Sustainable Energy Technologies","",""
"uuid:bf1b4f72-ce54-4f1e-b097-b269add01738","http://resolver.tudelft.nl/uuid:bf1b4f72-ce54-4f1e-b097-b269add01738","Modeling and simulation of intrafusal muscle fiber using a multi-sarcomeric model","Oborin, N.","De Vlugt, E. (mentor)","2016","The muscle spindle is an organ of proprioception that plays an important role in the neuro-muscular control of the human joints. It is composed of intrafusal fibers, the mechanical properties of which determine the afferent response of the spindle. Intrafusal fibers are not homogeneous: they are composed of many sarcomeres, have localized fusimotor innervation and have varying myosin composition throughout their length. Most models of intrafusal fibers do not take these structural considerations into account and model the fiber as a single sarcomere, thereby omitting potential emergent behavior that arises from a population of sarcomeres. The effects of sarcomere length inhomogeneity on the behavior of the intrafusal fiber are not known. In this study a multi-sarcomeric model is developed and simulated with a varying activation shape along the intrafusal fiber to see whether an emergent behavior is present and how it manifests. Results show that the relative activation of contracting sarcomeres has the largest effect. The fiber model with varied activation showed history dependence arising from the non-homogeneous initial sarcomere length distribution. The model also demonstrated amplitude-dependent behavior under multisine stretches that did not appear in a non-multi-sarcomeric model. In conclusion, it can be stated that multi-sarcomeric models can be beneficial in exploratory studies as they can demonstrate behavior that cannot be described with simplistic models.","modelling; simulation; muscle; intrafusal fibers; multi-sarcomeric","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Biomechanical Engineering","","Biomedical Engineering","",""
"uuid:7ec271ca-2a51-4b89-b4ef-9e2badeb0493","http://resolver.tudelft.nl/uuid:7ec271ca-2a51-4b89-b4ef-9e2badeb0493","Lowering the Turnaround time for Aircraft component MRO services: A case study at KLM Engineering & Maintenance","Van Rijssel, R.E.","Lodewijks, G. (mentor); Beelaerts van Blokland, W.W.A. (mentor); Van Duin, J.H.R. (mentor)","2016","In this thesis a framework is built to find flow improvement measures to lower the turnaround time for aircraft component MRO processes. This framework is tested in a case study at KLM E&M. The main research question that is answered in this research is: What flow improvement measures can be used to lower the turnaround time of components in aircraft component MRO processes such that the average turnaround time can be lowered from 21 to 10 days at KLM E&M? To create this framework, case studies of aircraft component MRO processes were analyzed. Four quantifiable characteristics were found: flow type, number of repair paths, equipment criticality and the moment of work-scope determination. Afterwards, applicable improvement theories were studied against these characteristics and a framework was created for aircraft component MRO process flow improvement. Hereafter, a case study process was researched using the DMAIC cycle. First of all, it is advised to introduce two new KPIs, the 'TAT-waiting time' and the 'On time start', to monitor the waiting time. Furthermore, it was found that the shop has a single piece flow, the process follows a single path, the equipment is not critical and the work-scope is determined during the process. When these characteristics are put into the flow improvement framework, it can be seen that lean, lean in MRO and quick response manufacturing fit the case process best. However, in the case process the work-scope determination should be moved forward to be able to plan the work better and create a pull process.
The selected improvement theories were researched in more detail for flow improvement measures. By using a simulation model it was found that, with a capacity-constrained supermarket system, an increase in technician capacity, and lower disruption times and amounts, it is possible to lower the total TAT of the case process to 10 days on average. It can therefore be concluded that the improvement framework works for this case. For further research it is recommended to investigate the use of other simulation software and to expand the simulation model to the total component MRO supply chain. Furthermore, it is advised to test the framework on other processes within KLM E&M and at other aircraft component MRO companies.","flow improvement; lean; MRO; aircraft component; turnaround time; simulation; DES; simio; maintenance","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Transport Engineering and Logistics","","Transport, Infrastructure and Logistics","",""
"uuid:2b97a9a9-ea55-4803-88b5-f4e0b71ad26f","http://resolver.tudelft.nl/uuid:2b97a9a9-ea55-4803-88b5-f4e0b71ad26f","Atrium Deltion Revised: Comfort Research and Design Intervention for the improvement of indoor comfort in the atrium of Deltion college","Boschman, B.E.F.","Van den Engel, P.J.W. (mentor); Schnater, F.R. (mentor); Hordijk, G.J. (mentor)","2016","The current building industry needs drastic transformation to meet new performance standards regarding energy use and indoor climate requirements. It so happens that even with new buildings, the results leave something to be desired. This Master thesis describes the specific case of an atrium in the Deltion College building. The research is carried out to find which climate requirements are not met and how they can be improved. The main focus is on visual and thermal comfort. The design is the result of a design-by-research approach in which the maximum result is achieved with minimal interventions. Simulations are used to find the most comfortable solution with the highest energy efficiency. This research shows an example of how a building can be made more sustainable while improving indoor comfort.","atrium; sustainable; design intervention; building technology; simulation; indoor climate","en","master thesis","","","","","","","","","Architecture and The Built Environment","Building Technology","","Climate Design","",""
"uuid:f2fcf3f0-dfc5-4a75-aab3-cffc2e1ad348","http://resolver.tudelft.nl/uuid:f2fcf3f0-dfc5-4a75-aab3-cffc2e1ad348","Cost Effective Attitude Control Validation Test Methods for CubeSats Applied to PolarCube","Clarke, M.A.H.","Guo, J. (mentor)","2016","The problem of testing the performance of an attitude control system presents new challenges and opportunities when conducted on a CubeSat scale. This is the result of drastic reduction in physical properties (size, mass, torques) as well as project resources (funds, manpower, time) when compared to traditional satellite programs. This thesis presents an analysis of the problem of validating active attitude control of a CubeSat before launch and a proposed methodology that is demonstrated on the Colorado Space Grant Consortium’s PolarCube satellite. In order for an attitude control system’s performance to be measured, it must be provided with a physical environment that allows the system to act similarly to how it would in orbit and its behavior must be recorded in such a way that metrics of performance can be derived. To date, published tests of this nature on CubeSats have been limited in their precision due to uncertainty in external torques on the attitude control system and have mostly been conducted on commercially available attitude control system modules. A string suspension testbed was chosen to provide a simulation of microgravity that allows the system to rotate free of friction. This thesis builds on the practices for string suspension testing developed for the MicroMAS CubeSat mission in which a ""fit-predict-fit"" method of producing metrics of attitude control system performance was first implemented for CubeSats. The project set out to identify and solve points of failure that were limiting measurement performance of the tests conducted on the MicroMAS system and ultimately produce more accurate measurements and predictions of testbed and attitude control system dynamic response. 
An engineering model of the satellite bus was designed and built to provide independent power, wireless communication and data handling to the attitude determination and control subsystem. An attitude determination method was developed using MEMS magnetometers, accelerometers and rate gyroscopes to operate within a laboratory environment. A model of the dynamics of the test model’s behavior in the testbed was created to generate predictions of the test model’s response to test conditions, act as a platform to compare measured and expected test results, and verify the attitude determination method. Attitude determination performance was determined through a combination of direct testing and dynamics modeling in software. The methods found a maximum (worst case scenario) heading determination error of 4.6° after feed-forward correction based on characterization tests. Oscillation tests were used to determine the external torque properties of the string suspension testbed to within two significant figures, a drastic improvement in performance compared to the MicroMAS test results. Less than $300 was spent on hardware dedicated to testing. The overall system is marked by its simplicity and cost-effectiveness. The results will render attitude control validation testing, and consequently the use of active attitude control, more accessible to future CubeSat missions.
Improvements in performance when compared to the MicroMAS test results were identified as the result of more robust and flexible software modelling of string suspension testbed dynamics, improved methods of characterizing testbed external torque properties as well as improved attitude determination performance.","attitude; control; CubeSat; string; test; hardware in the loop; determination; testbed; method; microgravity; simulation; validation; XBee; Arduino; PolarCube; Space Grant; TU; Delft; Boulder; pointing; magnetometer; imu; system; ALL-STAR; COSGC; nanosatellite; satellite; reaction wheel","en","master thesis","","","","","","","","","Aerospace Engineering","Spaceflight","","Space Engineering","",""
"uuid:c9faa1b9-c543-4438-b3b9-3f8d2631158c","http://resolver.tudelft.nl/uuid:c9faa1b9-c543-4438-b3b9-3f8d2631158c","Aircraft Engine Combustor Maintenance: A model to measure MRO turnaround time","Mogendorff, W.A.","Lodewijks, G. (mentor); Beelaerts van Blokland, W.W.A. (mentor); Van Duin, J.H.R. (mentor)","2016","This thesis involves a case-study of aircraft engine combustor maintenance at KLM E&M, which has been used as a basis to develop a discrete event simulation model that allows TAT to be measured, and the effects of changes to the main process value drivers to be successfully tested. The main research question this research sought to answer is: What are the value drivers that determine the turnaround time of the aircraft engine combustor maintenance repair and overhaul process from a Lean Six Sigma perspective? From literature and preliminary research it was possible to identify TAT value drivers and define performance criteria. The main value drivers have been found to be capacity, capabilities and components. Planning and routing have been defined as influential factors that aid in steering the process. In order to come to this answer the current state of combustor maintenance at KLM E&M has been analysed, using the Six Sigma Define, Measure, Analyse, Improve, Control (DMAIC) framework. A conceptual framework including the value drivers was developed. Using this framework and Lean Six Sigma tools the combustor maintenance process has been analysed in order to define the main relationships between value drivers, as well as the current state performance at KLM. In order to simulate the process and test the effects of changes to the value drivers the current state process was modelled in Simio. Using this model it has been possible to define TAT and to test the influence of the value drivers. 
This has led to recommendations regarding how TAT can be reduced.","Lean; Six Sigma; maintenance; MRO; simulation; KLM; DES; Simio","en","master thesis","","","","","","","","2016-04-30","Civil Engineering and Geosciences","Transport, Infrastructure & Logistics","","","",""
"uuid:33ca082f-ec20-4212-8947-ce4898039d8f","http://resolver.tudelft.nl/uuid:33ca082f-ec20-4212-8947-ce4898039d8f","Performance of Existing Integrated Car Following and Lane Change Models around Motorway ramps","Oud, M.","Farah, H. (mentor); Van Beinum, A. (mentor); Hoogendoorn, S.P. (mentor); Wiggenraad, P.B.L. (mentor); Happee, R. (mentor)","2016","Models are an important tool for decision making. However, in order to get proper results, these models must be validated and only be used in situations where the conditions of the validation apply. Blind trust in a model can lead to unexpected and inaccurate results. Advancements can be made to reduce the number of situations where this occurs, not only by making the models more accurate, but also by doing more field studies for validation of behavioural aspects of the traffic. One of these aspects is the process of lane changing and car following behaviour. These two aspects determine the general longitudinal and lateral driving behaviour. Mathematical models that describe these types of movements for each individual vehicle provide the building blocks for microscopic simulation. In most models, these two aspects are modelled independently, but newer models, such as the integrated driving behaviour model (Toledo, 2003), attempt to mould this into an integral decision structure. This research attempts to validate the lane changing and car following behaviour of three models: FOSIM, VISSIM and the aforementioned integrated driving behaviour model. These models are compared against a dataset from TNO of the motorway A270, in situations where free flow conditions apply. The models are tested on the desired speed distribution, the merging point distribution, the accepted gap distributions and the lane change distribution. The lane changes that are found are classified by their distinctive causes, the so-called “triggers”. Six triggers are defined for lane-change classification.
The main result is that calibration and validation play a major role in the validity of the models. For all tested simulation packages, the default parameters did not reflect the observed data. This means that the driver’s attitude and the traffic conditions have a large impact on the general driver behaviour. In free-flow traffic conditions, Dutch drivers tend to be risk-averse, as reflected in the low number of voluntary lane changes and the wide gap acceptance distribution. This risk-averseness is usually not part of a model’s default parameter set and therefore calibration is essential to simulate the traffic correctly. Furthermore, the different triggers helped to get a clearer view of what types of lane changes occur, where, and why they occur. The FOSIM simulation results show that this model has serious limitations. A main point is that this model is too deterministic regarding driver characteristics. Although in theory probabilistic factors could be added to the model, further advancements of the model, such as implementing probabilistic behaviour, require reprogramming of the simulation package, which was not possible within this research. VISSIM gave better results, but it over-estimates the number of voluntary lane changes in free flow conditions on Dutch motorways when using the default behavioural parameters. Further calibration of these parameters partially corrected this error, but the remaining estimation errors differ per voluntary lane change trigger: courtesy and speed gain related lane changes are under-estimated, while lane changes to keep right are over-estimated. Furthermore, the gap acceptance behaviour was not much improved. This may indicate that other boundary conditions, such as traffic generation, were wrongly assumed in the simulation. One could also argue whether a gap selection algorithm could improve the accuracy. Further research is required to test these hypotheses.
The Integrated driver behaviour model could not be completed within the time constraints of this research, but analysis of the car-following aspect of this model shows that it has some limitations that could be easily solved by several counter-measures. Driver observation and acceleration behaviour issues could be solved by integrating psycho-physiological factors into the model, such as observation thresholds and multiple acceleration regimes. A main recommendation is to perform more validation research on current models to gather more calibrated parameter sets for a wide range of traffic conditions. The data collection method used, road side cameras, was sufficiently accurate to gather enough data for this research. This method can be widely applied in other research too, with different camera mounting points such as lamp posts and sign gantries. The triggers that have been defined in this research could be used in other studies to find out whether there are differences in driver behaviour for each trigger. However, within this research, 10 to 15 percent of the lane changes could not be classified into one of the six triggers. This may indicate that there is either a classification error or a missing trigger. For VISSIM, ranges of recommended parameter values for Dutch traffic in free-flow conditions have been found and are provided in this study.","simulation; lane change; behaviour; calibration; model","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","Transport and Planning","",""
"uuid:97bbbf1e-eed8-4f0b-be61-d3f222aab929","http://resolver.tudelft.nl/uuid:97bbbf1e-eed8-4f0b-be61-d3f222aab929","Vessel routing for sweeping of marine litter in a port area","Van Tol, M.C.M.","Duinkerken, M.B. (mentor); Negenborn, R.R. (mentor); Lodewijks, G. (mentor)","2016","In the literature it is widely accepted that 80% of marine litter originates from land-based sources. Since seaports are usually strategically situated with an inland waterway connection, it is not surprising that seaports have to deal with marine litter. Port authorities would like to avert excessive amounts of marine litter by sweeping it. In this way the risk posed to vessels and the negative environmental impact of marine litter are reduced. Nowadays, sweeping vessels are usually only deployed after complaints about excessive amounts of marine litter. In this paper an innovative routing method is proposed to sweep marine litter in a port area proactively. This routing method makes use of input from a prediction model considering the location and accumulation of marine litter. The routing method is formulated as a mixed-integer programming (MIP) model. To benchmark the performance of the model, simulations are performed in which the performance of different sweeping policies is compared.","marine litter; port; inventory routing problem; mixed integer programming; stochastic programming; simulation","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Marine & Transport Technology","","Transport Engineering & Logistics","",""
"uuid:69af32e0-c8df-45c0-8eaa-ab892d5f8a73","http://resolver.tudelft.nl/uuid:69af32e0-c8df-45c0-8eaa-ab892d5f8a73","Improving the last mile in a public transport trip with automated vehicles using an agent based simulation model: A Delft case study","Scheltes, A.F.","Homem de Almeida Correia, G. (mentor); Van Arem, B. (mentor); Happee, R. (mentor); Wiggenraad, P.B.L. (mentor)","2015","The last mile in a public transport trip appears to be one of the main deterrents to public transport being competitive with other modes of transport. This bad performance can be related to the slow and inflexible character of the last mile. While PRT systems aspire to deliver an on-demand, direct service to a passenger, they are bound to fixed, separated infrastructure and would therefore face high investment costs when applied to the last mile. As automated vehicles can make use of any kind of road that is available, the investment costs are considerably lower. The system presented in this thesis is a last mile transportation service operated by automated vehicles on existing infrastructure. Multiple operational strategies have been simulated using an agent based simulation model in AnyLogic. This simulation model has been applied to the case study Delft Zuid - Technological Innovation campus. For this case study a travel demand survey has been conducted, the results of which serve as one of the main inputs of the simulation model. The outcomes of the simulations indicate that relocating empty vehicles and intermediate charging have a positive effect on the performance of the system on the last mile without compromising any other system performance parameter. Pre-booking of vehicles (via a smartphone app) proved to be very beneficial with regard to the average waiting time for a passenger. However, as vehicles are locked for a longer period, the system capacity decreases.
The speed of the vehicles appeared to be one of the main determinants of the energy use of the vehicles and therefore of the available system capacity. Speed variations have shown large reductions in the average travel time for a passenger.","last mile problem; automated vehicles; public transport; operation strategies; agent based; simulation","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","","","51.99149, 4.36398"
"uuid:c8ce58b7-3ef8-429a-b61a-36cd3675077c","http://resolver.tudelft.nl/uuid:c8ce58b7-3ef8-429a-b61a-36cd3675077c","Probabilistic downtime analysis for complex marine projects: A state-of-the-art model based on Markov theory to generate binary workability sequences for sequential operations","Rip, J.","Jonkman, S.N. (mentor); Van Gelder, P.H.A.J.M. (mentor); Morales Napoles, O. (mentor); Hendriks, A.J.H. (mentor)","2015","A marine operation encounters downtime if its operational limit is exceeded during project execution. Accurate information about the expected downtime during a marine project is important in the tender phase of such a project. The purpose of this thesis is therefore to give insight into the available methods for downtime analysis in different categories of marine operations and to examine the applicability of a new stochastic model for use in downtime simulations for complex projects. The proposed `modular Markov model' with linked Markov chains abstracts the actual metocean conditions by producing binary `workability sequences' for each operation (i.e. states representing whether a time step is `workable' or `not workable' given the operational limit of the operation). This model is able to generate workability sequences for individual operations and for both coupled and uncoupled sequential operations, with operational limits determined by any number of metocean parameters. A small-scale validation on 6 operational limits and an application to a case-study project (offshore wind farm foundation installation) showed that workability sequences of individual operations are described realistically (i.e. including seasonality and persistency) and that the model yields promising results for it to be suitable for analysing the downtime risk of complex marine projects.
It is however recommended that the validation is extended and that an uncertainty analysis is performed (to quantify the parametric and model uncertainty).","downtime analysis; marine operation; marine project; operational limit; sequential operations; parallel operations; workability sequence; Markov chain; linked Markov chains; offshore wind farm; simulation","en","master thesis","","","","","","","","2017-12-11","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:b1c0303b-cd4d-4608-90b7-41a9fbe32215","http://resolver.tudelft.nl/uuid:b1c0303b-cd4d-4608-90b7-41a9fbe32215","The dynamic modeling of diesel engines in support of risk control in adverse conditions: TU Delft collaboration in the SHOPERA program","Kouroutzis, S.","Visser, K. (mentor)","2015","Ship accidents are caused daily others not so important with only scratches on the ship hulls and others are fatal for human life and also the environment. Some of the accidents are caused due to human error, but many are generated due to extreme conditions and to the minimized capabilities of the ships' designs. Ships seem to be underpowered in certain conditions with limited maneuverability as a result of being either old ships with less advanced equipment or newly built ships that need to satisfy specific regulations. There are regulations as the new EEDI lines which have as main purpose to minimize carbon dioxide emissions, though they still have the probability to impose risks in the propulsion and the maneuverability of new ships. In this thesis the dynamic behavior of diesel engines in heavy weather conditions is examined in order to identify the risks that threaten its functionality and overall the ship's propulsion. Research has been done in defining the engine's operational limits and the diesel engine's envelope. Furthermore, tests were performed on behalf of a European Commission's collaborative project called Energy Efficient Safe SHip OPERAtion (SHOPERA) and both numerical and software tools were designed for this project that define the engine dynamics and characteristics.","diesel engines; heavy weather; EEDI; simulation; SHOPERA","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Marine & Transport Technology","","Mechanical Systems & Integration","",""
"uuid:d17f95fe-25e1-4d5d-98b0-f03e0bfb50f4","http://resolver.tudelft.nl/uuid:d17f95fe-25e1-4d5d-98b0-f03e0bfb50f4","Using a decision tree to analyse results of a simulated execution of operational planning decisions of a container terminal","Van Rhijn, R.A.","Verbraeck, A. (mentor); Lukosch, H.K. (mentor); Li, S. (mentor); Saanen, Y.A. (mentor); Zutt-de Fockert, F. (mentor)","2015","Container terminals are important nodes in the worldwide supply chain. The planning of the daily operations of a container terminal is a complex task and sub optimal decisions can be made due to interconnected sub processes, interrelated decisions, time and uncertainty. If suboptimal planning decisions are made in the planning, consequences will occur during the operation, which cause a decrease in performance. The perfor-mance of a terminal is measured in average Quay Crane (QC) performance. The QC performance determines the turnaround time of vessels. A delay in the turnaround time can cause huge financial loses and a decrease in the reliability of the service a terminal offers. To overcome sub optimal planning decisions, a simulated execution of a planning can be used. If suboptimal planning decisions are made, the consequences will occur during the simulated execution. By analysing the results of the simulation these consequences can be recognised, and thereby the suboptimal decisions can be identified. These decisions should be revised to prevent recurrence of the consequences in the real operation. This can only be done after the planning is finished and before the operation commences, 2-3 hours before the operation. If used not more than 2 Hours are left in the planning process to simulate the planning, analyse the results and revise planning decisions. A simulation takes 15 minutes, including overhead time and performing 3 iterations leaves only around 10 minutes to analyse and 10 minutes to revise decisions. 
The problem is the lack of a method to use the results of the simulation in such a short time frame. The aim of this thesis is to develop a method that makes it possible for a planner to identify the consequences, define a suitable revision and implement it in the short amount of time available before the operation commences. The research question is stated as follows: How can simulation be used to improve the operational planning of a container terminal before the real execution? The method chosen to solve this problem is a decision tree, which can reveal patterns, identify an assignable cause and define a suitable solution. The developed decision trees are able to reveal the consequences in the results of a simulated execution by checking statistics of the simulated execution against thresholds. A sequence of checks can lead to 3 types of decisions: i. A revision of a planning decision is proposed if the QC performance is lower than desired, the cause could be identified and a solution is available. ii. The tree proposes to lower the planned performance if the planned performance could not be reached, but no cause was identified or no solution is available. iii. Do nothing is proposed if the desired QC performance is reached; in this case there is no need to make adjustments. To develop this decision tree seven steps are performed: 1. Analysis of the operation and planning, 2. Define the solution space, 3. Perform a root cause analysis, 4. Connect the causes with solutions, 5. Develop the decision trees, 6. Set the thresholds, 7. Evaluate the decision trees. The developed decision tree is evaluated by a case study and expert opinion interviews. In the case study two planning datasets are improved by the use of the decision trees; one reached an increase in QC performance of 1.1 containers/hour and the second of 2 containers/hour. The experts indicated that if the method were developed further, it would have the potential to solve the problem.
The main conclusions are: - By analysing the results of a simulated execution of planning decisions with a decision tree, revisions of suboptimal planning decisions can be defined - After automation of the decision tree, the analysis can be done within 1 minute - The seven steps led to a decision tree that proposed effective solutions during the case study - When the recommendations are processed, the decision tree, together with Plan Validation, has the potential to support planners in improving their planning; in the longer term planners might become more skilled in planning and strategic improvements can be identified. Recommendations for further research followed from both the case study and the expert opinions. The main recommendations can be formulated as follows: - The decision trees developed in this research require improvements before implementation is possible - The decision tree should be automated - The way the solutions are presented to the planner (the interface and visualisation) should be developed further - Plan Validation in combination with the decision tree should be tested in further case studies, by checking it with real planners and by using it on a real terminal","container terminal; planning; simulation","en","master thesis","","","","","","","","","Technology, Policy and Management","Transport, Infrastructure and Logistics","","Engineering","",""
"uuid:803ceabb-64ff-4ee7-a521-ddafe81fb15d","http://resolver.tudelft.nl/uuid:803ceabb-64ff-4ee7-a521-ddafe81fb15d","Storage sharing at import dry-bulk terminals: A case study at Gadani Energy Park","Van den Brand, S.","Duinkerken, M.B. (mentor)","2015","The Gadani Energy Park is a to-be build project in which ten coal-fired power plants are established at one location near the shore of Karachi, Pakistan. In the initial situation, all power plants manage their own on-shore operations. The objective of this study is to gain insight in the advantages and disadvantages of yard collaboration in bulk environments, by using the Gadani Energy Park as an example. The expected benefits gained by collaboration have to be distributed fairly to encourage owners to participate. This is done by using the Shapley value, which proves to provide a feasible cost allocation structure.","dry-bulk terminals; coal-fired power generation; simulation; game theory","en","master thesis","","","","","","","","2017-01-29","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","","Transport Engineering and Logistics","",""
"uuid:7042e0b6-f8cb-4254-a861-e841f7458a2b","http://resolver.tudelft.nl/uuid:7042e0b6-f8cb-4254-a861-e841f7458a2b","Optimizing the design and development of thick film heaters for consumer products","Dorhout, H.","Song, Y. (mentor); Opiyo, E.Z. (mentor); Kloppers, G. (mentor)","2015","Ferro Techniek specializes in the use of enameling techniques in heating elements for consumer and industrial applications. They distinguish themself by delivering customized sub-assemblies for consumer products in the form of Thick Film Heaters (TFHs). These TFHs are used in several consumer products like Nespresso machines, food processors, steam ovens, and many others. The TFHs are flat heating elements that consist of a stainless steel plate, an enamel layer, and a printed electrical circuit protected by a glass insulating layer. The current development of these thick film heaters requires a large amount of prototype and testi iterations, taking up valuable time. Ferro Techniek wants to improve the current product development process by reducing the amount of iterations in the development process and shortening the duration of a project. TFHs are more expensive than the competing heating technologies, but offer benefits that other heating solution cannot, such as a very high power density, compact size and high energy efficiency. These benefits allow designers of consumer goods to design with more freedom and produce more energy efficient products, design aspects which are especially important in the high-end of the market. A very important trend in the consumer goods market is the shortening of the product life cycle caused by consumers who expect original equipment manufacturers (OEMs) to shorten the time between new product introductions. Ferro Techniek, as supplier of heating elements to these OEMs can improve its market position by shortening its own development time. 
The current development process roughly consists of four stages: initiation, study, development and implementation. Mainly during the study and development stages, many design iterations are needed to obtain a proof of working principle under operating and extreme conditions. Two case studies of recent product development processes show that the total development time of one heating element can take up to 3 years, and what kinds of technical challenges occur. The introduction of simulation, in the form of a finite element analysis of the TFH, to the current development process, especially in the study and development stages, can partially replace the iterations needed in the development process and reduce development duration. To be able to simulate the TFHs, the production process is analyzed, because it causes an initial deformation and stress in the enameled plates. A method for determining the elasticity of the enamel is presented, because suppliers of enamel do not provide such data. The bond between enamel and metal is also studied from literature, but could only be defined in a general way, because there is currently no quantitative method of determining the bond strength. Also, the anisotropic behavior of the metal substrate is studied in order to obtain material specifications as input for the simulation. With the enamel's elasticity, the metal's anisotropic behavior and the bond between the two materials defined, a number of square test plates have been simulated and validated, and subsequently the TFH plates. From these plates, the pre-stress and deformation, caused by a difference in thermal expansion of both materials, could be determined. The stress profile shows that the enamel is most likely to fail during usage at the edges of the plates, due to a lower compressive pre-stress at those points.
The thin layered structure of the TFH required a high number of elements in the mesh, which requires high computing power to solve the simulations. A transient thermal analysis is used as a thermal load input for a transient structural analysis, but due to the lack of computing power available during this project, no conclusions about the deformation and stress development during the usage of a TFH can currently be made or validated. The study does, however, show a method of simulating and evaluating the TFH's usage, which can in the future be executed by a specialized external company. Two major conclusions can be drawn from redesigning the product development process with simulations. The first is that implementing simulation can be seen as an additional problem-solving strategy next to the current deduction strategy (e.g. formalized knowledge of laws of physics) and induction strategy (knowledge obtained from testing and prototyping). The second conclusion is that the benefits of simulations go beyond the obvious time saving: (1) Experiment with new and non-conventional possibilities and designs (2) Gain insights into physical properties that otherwise cannot be measured, like internal stress development (3) Discover relations between design parameters (4) Clearly communicate test results. By adding simulations to the development process, Ferro Techniek can improve and shorten their development time, possibly find new heating solutions and thereby improve their market position.","simulation; finite element analysis; enamel; product development process; enameling; thick film heaters","en","master thesis","","","","","","","Campus only","","Industrial Design Engineering","Design Engineering","","","",""
"uuid:7d4f8e30-6f4d-4b02-bfcc-5fd345e8b76c","http://resolver.tudelft.nl/uuid:7d4f8e30-6f4d-4b02-bfcc-5fd345e8b76c","Beyond the borders of electricity: The cross border effects of a German capacity market on the Netherlands","Swager, C.R.","Herder, P.M. (mentor); De Vries, L.J. (mentor); Cunningham, S. (mentor); Iychettira, K.K. (mentor); Helmer, D.N. (mentor)","2014","Within electricity markets, serious concerns exist whether a competitive electricity market will provide the necessary incentives for investment in generation. In Europe several countries are looking at the options of implementing a capacity mechanism. A capacity market provides a possible solution to this problem of generation adequacy but the effectiveness of different methods is disputable and one-to-one comparison is nearly impossible. With Germany deciding on the implementation of a capacity market, concerns arise regarding the cross border effects on the Dutch market. The main research question that stressed the problem is formulated as: “To what extent does the implementation of a capacity market in Germany influence the performance of the Dutch electricity market?” Answer to this research question can help policy makers in assessing policy decision in the electricity market including cross border effects. From literature, several performance indicators are derived. A combination including both system indicators as well as the policy goals reliable, sustainable and affordable are used. The starting point for the modelling part of this research is the Power2Sim electricity market model. In order to be able to answer the research question, the Power2Sim model needs to be extended with two modules. An investment module and a capacity market module. The two modules are modelled in excel and a Visual Basic script is developed to create interaction between the three modules. The data used as input for the model consist of a wide range of reports and empirical data. 
The results of the model show that the introduction of a capacity market in Germany has an effect on the performance of the Dutch electricity sector. The main finding is that a capacity market leads to higher investments in the country in which it is implemented. The cross-border effects include improvements in expected loss of load hours, total consumer costs and CO2 emissions. These effects are enlarged by the expansion of the interconnection capacity between Germany and the Netherlands. CO2 price sensitivity was taken into account as well, resulting in some interesting observations regarding the interaction between a capacity market and CO2 prices. Following the interpretation of the results, several recommendations have been proposed. The recommendations depend on the view of the policy makers. The three indicators reliability, affordability and sustainability involve trade-offs. Consequently, policy makers need to decide which indicator to give preference. A general decision to be made by policy makers is whether they value an independent electricity sector more than the free-rider benefits of being dependent on German capacity. Every study is subject to limitations. For this study, the limitations can be found in the number of scenarios that have been run, the number of countries that have been analysed in detail and the basic way of evaluating investment. Concluding, this study has led to several contributions. This study shows that the Power2Sim model can be extended to fit capacity mechanisms. This can be valuable for policy makers in the electricity market in evaluating decisions concerning the implementation or consequences of a neighbouring capacity mechanism. Secondly, this study has contributed to the validation of existing studies that measure the effect of a capacity market. It furthermore extended the existing research by adding specific cross-border effects for the Netherlands under various conditions.
Finally, the research provides directions for future research on the issues that either cannot be explained or could not be fitted into the current model structure.","electricity markets; generation adequacy; capacity market; cross border effects; simulation","en","master thesis","","","","","","","","","Technology, Policy and Management","Energy and Industry","","Systems Engineering Policy Analysis and Management","",""
"uuid:72411732-3a67-4d80-98a6-dd3fa9d43c1b","http://resolver.tudelft.nl/uuid:72411732-3a67-4d80-98a6-dd3fa9d43c1b","TEP-SIPE for Joint Impedance Identification: Experimental Evaluation","Ragnarsdottir, K.L.","De Vlugt, E. (mentor); Vallery, H. (mentor)","2014","To allow for efficient and robust gait pattern, humans instantaneously modulate their joint impedance. Lower-limb amputees and patients suffering from neurological diseases partly lack that ability. Therefore, prosthetic and rehabilitative devices have been designed to re- store and repair nominal locomotion. For bio-inspired control of these devices, the underlying physiological behavior of the lower-extremity must be identified and quantified. Deeper under- standing is also important to determine appropriate therapy for patients with upper motor neuron diseases. Different identification methods exist to quantify joint impedance during well-controlled, static tasks using continuous random perturbations. However, methods are lacking that which quantify joint impedance during walking The goal of this Master thesis was to experimentally validate a novel method to identify joint impedance during the stance phase of walking. The thesis investigated whether transient, endpoint perturbations applied using an instrumented treadmill were sufficient to identify the dynamics of a single-joint system where all properties were known. The influences from the experimental setup were evaluated and finally, the method was applied in a pilot study of a human subject standing on the treadmill. Results indicate that joint parameters could be consistently estimated with a good fit. How- ever, they were not physically meaningful and did not match the true parameters of the system. This was caused by influences from the experimental setup, which could not be di- rectly subtracted from the data. 
Model simulation demonstrated the sensitivity of the method to measurement noise and that the set of estimated parameters was not unique. The limitations of the method and the available experimental setup were carefully identified in this study. A thorough guideline for the method was developed, which facilitates its use and sharpens the goal for further research.","Transient Endpoint Perturbations (TEP); System Identification and Parameter Estimation (SIPE); joint impedance; validation; simulation","en","master thesis","","","","","","","","2019-11-24","Mechanical, Maritime and Materials Engineering","BioMechanical Engineering","","BME","",""
"uuid:8cf4bc16-5318-46a0-8f1c-c83443bb4a7f","http://resolver.tudelft.nl/uuid:8cf4bc16-5318-46a0-8f1c-c83443bb4a7f","Assessment of Port Marine Operations Performance by Means of Simulation. Case study: The Port of Jebel Dhanna/Ruwais – UAE","Piccoli, C.","Vellinga, T. (mentor); Wijdeven, B. (mentor); Daamen, W. (mentor); Valstar, J.M. (mentor)","2014","The assessment of port marine operations performance by means of simulation is treated in this master thesis project. The navigational services provided to vessel at the access channels, from their arrival to the berthing operations and from unberthing to vessels departure, are investigated and evaluated. In the case study, FlexSim is used as the simulation tool. Since no previous marine operations performance assessment using FlexSim were found in literature, its application for this purpose is tested and evaluated. From the literature review, it is concluded that despite the fact that the use of logistic simulation models is increasing, and many practical applications are identified, the number of publications and academic studies is still limited. By describing extensively the methods used in this graduation project, it is expected that though a case study is analysed, the given research approach can be adapted and harnessed in other problems with similar aims. Case Study - The Port of Jebel Dhanna/Ruwais The production of Abu Dhabi National Oil Company is expected to increase and new terminals are planned at the Port of Jebel Dhanna/Ruwais in order to cope with this progress. The marine traffic is expected to be almost doubled from 2014 to 2030. The approach channels to the port are relatively long and occasionally limited in width and depth. Some sections are restricted to one-way traffic and priority is given to outgoing vessels. Additionally, tidal windows are imposed to deep draught vessels in depth restricted sections. 
With the expected increase in marine traffic, it is possible that congestion of the access channel will become a limiting factor. The evolution of the Port of Jebel Dhanna/Ruwais marine operations performance from 2014 to 2030 is evaluated based on FlexSim results. The conclusion is that performance is acceptable for the forecasted 2030 traffic and no real bottleneck at the access channels is identified; the berth unavailability is the main cause of delay. In order to be able to identify the marine operations bottleneck, the traffic is artificially increased. For the simulated fleet mix, the performance of the marine operations reaches an unacceptable level when 900 vessels per year are added to the 2030 forecast. The existing infrastructure is unable to cope with the demand when 1200 vessels per year are added to the 2030 forecast. In order to make it possible for the port to handle these 1200 extra vessels, three interventions are proposed: routing three vessel classes instead of one through a secondary channel; deepening one of the one-way sections; and widening one of the one-way sections. Six measures, formed by combinations of the three interventions, are simulated. The most effective measure is the combination of all three; however, each measure has a different cost related to it; therefore, only a cost-benefit analysis can indicate the best alternative. FlexSim Evaluation After performing all simulations involved in the graduation work, the FlexSim simulation software is considered to be adequate to perform port marine operations simulations. Realistic results are obtained with more than acceptable simulation times and with a moderate time for implementation, which decreases as the modeller gains experience.
However, the possibility of using default FlexSim functions for implementing the traffic regulations would be appreciated.","simulation; logistic; access channel; FlexSim; port performance; marine operations","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","Ports and Waterways","",""
"uuid:43992ba8-2446-480a-aa65-95c7fa25cb57","http://resolver.tudelft.nl/uuid:43992ba8-2446-480a-aa65-95c7fa25cb57","Multi-chip dataflow architecture for massive scale biophysically accurate neuron simulation","Hofmann, J.A.","Van Leuken, T.G.R.M. (mentor)","2014","The ability to simulate brain neurons in real-time using biophysically-meaningful models is a critical pre-requisite grasping human brain behavior. By simulating neurons' behavior, it is possible, for example, to reduce the need for in-vivo experimentation, to improve artificial intelligence and to replace damaged brain parts in patients. A biophysically accurate but complex neuron model, which can be used for such applications, is the Hodgkin-Huxley (HH) model. State of the art simulators are capable of simulating, in real-time, tens of neurons, at most. The currently most advanced simulator is able to simulate 96 HH neurons in real-time. This simulator is limited by its exponential growth in communication costs. To overcome this problem, in this thesis, we propose a new system architecture, which massively increases the amount of neurons which is possible to simulate. By localizing communications, the communication cost is reduced from an exponential to a linear growth with the number of simulated neurons As a result, the proposed system allows the simulation of over 3000 to 19200 cells (depending on the connectivity scheme). To further increase the number of simulated neurons, the proposed system is designed in such a way that it is possible to implement it over multiple chips. Experimental results have shown that it is possible to use up to 8 chips and still keeping the communication costs linear with the number of simulated neurons. The systems is very flexible and allows to tune, during run-time, various parameters, including the presence of connections between neurons, eliminating (or reducing) resynthesis costs, which turn into much faster experimentation cycles. 
All parts of the system are generated automatically, based on the neuron connectivity scheme. A powerful simulator that incorporates latencies for on- and off-chip communication, as well as calculation latencies, can be used to find the right configuration for a particular task. As a result, the highly adaptive and configurable system allows for biophysically accurate simulation of massive numbers of cells.","Inferior Olivary Nucleus; systemc; simulation; biologically accurate; massive scale; brain modelling; Spiking Neural Network; linear scalability","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","Embedded Systems","",""
"uuid:b1b070cf-7be3-4cb9-994f-c77c1d372a2f","http://resolver.tudelft.nl/uuid:b1b070cf-7be3-4cb9-994f-c77c1d372a2f","How to Win the Game? Strategic, Simultaneous, Many-to-Many, Non-Cooperative Negotiation","Hoogslag, J.F.","De Bruijn, J.A. (mentor); Warnier, M.E. (mentor); Nikolic, I. (mentor)","2014","Negotiations are an everyday phenomena, and yet the process is so complex, ill-structured and even chaotic that analysis becomes challenging. Consequently, we currently lack systematic understanding of the dynamics that govern negotiations concerning multiple issues with multiple participants who behave strategically. The objective of this thesis is to enhance this understanding by investigating the effects of different negotiation setups and possible agent tactics (strategic behaviour).","negotiation; strategic behaviour; tactic; agent; simulation","en","master thesis","","","","","","","Campus only","","Technology, Policy and Management","Technology, Policy and Management","","","",""
"uuid:e03e77ce-25d4-4cb6-91c3-0cfcf9b1784c","http://resolver.tudelft.nl/uuid:e03e77ce-25d4-4cb6-91c3-0cfcf9b1784c","Schiphol's dynamic traffic Management: A case study on rescheduling","De Vries, N.A.","Van Arem, B. (mentor); Goverde, R.M.P. (mentor); Corman, F. (mentor); Wiggenraad, P.B.L. (mentor); Bouman, P. (mentor); Schaafsma, A.A.M. (mentor)","2014","Around the railway station of Schiphol, railway traffic is controlled by an automated dynamic traffic management system (DVM). This system is not optimal for the future service pattern of NS and NS Hispeed. My research explores the possibilities of a new DVM by analysing different rescheduling elements by means of microsimulations in OpenTrack.","Railway Traffic Management; transport; Schiphol; railway; rescheduling; Dynamic Traffic Management; NS; ProRail; DVM; OpenTrack; simulation; rerouting; retiming; reordering; traffic","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","","",""
"uuid:bf3d50eb-c103-4867-a625-b1a6c98b714b","http://resolver.tudelft.nl/uuid:bf3d50eb-c103-4867-a625-b1a6c98b714b","Multidisciplinaire zorg voor kwetsbare ouderen: Ontwerp en capaciteitsbepaling van een poliklinisch proces in het Reinier de Graaf Gasthuis","Bos, J.","Veeke, H.P.M. (mentor); Fransen, A.E.P. (mentor); Boomkens, H.R. (mentor)","2014","The Reinier de Graaf Hospital in Delft has concluded that care for frail elderly with impaired mobility is not always optimal. Due to multimorbidity, traditional monodisciplinary care is not the best possible care. The Reinier de Graaf Hospital set out to improve their care process to serve the frail elderly best. It was found that the target group has a high risk for functional decline, dependence and reduced quality of life. Thus the target group can benefit from short admission and throughput times. In order to deliver the best possible care, multidisciplinary collaboration is found to be essential. Using soft and hard systems theory combined with thorough data analysis, the current process was evaluated. It was found that the average combined admission and throughput time was 196 days for the target group, in this period a patient visited the hospital 8,6 times. The hospital visits were found to be predominantly for outpatient care. The multidisciplinary care was largely delivered in a sequential manner. This leads to a lack of collaboration between medical specialist, to unnecessary long throughput times and to many hospital visits. To improve the quality of the health care logistics, improvements have been proposed in a process redesign. Instead of delivering care in a sequential manner, the redesign allows for parallel multidisciplinary treatment. In order to do so, a triage appointment is suggested at the start of the care route and appointments with multiple medical specialists should be clustered on a single day. 
Short admission and throughput times can be achieved by reserving capacity based on expected patient flow. By means of data analysis the expected future input volume is set at 14 patients per week and the mean capacity required was calculated accordingly. To assess the performance of the proposed design under stochastic influence, a discrete event simulation was developed and verified. The simulation results indicate that if 110% of the theoretical capacity is used, the throughput time for the average patient can be reduced by 50% to 73%, depending on the average wait time between visits. On average a simulated patient visited the hospital 4.4 times for 6.7 appointments. The cost of these improvements is an average occupancy drop of 15% for medical doctors.","health care; logistics; delft systems approach; simulation; discrete event simulation; logistiek; ziekenhuis; Reinier de Graaf; simulatie; care route; zorgroute","nl","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","","Production Engineering and Logistics","",""
"uuid:23b4c2be-dcc7-4e12-a1bb-4b1a7e6f3a71","http://resolver.tudelft.nl/uuid:23b4c2be-dcc7-4e12-a1bb-4b1a7e6f3a71","Project Gaia - Gemeente Delft: Touch","Bholanath, R.; Van der Hoeven, E.; De Quillettes, K.","Eisemann, E. (mentor); Bidarra, R. (mentor); Tutenel, T. (mentor)","2013","A touch table is a perfect way of collaborating within a group. It allows multiple group members to interact with an application at the same time. This in turn allows every member of the group to directly participate in the team process rather than having to send their ideas up the chain of command. An area where such an application might come in useful is in the managing of water flows in a city. This is especially important in Dutch cities, which often lie beneath sea level. If multiple team members could simultaneously see where there were problems with the water flows in the city and if every team member could present their solutions easily and intuitively, this could be very beneficial. For the bachelor project, such an application has been developed by the team. The application shows a model of the city of Delft which is accurate with respect to the actual relative sizes of terrain and buildings. The user can then spawn water at any place on the map and observe how the water flows through the city. But the user can do more than observe: he or she can also change the elevation of the terrain. This will allow a user to see how this change affects the flow of water. The agile programming method Scrum was used in the development of the program. This gave the team the flexibility they needed to successfully develop the program. Every team member was assigned a specific role and a plan was made for every Scrum sprint to make sure that the project would go smoothly. Various tools were also used to this end; these tools handled some of the more mundane aspects of the development or made some hard parts a little easier. 
Even though some things did not go as expected during the development, the program as it is at this moment in time fulfills all the goals that were set up beforehand. This means that a working prototype runs on the touch table of the Gemeente Delft and that every visitor of the i-Lab will soon be able to try out the application.","water; touch; simulation; physics","en","bachelor thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Intelligent Systems","","Computer Graphics and Visualization","",""
"uuid:883796f3-49fc-46de-92f8-cd2bbd592d77","http://resolver.tudelft.nl/uuid:883796f3-49fc-46de-92f8-cd2bbd592d77","Hardware acceleration of simulations of distributed systems","Turi, D.","Langendoen, K.G. (mentor); Dulman, S.O. (mentor); Varbanescu, A.L. (mentor); Iosup, A. (mentor)","2013","In recent years the study of complex systems has gained prominence. Since it is usually difficult to use classical mathematical models to understand these systems, scientists and engineers have to resort to simulations. Currently, programming languages like CUDA C and OpenCL are available to run large scale simulations on a GPU architecture. Alternatively, simulations can be executed on agent-based simulators such as NetLogo. The former enables the writing of highly performant programs, whereas the latter offers a simple language and execution model which is accessible to people with little programming experience. The purpose of this thesis is to devise a proof-of-concept simulator called CudaSimulator, which attempts to create a middle ground between NetLogo and CUDA: one that on the one hand retains the simplicity of NetLogo and on the other hand executes simulations on a GPU architecture. Apart from the general context, this thesis is motivated by a specific demand. The Snowdrop project at the Embedded Software Group, TU Delft, aims to find algorithms that are written in the NetLogo programming language and exhibit certain emergent behavior. In order to find such algorithms, Genetic Programming is used. Since a Genetic Programming framework has to simulate and evaluate a lot of algorithms and this process is usually lengthy, it could be accelerated on a GPU architecture. The CudaSimulator is therefore developed in such a way that it could serve as a backend to the Genetic Programming framework. 
The thesis includes an evaluation of the simulator which shows the situations in which the CudaSimulator is more feasible than alternative solutions.","MANET; distributed; simulation; CUDA; GPGPU; NetLogo","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Technology","","","",""
"uuid:41eb449d-1be2-4bbf-a8b5-32b7bef3d751","http://resolver.tudelft.nl/uuid:41eb449d-1be2-4bbf-a8b5-32b7bef3d751","Sensitivity analysis of residential building simulations: Model choice for specific applications and critical parameters","Fabriek, P.H.","Infante Ferreira, C.A. (mentor); Itard, L.C.M. (mentor)","2013","A sensitivity analysis of a residential building, carried out to identify the critical parameters that influence the building's energy behaviour.","TRNSYS; simulation; influential behavior","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","P&E","","ETh","",""
"uuid:db2cc72c-2786-4481-abd3-b40e07678884","http://resolver.tudelft.nl/uuid:db2cc72c-2786-4481-abd3-b40e07678884","Analysis and Optimization of a Machined Steel Kit Manufacturing Process","Rose, C.D.","Pruyn, J.F.J. (mentor); Voorend, R. (mentor); Jellema, H.P.T. (mentor)","2013","IHC Metalix, a producer of machined steel kits for the shipbuilding industry, is currently in the process of improving production efficiency by modifying an existing crane as well as by installing two additional cranes, a pallet conveyor, and a pallet storage rack. The exact effects of these improvements on the production process have not been quantified. Furthermore, the influences of the order properties on the production process are not known. The goal of this project is to analyze the effect of the process upgrades and order properties on the production process. This project also aims to generate and test the effect of additional process improvements. Different order portfolios were created to represent the current order book of IHC Metalix and possible changes in the order book in the next few years. The influences of these order portfolios on the production process were determined using a sub-process capacity calculation and a simulation model. The simulation model was also used to implement and test further improvements to the process. This study found that the process improvements installed in the past year should increase production capacity by approximately 25%. The plate cutting machines were found to be the process bottleneck for all of the order portfolios. Large ship types with simple structures (such as pipelaying vessels and construction projects) have a positive effect on the production capacity. Small vessels with complex structures (such as yachts, tugs, and inland cruise vessels) reduce the total production capacity. Coasters and dredgers were found to have little effect on the total production capacity. 
To improve the production process, it is recommended that two large part finishing tables are removed to make space for two additional flatrack positions. The printing algorithm of the vector plotters mounted to the cutting machines should also be improved as much as possible. If additional production capacity is required, a separate plate printer could also be installed. These improvements can increase the production capacity up to 18%.","simulation; production process; machined steel kit","en","master thesis","","","","","","","","2018-05-03","Mechanical, Maritime and Materials Engineering","Marine & Transport Technology","","Ship Design, Production, and Operation","",""
"uuid:30267c95-a7c9-40e3-bd3f-1baadd661fc7","http://resolver.tudelft.nl/uuid:30267c95-a7c9-40e3-bd3f-1baadd661fc7","A simulation model of mixed traffic flow at non-signalised intersections, based on the Shared Space approach","De Jong, L.E.","Hoogendoorn, S.P. (mentor); Daamen, W. (mentor); Brookhuis, K.A. (mentor)","2013","The objective of this thesis project is to analyze the traffic behaviour that the Shared Space approach is assumed to induce in a traffic space designed according to its principles, and to determine the impact of this behaviour on traffic performance and safety, isolated from location-specific elements. This is done by means of a new conceptual model and an implementation of it in a simulation model. Shared Space is, if anything, a road design process, with a vision on the functions of public spaces and on the role of stakeholders in designing them. Traffic behaviour and traffic control measures should not severely restrict other functions of public spaces. Shared Space distinguishes between social behaviour and traffic behaviour. Social behaviour could sometimes overrule established traffic rules. This shift may lead to a reduced level of experienced safety. This is supposed to help improve the objective safety levels, based on (reverse) risk compensation. Analyzing Shared Space traffic demands a simulation model that is applicable to an intersection, accommodates different modalities, includes the road and the movements on it, models the impact of the road environment and contains a form of conscious decision-making. In scientific literature, a large number of appropriate models can be found, such as gap-acceptance models, conflict-point models and social force models. The conceptual model emphasises the dynamic relation between two fundamental traffic processes, negotiation and movement. Negotiation between road users will determine their accelerations, and subsequently their speeds and positions. 
All modalities are involved in the negotiation process, assuming sufficient communication to clarify intentions. Every road user has two behavioural variables: an initiative factor and a politeness factor. The initiative factor determines whether the road user, during negotiating, would like to take precedence even if it does not have priority. The politeness factor impacts whether the road user will offer or accept that the other road user will proceed despite its not having priority. Road users have fixed paths consisting of curve points connected by line elements. In the negotiation process, it is calculated whether the trajectories of any two road users will be in conflict within a certain time horizon. If so, both road users can have alternative acceleration patterns, leading to alternative trajectories, which can also be tested for potential conflicts. The two behavioural variables initiative and politeness have an impact on the range and sequence of considered potential accelerations. The simulation model, a combination of object-oriented and procedural elements, is implemented in Matlab. The program generates a graphical user interface, which can be used to adjust input variables and visualise output data. The model's face validity has been analyzed on the basis of simulations of eight traffic situations, concerning following behaviour, conflict handling and the impact of initiative and politeness. The simulation results are largely as expected for moderate traffic demand. The simulation model is applied to four cases in order to determine the impact of varying initiative and politeness shares on two indicators for an intersection's performance and safety: the average speed and the Time Exposed to critical TTC (TET) value. Increased initiative and politeness do not lead to a higher average speed or a lower aggregated TET value. 
There is a significant correlation for individual road users between taking initiative and experiencing a higher average speed and a lower TET value. For politeness, such a correlation was not found. The Shared Space approach can be modelled as a concept and implemented as a simulation program for heterogeneous traffic on a generic intersection. The simulation results do not back up the assumption that the approach would have an impact on traffic performance and safety.","Shared Space; traffic flow model; mixed traffic; non-signalised intersection; simulation","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Transport & Planning","","Transport, Infrastructure and Logistics","",""
"uuid:437f3bf4-108c-41eb-97de-5270d6d3264b","http://resolver.tudelft.nl/uuid:437f3bf4-108c-41eb-97de-5270d6d3264b","Design of a Sustainable Electric Vehicle Charging Station","Bakolas, B.V.E.","","2012","Electric vehicles only become useful in reducing greenhouse gas emissions if the electricity used to charge their batteries comes from renewable energy sources. This thesis was conducted within the electric mobility framework of the Green Village, the project put forward to test the Green Campus Concept. The objective was to design a Station that charges electric vehicles using sustainable energy technologies. To achieve an optimal performance of the selected components, a particular layout architecture was suggested. Additionally, a computer model was developed to simulate the Station operation under varying energy generation and consumption inputs, as established by fitted meteorological data and predicted usage patterns. Simulations were run using the Station model and the corresponding results were analyzed. Finally, the economic aspects of the project implementation were examined and conclusions were drawn regarding the commercialization of its conceptual attributes.","electric vehicles; charging station; renewables; simulation; smart power flow control","en","master thesis","","","","","","","","","Applied Sciences","Fundamental Aspects of Materials & Energy","","Sustainable Energy Technology","",""
"uuid:98f9e876-5826-45bd-92ce-3f65e474582a","http://resolver.tudelft.nl/uuid:98f9e876-5826-45bd-92ce-3f65e474582a","A general RDE-based simulator for statistical timing analysis","Rodriguez Rodriguez de Guzman, J.","Berkelaar, M. (mentor); Tang, Q. (mentor)","2012","Accurate timing analysis of digital integrated circuits is becoming harder to achieve with current and future CMOS technologies. The shrinking feature sizes lead to increasingly important local process variations (PV), making existing methods like corner-based static timing analysis (STA) yield overly pessimistic results. While industry faces the uncertainty introduced by PV with time-consuming Monte Carlo (MC) simulations, this thesis presents a general purpose statistical circuit simulator for accurate timing analysis. This simulator uses a statistical simplified transistor model (SSTM) as its main building block, which allows the accurate modeling of both combinational and sequential circuits, and it is able to perform a fast statistical timing analysis of any input circuit by solving a system of random differential equations (RDE). Different experiments, ranging from simple cells to complex combinational circuits, were conducted to validate the simulator accuracy and performance for the 45 nm CMOS technology. The results are accurate for both deterministic and statistical analysis of the circuit signals, while the runtime is effectively reduced compared to MC simulations.","timing; simulation; statistical; RDE","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics & Computer Engineering","","MSc EE Microelectronics","",""
"uuid:2d568014-8acb-4e8e-9d39-91c76f499a46","http://resolver.tudelft.nl/uuid:2d568014-8acb-4e8e-9d39-91c76f499a46","Simulation of Geochemical Processes during Low Salinity Water Flooding by Coupling Multiphase Buckley-Leverett Flow to the Geochemical Package PHREEQC","De Bruin, W.J.","Zitha, P.L.J. (mentor)","2012","Simulations carried out for low salinity water flooding often do not include geochemical processes. Salt concentration, and thus the salinity, is modelled as a water tracer that does not react with the reservoir formation. The goal of this MSc thesis is to improve the understanding of the influence of geochemical processes on the mixing of formation water and injection water during low salinity water flooding. The geochemical processes taken into consideration are CO2-buffering, ion exchange and mineral dissolution. An initial understanding of the geochemical processes was gained by performing numerous simulations with the U.S. Geological Survey geochemical package PHREEQC. A limitation of this simulator is that it only allows for single-phase aqueous flow. To overcome this limitation, a multiphase Buckley-Leverett simulator has been developed in MATLAB that couples oil-water flow to the geochemical package PHREEQC. Subsequently, the newly developed simulator was used to study the effects of geochemical processes on the increase in oil recovery. In addition, simulations were performed to study low salinity slug sizes and dispersion. Although the low salinity mechanisms are still the subject of extensive research, it is assumed that increases in oil recovery due to low salinity water flooding can be modelled as a change in relative permeability, from oil- or mixed-wet to more water-wet. Simulation results showed that fully removing calcite (calcite content 0.97 Wt%) from the reservoir requires an excessive amount of pore volumes of low salinity water to be flushed through the reservoir. 
Therefore, dissolution of all calcite seems to be a near injector well-bore effect only. In the majority of the case study field, the minimum salinity level reached will be around 910 ppm. Simulations also showed that, during the injection of low salinity water into the case study field, Na+ attached to the cation exchanger is replaced by Ca2+. This is a result of the preferential adsorption of double valence ions when lowering the ionic strength, and decreasing the Na+/Ca2+ ratio in the reservoir. In simulation runs where geochemical interactions were included, higher salinity levels were observed in the reservoir compared to passive salt tracer simulations. In addition to an increase of 160 ppm due to the initial calcite dissolution, a secondary increase due to calcite dissolution as a result of cation exchange was noted. Depending on the amount of exchange sites, significantly higher ion concentrations (≈2000 ppm) were observed. As the low salinity effect is assumed to be triggered solely by the salinity level, including geochemical interactions can therefore lead to a lower low salinity EOR potential. The increase in oil production observed for a non-geochemically affected secondary low salinity injection scheme (1.0 pore volume formation water followed by 4.0 pore volumes low salinity water) is 5.8% of the original oil in place (OOIP) compared to a high salinity injection scheme (5.0 pore volumes of formation water), for low salinity thresholds ranging from 1000-3000 ppm. By including geochemical effects, the amount of incremental oil was 0.5%, 3.2%, 5.7% or 5.8% of the OOIP for a salinity threshold of 1000 ppm, 1500 ppm, 2000 ppm, or 3000 ppm, respectively. This indicates that, especially for low values of the low salinity threshold, geochemical interactions may be of importance for the EOR potential. However, it is important to note that the amount of calcite and number of cation exchange sites have been calculated based on bulk rock data. 
In addition, it has been assumed that the aqueous phase is in contact with all calcite and clay. By doing so, the effects of the geochemical interactions are overestimated. Dispersion was found to be very important for the determination of minimum low salinity slug sizes. However, no accurate dispersion data were available for the case study field to verify the current model. Simulation results showed that frequent (2 days/month) injection of seawater slugs during low salinity flooding may increase salinity levels throughout the whole reservoir above the threshold values, effectively eliminating the increase in oil production. Injecting larger seawater slugs on a less regular interval (2 weeks/year) results in fractions of the reservoir having a higher salinity than the threshold value. However, the overall impact on the cumulative oil production was far less (-0.6% of the OOIP compared to no seawater slugs). An interesting continuation of this project would lie in a detailed study of the chemical composition of the rock surface. As the cation exchange sites are likely to be fewer, the impact of cation exchange induced calcite dissolution on the salinity is reduced. This will result in an increase in the low salinity EOR potential.","low; salinity; water; injection; flooding; oil; eor; ior; phreeqc; geochemistry; geochemical; simulation; buckley; leverett; interactions","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Geoscience & Engineering","","","",""
"uuid:f5819979-5874-497d-9ec5-8645916b9c9a","http://resolver.tudelft.nl/uuid:f5819979-5874-497d-9ec5-8645916b9c9a","Maximum Likelihood Estimation of Linear Time-Varying Pilot-Vehicle System Parameters","Kers, M.","Mulder, M. (mentor); Pool, D.M. (mentor); Van Paassen, M.M. (mentor); Chu, Q.P. (mentor)","2012","","Maximum Likelihood Estimation; Human-Machine Interaction; HMI; aerospace; system identification; parameter estimation; Gauss-Newton; Boltzmann Sigmoid; control; simulation; modeling; LTV; Linear Time-Varying; MLE","en","master thesis","","","","","","","","2016-06-01","Aerospace Engineering","Control & Operations","","Human-Machine Interaction","",""
"uuid:1864aa0e-6617-4640-b409-7773418a3724","http://resolver.tudelft.nl/uuid:1864aa0e-6617-4640-b409-7773418a3724","The added value of simulation in increasing maturity levels of customer service processes","Weijers, R.E.R.M.","Barjis, J. (mentor)","2012","Discrete Event Simulation (DES) is widely used in analyzing and improving business processes. Business process maturity models (BPMM) are used to define maturity levels of business processes and give recommendations on how maturity levels can be increased. While each has proven its efficiency in its own way, very little attention is paid to the complementary role they play when combined. In this paper, we discuss and demonstrate the added value of simulation in conjunction with maturity models. The quantitative added value of simulation models as well as the added value in change management are discussed. We used two case studies from a large financial organization to demonstrate the added value of simulation in conjunction with BPMM. The findings of this research resulted in a set of recommendations about the suitability of DES at different maturity levels. Furthermore, recommendations are given for further research in the field of maturity levels supported by simulation.","simulation","en","master thesis","","","","","","","","2015-01-01","Technology, Policy and Management","TPM","","System Engineering","",""
"uuid:472c6f99-18a9-494c-b526-5b9b986df0e3","http://resolver.tudelft.nl/uuid:472c6f99-18a9-494c-b526-5b9b986df0e3","Predicting the Diffusion of Technology in the Market Adaptation Phase","Mutapcic, O.","Ortt, J.R. (mentor); Rook, L. (mentor); Kwee, Z. (mentor)","2012","Introduction of a new product in the market brings new opportunities to a company but also involves many risks. Many companies cease to exist in this turbulent period, even before the technology reaches mass-diffusion. In this study we look at the processes in the period of pre-diffusion and in particular in the market adaptation phase. With this thesis, we try to identify the set of most important variables at play and use this set to model these processes. We use this to answer how to successfully predict the length of the market adaptation phase, with the data available at or prior to the start of the phase. Three different approaches were tried: the simple audit, the extended audit and Monte Carlo simulation. Our findings indicate that there is no single, minimal and universal set of variables suitable for prediction and modeling. Rather, the minimum set of variables depends on the type of model, type of industry, complexity and the amount of available data. Furthermore, we show that the performance of the models heavily depends on the amount of data available. The main conclusion of this thesis is that a successful prediction of the length of the market adaptation phase is indeed possible but that it requires careful consideration of the quality and availability of the data, the choice of variables for modeling, the initial conditions of the model and the choice of the model itself. 
In our case, the implementation of the Monte Carlo algorithm in the simulation of the process proved to provide the best results.","pre-diffusion; high-tech; simulation","en","master thesis","","","","","","","Campus only","2013-03-01","Technology, Policy and Management","Technology, Strategy and Entrepreneurship","","","",""
"uuid:47cc88ca-c7f7-4255-9153-51ac1df625a3","http://resolver.tudelft.nl/uuid:47cc88ca-c7f7-4255-9153-51ac1df625a3","Port of Rotterdam Anchorages Study: An occupancy evaluation using simulation","Devillé, S.B.","Vellinga, T. (mentor); Daamen, W. (mentor); De Jong, M. (mentor); Verkiel, J.W. (mentor)","2011","In this thesis the current situation in the anchorages in the offshore approach of the port of Rotterdam, especially in terms of occupancy and space use, is assessed by means of analysis of historical data. Using this data analysis as input, a stochastic simulation model is constructed in MATLAB to assess occupancy and evaluate capacity in these anchorages. With the constructed simulation model the current anchoring situation in the anchorages in the offshore approach of the port of Rotterdam is modelled correctly and accurately; furthermore, the model is generic and hence can be used by many other ports as well.","anchorage; anchoring; occupancy; capacity; simulation","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","Ports and Waterways","",""
"uuid:8885f6ba-2c23-4e35-b453-1eaac378e4cd","http://resolver.tudelft.nl/uuid:8885f6ba-2c23-4e35-b453-1eaac378e4cd","Stochastic simulation of delay propagation: Improving schedule stability at Kenya Airways","Schellekens, B.A.J.","Van der Zwan, F. (mentor); Omondi, T. (mentor); Kibati, J. (mentor); Curran, R. (mentor)","2011","A large challenge for passenger airlines is the design of a profitable flight schedule, for example at Kenya Airways, which operates a highly connected hub-and-spoke network. The goal of this research is to create fundamental insight into the stochastic nature of delay propagation in a passenger hub-and-spoke network, which allows airlines to increase schedule stability. A model has been developed to simulate the propagation of delays through a flight network whilst incorporating passenger connectivity. The analogy that can be made is the toppling of dominoes, where the fall of the first domino illustrates the primary delay caused. The conceptual model is an activity-on-node flight schedule representation where a delay can propagate through either the lines of flight or via a passenger connection if the resource is not ready. Through a Monte-Carlo simulation per flight a relation is made and visualized between the duration of the primary delay and the delay severity, i.e. the number of flights the primary delay affects downstream. From this the Expected Delay Severity is derived, a proposed flight robustness metric, where the delay severity curve is weighed against the probability of a primary delay. The simulation has been validated using empirical data from the operations of Kenya Airways. Integrating passenger connectivity is found to be essential in representing the true delay propagation in a hub-and-spoke network. 
Implementation at Kenya Airways can be realized by a pro-active approach that uses the stochastic simulation of delay propagation and makes it possible to control the system impact of disruptions by identifying factors that can improve schedule stability. Several flight-retiming improvements have been made and implemented using the proposed methodology.","robust planning; flight scheduling; simulation; delay severity","en","master thesis","","","","","","","","2012-01-04","Aerospace Engineering","Air Transport and Operations","","Master Thesis","",""
"uuid:9c4e589d-417c-43c8-a19a-0632094dce0e","http://resolver.tudelft.nl/uuid:9c4e589d-417c-43c8-a19a-0632094dce0e","Sustainable tourism simulation in Norway","Tiemensma, S.L.","Thissen, W.A.H. (mentor); Van Daalen, C. (mentor); Peeters, P. (mentor); Mayer, I.S. (mentor)","2011","The tourism sector of Norway is one of the sectors that aim for sustainability and is the subject of an ambitious project. With an estimated 5% of the total global CO2 emissions coming from tourism, this is a good sector to examine for improvements in sustainability. The tourism sector in combination with the government started the project Sustainable Destination Norway 2025. The project was carried out by the Norwegian research group Vestlandsforsking, who involved Paul Peeters, Associate Professor of Sustainable Transport & Tourism at CSTT, for his expertise in sustainable tourism. The Norwegian government wants their tourism sector to be sustainable based on the following goals: a 30% reduction of CO2 emissions in 2025 compared to 2005; 1 million extra inbound tourists in 2025 compared to 2005; and a higher contribution of tourism to the Gross Domestic Product. The question that was addressed in this research is: In what way can insight be gained into the complex system of sustainable tourism in Norway, and how can these insights be communicated to stakeholders in the tourism sector? In a joint facilitating role with Paul Peeters a mediated modelling session was conducted to obtain the input of the Norwegian research group Vestlandsforsking. During this session we created a few rough sketches of the subsystems in the tourism sector. The rough sketches produced in Norway have been used as a starting point and inspiration for the causal analysis. In this causal analysis more insight was created into the influential factors of the tourism system, and the way they affect each other. 
The main dilemma in the tourism system is the large contrast in the goals of Sustainable Destination Norway: attracting more inbound tourists almost directly increases CO2 emissions. Strong measures against CO2 emissions, for instance in the form of taxes on air travel, almost immediately have a negative effect on the inbound tourists, which also influences the tourism revenues. However, more tax on air travel could also lead to more domestic tourists, so the effect on the revenues is more complicated to predict. To get more insight into these relations a computer simulation was made based on the causal analysis. There was a need for a way to quantitatively go in depth into the relations between the different factors of sustainability in the Norwegian tourism sector. In order to make good policy for the future, more insight needs to be created into the behaviour of the system and ways to influence it. Building a System Dynamics model can provide these insights and help the tourism sector create policy to increase the sustainability of the sector. The model has been divided into six subparts: Tourism streams (inbound, outbound and domestic), Global / Norwegian economy, Transport modes (air, public transport and car), Local trips, CO2 emissions, Revenues. Statistical data for the model was produced by experts from Vestlandsforsking; gaps were filled by the expertise of Paul Peeters. The model was built based on System Dynamics and created in Powersim 8, modelling software that is based on System Dynamics flow diagrams. The program numerically solves the differential equations that are produced by the system of stock and flow equations. To build trust in the model several tests have been performed on model structure and model behaviour: a sensitivity analysis, extreme value testing and historical data validation. The model showed weaknesses at extreme values, which is important to keep in mind when using the model. 
In the sensitivity analysis the model outcomes were not troublesome: all variables showed normal sensitivity, some a little more than others, but all within bounds. Most questionable in the verification phase are the derivative constructions that were used to prevent illegal loops in the model; however, they should not affect the outcome of the model. In using the model it became apparent that the tourism sector of Norway has a serious problem getting its CO2 emissions down by 30 percent by 2025. The base run shows a large increase in CO2 emissions, and during the testing of the policy options it became clear that this situation is not easy to improve. Norway has to work together in all policy fields, combining forces to tackle the increase of CO2 emissions and turn it into a reduction. To let tourism experts use this model, a graphical user interface was created that hides the complexity of the simulation model and gives them the opportunity to test the policy measures they desire and see the important outcomes. To communicate the important lessons of the System Dynamics model, a simulation game has been created based on the model. This way people who have never experienced computer modelling will still be able to use it. A second goal of the model is to facilitate discussion among stakeholders from the government and the tourism sector. The game has been tested in a political arena in Norway, where the way the game facilitates discussion has been examined. The simulation game was received very positively by the users and the project team of Vestlandsforsking. In itself the game performed really well; both goals of the game have been accomplished. The game turned out to be a great facilitator for discussion in the first session, and experts in the tourism field felt they had learned many lessons from the game. 
Since it is hard to quantify the results of this first simulation gaming session in Norway, an experimental session at a tourism university in Germany was held, with questionnaires for scientific measurement. The goal of the session was to show the added value of simulation gaming for learning. In three different workshops the students were introduced to the challenges of sustainable tourism in Norway: a classic brainstorming session, a System Dynamics modelling session and a simulation gaming session. To identify which of the three workshops taught the participants the most, these questionnaires, testing knowledge before and after the workshops, have been analysed with SPSS. Even though the results of the experiments were not statistically significant, they did point towards an added value of simulation gaming in communicating the important aspects of a complex system. So the answer to the main research question “What is the best way to gain insight in the complex system of sustainable tourism in Norway, and how to communicate these insights to stakeholders in the tourism sector?” could indeed be the combination of System Dynamics modelling and simulation gaming. The recommendations that follow from this research are: Improvements to the current research: obtain the list of missing data from Vestlandsforsking to be able to perform the historical data validation more accurately; perform the experiments with larger groups and an actual control group instead of three groups that test different sets of policy tools. To really improve the understanding of complex systems for stakeholders it would be interesting to add a group model building part to the simulation gaming. In this project the extreme values as well as the sensitivity of the model have been analysed. At 110% the model performed well, but at 200% the model had problems producing outcomes. 
It would be very useful to know where the problems with the model start, somewhere between 110% and 200%. Possible development directions for the model: try to develop a more general version of the model so the game can be played in several countries, as the results from the conference in Balestrand were extremely positive; a more regional version could also be used in Norway to get the discussion going in all regions of Norway; broaden the sustainability scope of the model, as CO2 emissions as the only performance indicator will soon not be enough for the sustainability debate. Possible development directions for the game: give the departments a budget for their policy choices and assign prices to the policy options; introduce different competing goals for the different departments, since in the described sessions all departments worked towards reaching the emission goal; the presentation of the game could be improved with, for instance, a board-game-like addition; a single-player version of the game could be developed with, for instance, different levels of complexity and the possibility of different objectives of play. Further research: investigate the acceptance of the model in real situations (will stakeholders still be enthusiastic when the outcomes are negative for them?), and what are the differences in learning between the single-player mode and the multiplayer mode of the simulation game?","system dynamics; serious gaming; simulation","en","master thesis","","","","","","","Campus only","","Technology, Policy and Management","Multi Actor Systems","","Policy Analysis","",""
"uuid:5b6801de-6e23-47db-9faf-74f3202127dd","http://resolver.tudelft.nl/uuid:5b6801de-6e23-47db-9faf-74f3202127dd","Developing a Decision Support System for the Logistics Planning of Reel-lay Projects at Heerema Marine Contractors","Sahin, E.","Verbraeck, A. (mentor); Seck, M. (mentor); Cunningham, S. (mentor); Sturm, N. (mentor); Van Zandwijk, K. (mentor)","2011","","simulation; decision support systems; pipe-lay; reel-lay; logistics planning","en","master thesis","","","","","","","","","Technology, Policy and Management","Systems Engineering","","Engineering and Policy Analysis","",""
"uuid:efb44d67-3ab2-41b8-a23f-aac2349f80a4","http://resolver.tudelft.nl/uuid:efb44d67-3ab2-41b8-a23f-aac2349f80a4","Influence of Chemical Reactions on In Situ Combustion: A Simulation Study","Hussain, A.A.A.","Rudolph, E.S.J. (mentor); Khoshnevis Gargar, N. (mentor)","2011","In-situ combustion (ISC) is an enhanced oil recovery process during which air or oxygen-enriched air is injected into a reservoir. The oil in the reservoir reacts with the oxygen, and the so-called combustion front is formed and propagates through the reservoir, generating heat and flue gases. During the process, numerous chemical reactions take place in different zones and temperature ranges. For the description of the process the oil is represented by pseudo components. The definition of the pseudo components defines the reaction schemes implemented in the numerical simulator. The reaction kinetics are described by relatively simple order reactions, for which the reaction rates are calculated using Arrhenius-type equations. Estimating the input parameters of the Arrhenius equation is a major obstacle in ISC modelling. Combustion tube experiments are performed to acquire oil, water and gas production data, the effluent composition and temperature profiles, which depend on the oil and reservoir rock properties. The Arrhenius parameters can be estimated by history matching these experiments. Due to the quite large number of parameters, non-unique solutions are found. Unfortunately, so far the resulting adjusted parameters are not tested to check whether they describe chemically and physically sound, realistic behavior. In this research an ISC tube experiment with an Athabasca bitumen was simulated using a commercial thermal simulator (CMG STARS). The cumulative oil and gas production and the temperature profiles of the experiment were used for verification of the simulations. The first simulation was done with the input parameters as stated by Yang and Gates (2009). 
In this simulation the reaction rate parameters were chosen such that coke formation from asphaltenes by cracking already commences at temperatures of around 343 K, and coke formation from asphaltenes by oxidation at temperatures of around 650 K. Further, in the applied reaction schemes methane combustion is assumed to be up to a factor 10^30 slower than hydrocarbon gas combustion. In this study, the reaction kinetics were changed to see the influence of the kinetic parameters of asphaltene cracking and asphaltene oxidation at lower temperatures. Further, the reaction rates describing methane combustion were set equal to the kinetic parameters of hydrocarbon gas combustion. From these simulations it was found that the hydrocarbon gas combustion reaction does not significantly influence the ISC process. Changing the reaction kinetics of asphaltene cracking and oxidation does influence the ISC process significantly: asphaltene cracking occurs faster and starts at a lower temperature, and more coke is formed and combusted in the simulation, but less oil is produced than in the base case. Furthermore, the injection rate of the air was varied to identify the impact of the fuel/oxygen ratio on the production data. A higher air injection rate shows that the combustion front moves through the reservoir in a shorter amount of time, which indicates that it is possibly economically favorable to inject air at a higher rate into an oil reservoir in which ISC is conducted.","in-situ combustion; simulation; sensitivity analysis","en","bachelor thesis","","","","","","","","","Civil Engineering and Geosciences","Applied Earth Sciences","","Petroleum Engineering","",""
"uuid:2673e7dd-af22-4d35-b3f7-e051d665bd80","http://resolver.tudelft.nl/uuid:2673e7dd-af22-4d35-b3f7-e051d665bd80","Gaining new insights regarding traffic congestion, by explicitly considering the variability in traffic","Miete, O.M.","Hoogendoorn, S.P. (mentor); Vrijling, J.K. (mentor); Van Gelder, P.H.A.J.M. (mentor); Van Lint, J.W.C. (mentor); Taale, H. (mentor); Wiggenraad, P.B.L. (mentor)","2011","In hydraulic engineering it is known that for the evaluation of the performance of a system, a probabilistic approach is preferable to a deterministic one. The essence of such a probabilistic approach is that random variability/uncertainty is explicitly taken into account. In this graduation project, this probabilistic way of looking at a system is applied to the traffic system, in the context of analyzing (ways to alleviate) traffic congestion. Basically, the mechanism behind traffic congestion can be described as a process of interaction between the traffic demand and supply on a road network. Both this traffic demand and supply show a significant level of temporal variability, which makes the resulting traffic conditions variable as well. Traditionally, in evaluations of the effectiveness of proposed congestion relief measures this variability is taken into account only in a limited or simplified way, or even not at all. Often simply a kind of ‘representative’ situation is calculated. The main objective of this research project was to reveal what kind of new insights can be obtained if we actually do explicitly/systematically take into account the variable nature of daily motorway congestion. After a comprehensive study into the sources of the variability in the traffic conditions, and the selection of appropriate performance indicators, a quantification model was developed. The main principle of this model is that a large number of traffic simulations are performed for varying traffic demand and supply values. 
Subsequently, the desired performance indicators are computed from the combined set of simulation results. In order to explore the (potential) new insights obtained by explicitly considering the variability, the developed model was applied to a reasonably sized real-life motorway network. From the results it is clear that a ‘representative’ calculation (in which all demand and supply variables are taken at their ‘representative’ level, which for example could be the mean or median value) does not give a good impression of the performance of the traffic system. It underestimates the congestion in certain respects, and – obviously – does not provide information on the uncertainty in travel times (which is an important factor in the societal costs of traffic congestion). The research has shown that if the variability in traffic is explicitly considered, new insights can be obtained into the relative importance of different (variable) influence factors. This was demonstrated by ‘deactivating’ these influence factors in the model (one at a time). The results of this demonstration indicate that the capacity variations due to the intrinsic randomness in human driving behavior play a central role in (peak period-related) congestion. Such information yields important insights into how traffic congestion can be remedied most effectively. By considering the example of a rush-hour lane, the research has shown that new insights can also be obtained into the effectiveness of specific measures that are proposed to alleviate traffic congestion. It turned out that the ‘traditional’ way of evaluating may actually result in a significant underestimation of the benefits of a measure. The precise nature and extent of the additional/revised insights will be highly context and measure specific, however. Of course, these new insights are not necessarily all positive in nature. Some more negative aspects of a measure could be brought to light as well. 
The above implies that in practice more systematic attention should be given to the variability in traffic, when evaluating the effectiveness of measures that are proposed to alleviate congestion. Because of the complexity involved, this would have to be done by using a model in which the different sources of variability are explicitly accounted for, such as (a further developed version of) the model developed in this project.","traffic congestion; variability; probabilistic; measures; evaluation; performance; traffic demand; traffic supply; variable; representative; randomness; stochasticity; stochastic; effectiveness; alleviate; traffic conditions; motorway; variation; variations; evaluations; simulation","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering / Transport & Planning","","","",""
"uuid:1c5639e7-f7ee-4b9b-b15a-074039906860","http://resolver.tudelft.nl/uuid:1c5639e7-f7ee-4b9b-b15a-074039906860","The Simulation-based Multi-objective Evolutionary OptimizatioN (SIMEON) Framework","Halim, R.A.","Verbraeck, A. (mentor); Seck, M.D. (mentor); Cunningham, S. (mentor); Van Houten, S.P. (mentor)","2010","A powerful combination of simulation and optimization has been successfully applied to solve real-world decision-making problems (Fu et al., 2000; Fu, Glover, & April, 2005). Unfortunately, there are scientific and application problems with this method. Firstly, there is no transparent and formal structure to define the integration between simulation and optimization. Secondly, there are challenges in ensuring a proper balance between the various desired features of the simulation-based optimization method (i.e. generality, efficiency, high-dimensionality and transparency) (Fu, 2002). This research contributes to these problems by providing: 1) the design of a framework that addresses the knowledge gap above; 2) an implementation of the framework in Java that fulfills the aforementioned features. The proposed framework is developed based on Zeigler’s modeling and simulation framework and the phases of an optimization study in operations research. The test and evaluation show that the desired features are successfully satisfied.","framework; simulation; multi-objective; evolutionary; optimization","en","master thesis","","","","","","","","","Technology, Policy and Management","Systems Engineering, Policy Analysis, and Management (SEPAM)","","Systems Engineering","",""
"uuid:d7593370-ccb1-4882-a7ab-67e89386e21b","http://resolver.tudelft.nl/uuid:d7593370-ccb1-4882-a7ab-67e89386e21b","Realising Transformation through Simulation: A universal indicator for transformation by using serious gaming","Scholtes, R.","Janssen, M.F.W.H.A. (mentor)","2010","Municipalities in the Netherlands are required to improve the quality of their public services and become more accessible through the internet. Theory on transformation says that to achieve that goal these government organisations must transform their organisational structure to become more process and chain based. A lot of municipalities appear to be behind the national schedule. The most important reason why these municipalities are behind schedule is that implementation of structural organisational transformation depends heavily on employees understanding the importance and implications of such a change. To improve the process towards a transformed organisation, municipalities can use interventions and methods provided by consultancy firms, such as simulations, which can increase the understanding of the implications and importance of structural transformation in government organisations. However, currently, there is no measurement of the effect simulation games have on the understanding and acceptance among participants. This thesis describes the creation of a questionnaire as a measurement tool that will be able to measure the effect of serious games. It will measure the view of employees on factors that are relevant for transformation in government organisations. During this research, data that was collected before and after “chain simulation” sessions will be used to conduct an analysis to identify factors that were affected during these sessions. In addition, experts on developing and facilitating simulations will be asked to provide insight into the results found in this analysis. 
Based on these analyses, a conceptual model and a research protocol will be presented to explain how the questionnaire can be used.","transformation; serious gaming; simulation; validation; questionnaire","en","master thesis","","","","","","","Campus only","2010-11-04","Technology, Policy and Management","ICT","","SEPAM","",""
"uuid:071e61fe-5242-445f-b5fc-4995c4cf454f","http://resolver.tudelft.nl/uuid:071e61fe-5242-445f-b5fc-4995c4cf454f","Dynamic Portfolio Choice: A Simulation Approach with an Application to Multiple Assets","Nijssen, J.M.","Van der Weide, J.A.M. (mentor)","2010","","portfolio; dynamic; simulation","en","master thesis","","","","","","","","2010-09-17","Electrical Engineering, Mathematics and Computer Science","Applied mathematics","","Kansrekening & Statistiek","",""
"uuid:e28927bf-3d9b-43fe-bffb-22d9038186b1","http://resolver.tudelft.nl/uuid:e28927bf-3d9b-43fe-bffb-22d9038186b1","A Sensitivity Study into Strapdown Airborne Gravimetry","Inácio, P.M.G.","Gunter, B.C. (mentor); Klees, R. (mentor)","2010","Airborne gravimetry is an important tool for the geodesy and geophysics communities. Able to provide medium to high-resolution measurements over large areas, it is the link between the low-resolution satellite measurements and expensive terrestrial campaigns, especially in remote areas. To explore the potential of airborne gravimetry, the Gravimetry using Airborne Inertial Navigation (GAIN) project was recently established at the faculty of Aerospace Engineering at TU-Delft, and is currently building and testing an in-house strapdown airborne gravimetry system with the objective of providing low-cost, high-accuracy gravity data for use in a wide range of applications in geodesy and geophysics. Within this thesis, the inertial sensors that will be used within the GAIN strapdown IMU are calibrated and modeled with a simulator to predict the accuracy of the airborne system when completed. A sensitivity study of several campaign parameters is done to understand which parts of the hardware and operating conditions are critical to the performance of the system. Of the list of applications for airborne gravity data, natural resource exploration is one of the more demanding in terms of accuracy and resolution, with a requirement of 0.5-2mGal at 2km resolution. This is beyond the range of current strapdown systems, so in addition to assessing the performance of the current strapdown system, additional tests were made to see what would be needed to achieve this higher level of accuracy. The simulation results suggest that the performance of the GAIN strapdown system, under ideal conditions, would be 1.4mGal at 2km resolution. 
Furthermore, the performance is limited by the accelerometers, whose accuracy must improve by a factor of three before the 0.5mGal level can be achieved; however, other options were identified that could also be used to achieve this.","airborne gravimetry; inertial navigation; IMU calibration; simulation; sensitivity analysis","en","master thesis","","","","","","","","2012-08-15","Aerospace Engineering","Department of Remote Sensing","","Physical and Space Geodesy","",""
"uuid:6d56368c-a3c4-4e5d-8234-ef70a7f26b78","http://resolver.tudelft.nl/uuid:6d56368c-a3c4-4e5d-8234-ef70a7f26b78","Standard Cell Behavior Analysis and Waveform Set Model for Statistical Static Timing Analysis","Nigam, A.","Van der Meijs, N. (mentor); Berkelaar, M. (mentor)","2010","As we are moving toward nanometre technology, the variability in the circuit parameters and operating environment (Process, Voltage and Temperature (PVT)) is increasing, causing uncertainty in the circuit performance. Statistical Static Timing Analysis (SSTA) is a category of methodologies to analyse the variations in delay due to PVT variations. This thesis work is part of the MODERN project, which is developing a new SSTA methodology. In this thesis, the variation of the delay in 45nm standard cells is analysed. In industry practice, the Monte Carlo method is often used to estimate the statistical moments. This method needs a large number of simulation iterations, and these simulations are dependent on the parameter distribution. A fast statistical moment estimation method is proposed in this work. The proposed methodology is at least 100× faster than the Monte Carlo method, and its simulations are independent of the parameter distribution. In the SSTA methodology of the MODERN project, the signal waveforms with their variations are preserved at each pin of the standard cell. The concept of a ""set of waveforms"" as a representation of a variable electrical signal is also developed in this thesis work. Possible methods to represent the set of waveforms and their integration with the timing analysis methodology are analysed. The pseudo circuit based representation turns out to be the most compact model. 
A methodology for the analysis of the accuracy and efficiency of the pseudo circuit model is proposed.","STA; SSTA; digital circuit; timing analysis; EDA; PVT; variation; Monte Carlo; 45nm; methodology; simulation; MODERN","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Microelectronics","","Circuit and System Group","",""
"uuid:c25459fa-60c9-4cb3-acd0-ad6035ce6b0f","http://resolver.tudelft.nl/uuid:c25459fa-60c9-4cb3-acd0-ad6035ce6b0f","Modelling and simulation of bone implant healing","The, M.T.A.","Vermolen, F. (mentor)","2010","In this report a model for the ingrowth of a prosthesis in a bone is formulated and simulated. First an overview is given of all the biological processes that are part of this healing process and the external factors that can influence it. Then a first mathematical model is presented in which the mechanical stimuli (one of the external factors that can influence the healing process) are neglected. The model is solved numerically with both the finite volume method and the finite element method. For the finite element method a short introduction is first given to get familiar with this technique. Results for this model are presented, followed by a short discussion of the results and a conclusion. Subsequently the previous model is extended to incorporate mechanical stimuli; this is done by combining it with an elasticity equation. A short introduction is also given to the theory of linear elasticity. The model is then solved using finite element analysis, and finally the results for this extended model are presented, together with once again a short discussion and conclusion.","implant; bone healing; simulation; modelling; FEM; biology; prosthesis","en","bachelor thesis","","","","","","","","2010-09-01","Electrical Engineering, Mathematics and Computer Science","Applied Mathematics","","Numerical Analysis","",""
"uuid:e963bf3a-0663-4310-9725-7c3db9305493","http://resolver.tudelft.nl/uuid:e963bf3a-0663-4310-9725-7c3db9305493","Supporting Workforce Planning With a Simulation Based Tool","Van Dijk, K.J.W.","Verbraeck, A. (mentor); Seck, M. (mentor); Cunningham, S. (mentor); Van Houten, S. (mentor)","2010","An organization’s workforce is one of its most important strategic assets. Getting the right workforce at the right time is a very hard job in a constantly changing labor market. To make this job a little easier, a workforce planning tool has been developed. This tool supports Human Resource specialists by giving insight into how an organization’s workforce will develop in the near future. The tool has been developed building on the expertise of field experts, and it has been validated in several different ways. The tool enables fast, informed decision making that, with the help of the tool’s Excel interface, can be easily communicated to the organization’s management.","simulation; decision support; workforce planning","en","master thesis","","","","","","","","","Technology, Policy and Management","Systems management","","sepam","",""
"uuid:d9b524b0-d2e1-4bde-883a-cc6313a1d8c0","http://resolver.tudelft.nl/uuid:d9b524b0-d2e1-4bde-883a-cc6313a1d8c0","Automated Implant-Processor Design","Dave, D.","Gaydadjiev, G. (mentor); Strydis, C. (mentor)","2010","As we move towards an aging population, it is likely that an increasing number of people will require an increasing diversity of implants, but at a lower cost to society. Also, as computer technology progresses, smaller, more powerful, and less battery-intensive implants can be designed. However, present implant design methodology is highly inefficient at meeting these goals, as it suffers from non-reuse of existing knowledge by relying heavily on custom designs and ASICs. The SiMS project was started with the goal of creating a pre-designed, pre-tested, and pre-certified toolbox of components for biomedical implants that can be assembled in a modular fashion for various application scenarios. One of the most important components in such a toolbox is the processor. Designing such a processor is a non-trivial task, and previous work has concentrated on studying the effect of changing the processor input parameters (such as caches) one parameter at a time. The present work represents a shift in this methodology, as we now allow co-variation in all possible input parameters in order to find optimal configurations in terms of the output objectives: power, performance, and area. Towards this end, we implement ImpEDE -- ""Implantable-processor Evolutionary Design-space Explorer"" -- a framework that performs multi-objective optimization of processor parameters and hence gives as output a Pareto-optimal set of processors. The framework consists of a cache simulator and a cycle-accurate processor simulator running benchmarks and workloads designed for medical implants, in order to simulate the optimization objectives. A popular, highly configurable, multi-objective genetic algorithm, NSGA-II, performs the actual optimization. 
Supporting scripts add modularity by acting as the interface between the genetic algorithm and the simulators, enabling easy replacement with new simulators. The whole framework is parallelized such that extra computation cycles of idle laboratory CPUs can be utilized, giving a considerable speedup without requiring any special hardware. We perform experiments on the non-dominated solution fronts evolved by the framework on a subset of benchmarks, in order to optimize the parameters of the genetic algorithm, with an aim towards speeding up convergence. We also examine the effects of changing the workload size run by the benchmarks. A Pareto-optimal solution front consisting of optimal processor configurations across all benchmarks is found. This front is used as a reference in order to characterize the benchmarks in the ImpBench suite. Finally, the objective space of the reference front is compared to existing implant designs, and a set of ""generic processors"" is chosen such that all the existing implant applications studied can be covered.","implant; pareto; genetic algorithm; design-space exploration; optimization; power; area; energy; processor; simulation","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Computer Engineering","","Computer Engineering","",""
"uuid:34c6ec08-cb78-452e-8581-54be260588eb","http://resolver.tudelft.nl/uuid:34c6ec08-cb78-452e-8581-54be260588eb","Supporting Workforce Planning","Van Dijk, K.J.W.","Seck, M. (mentor); Cunningham, S. (mentor); Verbraeck, A. (mentor); Van Houten, S. (mentor)","2010","The goal of this project is to develop a workforce planning tool. A workforce planning tool is a decision support system (Keen and Sol, 2008) that enables Accenture to make better-informed and faster decisions about its future workforce. The workforce planning tool has been developed with a design science approach. This approach is an iterative process in which advancing insights dictate the direction of the project (Hevner et al., 2004). This process is especially suited for projects where the end result cannot be clearly defined beforehand, because the iterative nature of this approach allows working towards a good solution. Chapter 2 concludes that in order to effectively optimize the organization’s performance, workforce planning can help gain an understanding of the internal and external dynamics of the workforce of an organization. With these insights a workforce that is better equipped to achieve the business goals can be created. In section 1.5 it is argued that a decision support system should support its users in making better decisions, not replace them. Not everyone is an expert when it comes to using IT systems. Therefore, according to section 4.5, a decision support tool needs a good-looking and clear interface to improve the tool’s usage. A simulation model forms the basis of the workforce planning tool. Discrete event simulation has been picked as the most appropriate method of inquiry for the workforce planning tool; Chapter 3 concluded it was best suited to meet the demands that the problem situation put forward. The workforce planning tool is tailored to Accenture’s organization. 
In chapter 4 several analyses have been done to reveal the structure of Accenture’s organization and its goals. Successful adoption of the workforce planning tool by the organization requires an adoption strategy. In order to determine the best strategy, insight is needed into the strategic behaviors of the people influenced by the workforce planning tool. In section 3.1 it is argued that the input data and the output data used for communicating the tool’s results are very important for the tool’s success. Managing this data is thus an important aspect of the design. A figure in the report shows the three important elements the Accenture workforce planning tool should have, and the report further specifies how these three elements are designed and implemented. Another figure shows part of the final results of this project: a screen at which the user can select which experimental scenario to run; changing the scenario’s parameters requires navigation to other screens. The model within the tool has been validated in several different ways. A face validation, in which the model’s basic functionality was checked, has taken place; for this purpose a special testing mode has been developed. The input of the parameters has been checked. Accenture’s experts evaluated the model’s results for correctness. In addition, a historical data analysis has been done to see if the tool was able to recreate what happened in the past. These tests have all improved the reliability of the tool’s results in their own way. Finally, this study has produced several recommendations: 1. Include skills of employees in the tool. 2. Do some research on how the employee demand is calculated. 3. Structure knowledge resources: what skills are present in the current-day organization? 4. 
Improve the portability of the workforce planning tool.","workforce planning; simulation; decision support","en","master thesis","","","","","","","","2010-05-09","Technology, Policy and Management","Systems Management","","","",""
"uuid:21e2a318-b08d-4edb-bebf-fcc672b07c39","http://resolver.tudelft.nl/uuid:21e2a318-b08d-4edb-bebf-fcc672b07c39","Tracer Dispersion: The effect of gravity, inertia & diffusion on fluid flow simulations","Hustoft, L.","Berentsen, C. (mentor)","2010","This project focuses on developing a parallel Navier-Stokes simulator capable of modeling single-phase dispersion in porous media at the pore scale. Many different factors can contribute to dispersion at the pore scale; this study was limited to fluid-flow-related factors. The model, including convective motion and diffusive (Fickian) spreading, also incorporates the influences of gravity, inertia and viscous forces on the motion. The precise interaction between these forces and the way these interactions contribute to dispersion is not fully understood from a theoretical point of view. In order to gain insight into dispersion, pore-scale simulations are performed in a domain consisting of a few grains. The primary objective of the project was to investigate whether the parallel computational power of NVidia graphics cards (GPUs) could be utilized to tackle the computationally intensive Navier-Stokes equations. Mainstream NVidia graphics cards (GPUs) appeared to be unable to model the system of equations due to hardware limitations, so a conventional parallel processor (CPU) code was developed in addition. Based on a small sample of simulations, diffusion appears to be a significant factor controlling the distribution of the tracer. Inertia may also play a significant role, depending on the alignment relative to pore geometry and gravity. For the case we consider, gravity appears to have the least influence on dispersion.","dispersion; tracer; simulation","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Section Petroleum Engineering","","","",""
"uuid:25ed03c9-a5a4-4f4f-8671-a37a846e81bd","http://resolver.tudelft.nl/uuid:25ed03c9-a5a4-4f4f-8671-a37a846e81bd","Coping with uncertainties in the rail sector","Smit, M.","Baggen, J.H. (mentor); Van Wee, G.P. (mentor)","2010","A method is developed that deals with uncertainties concerning possible future train services and the extent to which different infrastructure alternatives accommodate these services. A case study is done for the rail line The Hague - Rotterdam. Amongst other methods, a policy analysis approach, a traffic modeling program and a net present value method have been used. The result is a method that enables the rail sector to determine which infrastructure alternative accommodates which kind of train service. Furthermore, for the line The Hague - Rotterdam this includes three different train services tested on four infrastructure alternatives. Based on the net present values, choices can be made about which alternative to choose when a train service is to be implemented.","rail; simulation; policy analysis; net present value","en","master thesis","","","","","","","","","Technology, Policy and Management","Transport Policy and Logistics' Management","","EPA","",""
"uuid:1b4c982d-8d39-40fc-9184-282f4116a585","http://resolver.tudelft.nl/uuid:1b4c982d-8d39-40fc-9184-282f4116a585","Regularization of Water Flooding Optimization","Malekzadeh, R.","Jansen, J.D. (mentor)","2005","The use of smart well technology to optimize water flooding introduces a large number of control parameters, both in space (well segments) and in time. The problem of finding the optimal control parameters to maximize net present value as an objective function can be solved with the aid of a gradient-based optimization method. Using too many parameters may lead to a large number of local maxima in the objective function, so the gradient-based optimization method may result in suboptimal solutions. In this thesis, proper orthogonal decomposition is applied to regularize gradient-based control parameter optimization by projecting the original high-dimensional control space onto a low-dimensional subspace, thus reducing the number of control parameters. Since a low-dimensional subspace contains fewer local maxima, the solution is more likely to reach a local maximum in the close vicinity of the global solution. To evaluate the efficiency of our proposed method, ordinary multiscale parameterization as developed by Lien et al. (2005) is also applied to the optimization of the control parameters. A multiscale approach starts from optimization of a very coarse representative parameter. Then the number of parameters is gradually increased until convergence is reached. Numerical examples indicate that a regularization approach with the aid of proper orthogonal decomposition may speed up the convergence rate, and may also improve convergence towards the global solution within a shorter optimization time, compared to optimization without a regularization technique. 
The method effectively reduces the control effort by grouping multiple well settings in space and time and treating them as one control parameter.","smart well; simulation; optimization; regularization","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Department of Geotechnology","","Section for Petroleum Engineering","",""
"uuid:83c39948-f5fa-4ef6-a54c-02dfdc30f916","http://resolver.tudelft.nl/uuid:83c39948-f5fa-4ef6-a54c-02dfdc30f916","Evaluating tram schedules with the aid of simulation","Talma, B.J.","Verbraeck, A. (mentor); Kanacilo, E.M. (mentor); Kamerling, W. (mentor); van Duin, J.H.R. (mentor)","2005","HTM is a major player in the public transportation sector of the Netherlands with its tram and bus services. The company is continually working to improve its infrastructure management policies by considering aspects such as vehicle priorities, driver allocation and vehicle schedules. One obstacle in the way of HTM’s continual improvement is that it has no scientific and systematic method of evaluating possible alternatives that might be considered due to infrastructure changes or schedule modifications. Currently, possible changes are evaluated by relying heavily on expertise and historical data. While the use of expertise and historical data in itself is not undesirable, and in fact quite valuable, using these alone means that HTM does not have a quick, consistent and cost-effective way of assessing possible changes to its tram network. This means that proposed alternatives for new schedules cannot be validated or tested before they are implemented. The high implementation costs and the low frequency of schedule changes give ample incentive for an effective evaluation method. This project addressed this need for an evaluation method by developing a simulation framework suitable for modelling tram infrastructure and the accompanying schedules. This simulation framework consists of existing and newly created building blocks that are loosely coupled to the input specification of the infrastructure and schedules. The purposeful separation of the data and the functional processes of the building blocks allows decision-makers at HTM to easily build new models that represent proposed alternatives to the tram network and organisation. 
The developed simulation framework was tested by building a model of a single HTM tramline. This model was used to validate the simulation framework and to evaluate the tram schedules according to a number of performance indicators.","simulation; HTM; public transportation; scheduling","en","master thesis","","","","","","","Campus only","2013-08-02","Technology, Policy and Management","Systems Engineering","","Engineering and Policy Analysis","",""
"uuid:625e82cf-973f-4c7e-9893-b226604063f4","http://resolver.tudelft.nl/uuid:625e82cf-973f-4c7e-9893-b226604063f4","Sand-mud distribution in the Amelander inlet: Sand and mud transport computations in a tidal inlet","Nieuwenhuis, O.","Stive, M.J.F. (mentor); Van Ledden, M. (mentor); Wang, Z.B. (mentor); Winterwerp, H. (mentor); Roelvink, J.A. (mentor)","2001","In the tidal inlets in the Dutch Wadden Sea, the most important sediment types are sand, silt and clay. A distinction between the sediment types is made because clay particles have cohesive properties. Consequently, the clay particles do not behave as individual particles but tend to stick together. Flocs are formed whose size and settling velocity are larger than those of the individual particles. The bed is mostly composed of a mixture of different sediment types. The bed has cohesive properties (and is called a mud bed) if the clay percentage is higher than 5-10%. At present, computer models are not able to simulate the sand-mud distribution in tidal inlets. The bed composition variation cannot be taken into account in the computations. This is caused by a lack of knowledge about the processes and mechanisms that take place during mud transport and sand-mud interaction. Therefore, the goal of this study is to gain more knowledge about the sand-mud distribution in tidal inlets. In this study the tidal basin of the Amelander Inlet has been chosen as the area of interest. The first part of the study consists of a data analysis. The data are taken from the Sediment Atlas (Rijkswaterstaat, 1998) and consist of the grain size distribution at sample points in the tidal basin of the Amelander Inlet. To obtain these grain size distributions, samples were taken every kilometre or every five hundred metres. The samples were taken in the period April-July 1995. The grain size distribution was determined with the Malvern method. It is known that the Malvern method underestimates the finer fractions in a sample. 
The second part of the study consists of using numerical models to try to simulate the observed sand-mud distribution in the Amelander Inlet. In the data analysis, research questions were raised which can be answered with the models in this part. These questions concern the wave penetration, the tidal penetration, the water depth at high water and the level of the sand and mud flats. The computations are made within the Delft3D modelling system. First, hydrodynamic computations are made in Delft3D-FLOW. After that, sediment transport computations are made separately for the sand bed and the mud bed. The main conclusions from this study are: 1) The areas where deposition takes place are mainly determined by the occurring tidal action. The influence of waves is large at small water depths and determines how much and how fast the mud particles are suspended. Whether the mud particles are transported after being stirred up is determined by the actual tidal action. 2) Waves also influence the deposition of mud layers. Waves in the tidal basin spread the mud particles from areas with high wave attack (high concentration) towards areas with low wave attack (low concentration). Due to this redistribution of mud, the mud particles are, on a large scale, transported from west to east in the tidal basin. 3) Qualitatively, the computed mud deposition agrees with the measured mud contents on a large scale. With the help of the second deposition computation and the erosion computation, part of the differences can be explained and the measured mud contents are understood better. 
Further experimentation with the wind speed, with computing different phases in a certain sequence, and with the computation times for the different phases can improve the results and help to understand and predict the measured mud contents more accurately.","tidal inlet; sediment distribution; simulation","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Hydraulic Engineering","","","",""
"uuid:af403d6e-8479-490b-82da-96428bd73234","http://resolver.tudelft.nl/uuid:af403d6e-8479-490b-82da-96428bd73234","Overbank flow in the river Allier: A flow model","Bart, P.J.","De Vriend, H.J. (mentor); Wang, Z.B. (mentor); Van den Berg, J.H. (mentor); Stelling, G.S. (mentor)","2000","For several years the Department of Physical Geography of the University of Utrecht has conducted surveys on the river Allier in France. These surveys always took place during periods of low discharge, because at high or even moderate discharges measurements are impossible. As information on the flow during a flood is important for understanding the river morphology, a flow model of a part of the Allier was made to simulate the flow during a flood. During a survey in the summer of 1998, bathymetric data and flow measurements were collected. With these data a flow model was made and calibrated. The discharge during the survey was approximately 20 m3/s. During the calibration it became clear that the downstream boundary condition (a water level) could not be generated well. This problem was overcome by moving the boundary to a flow measurement section where the water level for a discharge of 20 m3/s was known. However, this left a problem for simulations at discharges higher than 20 m3/s. The influence of an error in the downstream boundary condition was estimated both numerically and with the Bresse approximation. Both methods showed the backwater effect introduced by an error to extend about 1000 m upstream of the boundary. The magnitude of a water level error, however, was shown to decrease rapidly in the upstream direction. To simulate flow during a flood, several simulations were made, both steady state (with a constant discharge) and unsteady state (with a varying discharge). Ten steady state simulations were made, increasing in discharge from 100 to 1000 m3/s. In the unsteady state run the flood of November 1994 was simulated. 
The simulations showed the flow mainly to follow the main channel, leading to an inbank flow pattern. The position of the secondary flow cells - where the bend radius of curvature is smallest - also indicated an inbank flow pattern. Velocities up to 4 m/s were found in the main channel, leading to very large bed shear stresses. At several places the flow was directed onto the point bars. The bed shear stress magnitude here indicated that large grain sizes could be transported onto the point bars. The differences between the steady state and the unsteady state simulations were small. Although there were some differences, the flow pattern and the magnitude of the velocity were the same. This means that for a global impression of the flow pattern at a certain discharge, a steady state simulation is sufficient. This saves a lot of computation time, as the unsteady state simulation requires much more computation time. Armour layers are layers of coarse grains on top of the bed. They were found at several places in the survey area. During the survey a number of the armour layers were sampled. With the aid of the Oak Creek model by Parker (1990), the threshold of motion of the grain sizes within these armour layers was estimated. By combining the Oak Creek model and the bed shear stresses from the flow model, it was shown that the threshold of motion was exceeded for all grain sizes within the sampled armour layers. Also, a rough indication of the surface grain size distribution was given based on the Oak Creek model and the bed shear stresses derived from the flow model. However, the applicability of the Oak Creek model to the river Allier was not tested. This requires sediment transport measurements. For the various coefficients in the Oak Creek model, the literature values were used.","river discharge; simulation","en","master thesis","TU Delft, Civil Engineering and Geosciences, Hydraulic Engineering","","","","","","","","Civil Engineering and Geosciences","","","","",""
"uuid:877e157d-c224-4a6b-9b5e-824e88b111ce","http://resolver.tudelft.nl/uuid:877e157d-c224-4a6b-9b5e-824e88b111ce","Modeling of Complex Reaction Systems: Steam Cracker","Goethem, M.W.M. Van","Van Leeuwen, C. (mentor); Verheijen, P.J.T. (mentor)","1998","Steam pyrolysis of ethane and naphtha is an important chemical bulk process. It produces ethylene and propylene, which are important base chemicals. In order to be competitive, crackers have to be operated at near-optimal conditions. Hence, a simulation program of the process, particularly of the pyrolysis, is very helpful. KTI uses and licenses such a program, called SPYRO*. Development of this program started over 20 years ago. Consequently, it uses a closed model. It has been the objective of this study to investigate the feasibility of the development of an open version of SPYRO. Here, open means that the equations are written in residual form. This greatly enhances the flexibility of the program. For our studies we have used the model of Froment for ethane cracking, because the documentation to make an open SPYRO model was insufficient. This Froment model has been modified so as to improve the modeling of the bends. It has been checked whether the solution of this model would pose any problems. It was found that the index might become more than 1 during integration. As yet, no sound physical explanation has been found for this phenomenon. It also follows from investigation of the index that a start-up problem of the numerical integration exists for the original set of differential equations. We have found a more elegant method than Froment's to circumvent this problem. Moreover, we were able to solve the set of equations for bad initial conditions (equal to the boundary conditions). The ordinary differential equations of the model are turned into algebraic equations using orthogonal collocation on finite elements. This allows the model to be solved with an equation solver. 
The results were compared with various commercial numerical integrators. Excellent agreement was found for limited numbers of sections and collocation points. The speed of solution of the linearized set of model equations depends on the size, the sparsity and the structure of the Jacobian. The latter has an enormous effect on the fill-in of the L and U decomposition matrices. We found a very satisfying structure by modification of the equations and proper arrangement in the Jacobian. On the basis of the above results we may draw the following conclusions regarding the feasibility of the development of an Open SPYRO model. Unfortunately, we had to use a simple model of Froment rather than the SPYRO equations themselves. Nevertheless, we have concluded that such a development is feasible. Within a reasonable time an accurate solution will be found, even with bad starting values. The computation time can be further reduced with a smart initialization procedure.","kinetic systems; simulation","en","master thesis","","","","","","","","","Applied Sciences","Chemical Engineering","","","",""